An optimal FPT algorithm parametrized by treewidth for Weighted-Max-Bisection given a tree decomposition as advice assuming SETH and the hardness of MinConv
Brinkop, Hauke [email protected]
Jansen, Klaus [email protected]
Weißenfels, Tim [email protected]
Department of Computer Science, Kiel University, Kiel, Germany
Friday 13th November, 2020
Abstract
The weighted maximal bisection problem is, given an edge-weighted graph, to find a bipartition of the vertex set into two sets such that their cardinalities differ by at most one and the sum of the weights of the edges between vertices that are not in the same set is maximized. This problem is known to be NP-hard, even when a tree decomposition of width t and O(n) nodes is given as an advice as part of the input, where n is the number of vertices of the input graph. But, given such an advice, the problem is decidable in FPT time in n parametrized by t. In particular, Jansen et al. presented an algorithm with running time O(2^t n^3). Hanaka, Kobayashi, and Sone enhanced the analysis of the complexity to O(2^t (nt)^2). By slightly modifying the approach, we improve the running time to O(2^t n^2) in the RAM model, which is asymptotically optimal in n under the hardness of MinConv. We prove that this is also asymptotically optimal in its dependence on t assuming SETH, by showing for a slightly easier problem (maximal cut) that there is no O(2^{εt} poly n) algorithm for any ε < 1. Assuming the hardness of MinConv, there is moreover no O(2^t n^{2−ε}) algorithm (for any ε > 0) for a broad family of subclasses of the weighted maximal bisection problem that are characterized only by the dependence of t on n, more precisely, all instances with t = f(n) for an arbitrary but fixed f(n) ∈ o(log n). This holds even when only considering planar graphs. Moreover we present a detailed description of the implementation details and assumptions that are necessary to achieve the optimal running time.

Introduction
One well known combinatorial graph problem is the weighted maximal cut problem, that is, to decide whether a given non-negative-weighted undirected graph has a cut greater or equal a given value. The formal definition of a cut is as follows:

Definition 1 (Cut, Bisection). Let G = (V, E) be an undirected graph. A tuple (V_1, V_2) is called a cut of G iff V_1 ∩ V_2 = ∅ and V = V_1 ⊎ V_2. If additionally −1 ≤ |V_1| − |V_2| ≤ 1, then (V_1, V_2) is called a bisection of G.

Definition 2 (Size of a cut). Let G = (V, E, w) be an undirected, non-negative-weighted graph. The size of a cut (V_1, V_2) is given by

    size_w(V_1, V_2) = Σ_{v_1 ∈ V_1, v_2 ∈ V_2, v_1v_2 ∈ E} w(v_1v_2)

The problem maximal bisection is then defined analogously to maximal cut by simply substituting "cut" with "bisection". We will later provide a detailed encoding of the problem together with the underlying machine model. From here on we assume that weights are always non-negative if not stated otherwise. Moreover, we assume that problems are weighted unless stated otherwise.

Hardness assumptions
Before discussing the complexity of variants of the just defined problems, we introduce the complexity assumptions the hardness results are based on. The first one is a classical assumption:
Hypothesis 3 (Strong Exponential Time Hypothesis (SETH)). Let k-sat be the problem to decide if a formula, given in CNF with n variables, where each clause has at most k literals, has a satisfying variable assignment. Then it holds that for any ε < 1 there is a k such that k-sat cannot be solved in time O(2^{εn}).

The other assumption is standard in another area: fine-grained complexity. It bounds the running time of MinConv, which is defined as follows (where [n − 1]_0 = {i ∈ N_0 : 0 ≤ i ≤ n − 1}):

Problem 4 (MinConv).
Input: Sequences (a_i)_{i ∈ [n−1]_0} ∈ Q^{[n−1]_0}, (b_i)_{i ∈ [n−1]_0} ∈ Q^{[n−1]_0}.
Output: A sequence (c_k)_{k ∈ [n−1]_0} where for all k ∈ [n−1]_0 it holds that c_k = min{a_i + b_j : i ∈ [n−1]_0, j ∈ [n−1]_0, k = i + j}.

It is notable that MinConv is a search problem, while sat is a decision problem. There is a maximization variant called MaxConv that is equivalent to MinConv by simply multiplying all values of the sequences by −1. The mentioned conjecture now is that the trivial approach in time O(n^2) for this problem is basically optimal:

Hypothesis 5 (MinConv hardness hypothesis (MCH) [4, Conj. 1, p. 1]). MinConv cannot be solved in O(n^{2−ε}) for any ε > 0.

Related work

Simple-Max-Cut is one of Karp's 21 NP-complete problems [12] and thus
Weighted-Max-Cut is also NP-hard. While the minimization variant of Weighted-Max-Cut can be solved in polynomial time in general [16] and Weighted-Max-Cut can be solved in polynomial time on planar graphs [9, 17], this does not apply to the bisection variants. In particular, Weighted-Max-Bisection is NP-hard even on planar graphs [11] and the minimization version is NP-hard even on d-regular graphs for fixed d [2]. Moreover, both are NP-hard on unit disk graphs [5, 6]. Hanaka, Kobayashi, and Sone showed that Weighted-Max-Bisection is NP-hard for split graphs, comparability graphs, AT-free graphs, bipartite, co-bipartite, and claw-free graphs, and the minimization variant is NP-hard for split graphs and co-comparability graphs [10]. The complexity of
Simple-Max-Bisection on planar graphs is still an open question.

Jansen et al. showed that both bisection problems can be solved in polynomial time on graphs with fixed treewidth [11], given a treewidth-minimal tree decomposition with the number of nodes being linear in the number of vertices as an advice. The running time in [11] is O(2^t n^3) for treewidth t and n vertices. Improving the analysis but without changing the algorithm, [10] achieved a running time of O(2^t (tn)^2). They claim that this running time is optimal in its dependence on t under SETH, but the corresponding proof is significantly incorrect at at least two deductions.

If all weights are integers between 1 and some value W, Weighted-Max-Bisection can be solved in O(8^t (tW)^{O(1)} n^{1.864} log n) time by formulating parts of the problem as MaxConv and using a truly subquadratic algorithm for a special case that follows from the restriction on the weights [8]. Note that this implies that in particular the maximal bisection problem with unweighted edges, that is, every edge has weight 1, is solvable in O(8^t t^{O(1)} n^{1.864} log n). In the same paper it was also shown that if Weighted-Max-Bisection can be solved in time O(f) on weighted trees, so can MinConv, which is considered unlikely to be computable in O(n^{2−ε}) for any ε > 0. In particular, if Weighted-Max-Bisection on weighted paths can be solved in O(n^{2−ε}) for some ε > 0, then MinConv can be solved in O(n^{2−δ}) for some δ > 0. Moreover, assuming SETH, there is for any ε < 1 no algorithm deciding Weighted-Max-Cut (and thus Weighted-Max-Bisection) in time O(2^{εt} poly |I|) for an instance I that has treewidth t, where |I| denotes the size of the encoding of the instance [15].

Our contribution
We show that the algorithm in [10] to decide Weighted-Max-Bisection with advice with running time O(2^t (tn)^2) is not optimal under SETH in terms of the dependence on t, thus contradicting the claim of Hanaka, Kobayashi, and Sone. We show this by presenting an algorithm with running time O(2^t n^2). Moreover, we present the details that an implementation needs to be aware of in order to achieve the desired running time. This includes the discussion of the machine model, encoding of the problem, encoding and limitation of the advice, proper preprocessing, set operations, and handling arrays indexed by sets.

Footnote: Since 1.864 is a rounded value of the root of a polynomial, one could also use the rounding error to compensate the log factor, yielding a running time of O(8^t (tW)^{O(1)} n^{1.864}).

Footnote: We assume that this is a consequence of the following erroneous statement that is implicitly used in [10, Theorem 4, p. 7]: ∀f. (∃ε > 0. f(n, t) ∈ O((2 − ε)^t · n)) ⇐ (∃ε > 0. f(n, t) ∈ O(2^{t−ε} · n))

Theorem 6. There is an algorithm that decides in time O(2^t n^2) if a weighted graph with n vertices has a bisection greater equal than the input parameter γ, when a tree decomposition with ≤ 4(n + 1) nodes and treewidth t is given as an advice.

Note that we assume that the tree decomposition has O(n) nodes, which is also implicitly made in [10, 11]. Such a decomposition does always exist [14, Lemma 13.1.2, p. 149]. We are neither aware of the complexity of the conversion of an arbitrary tree decomposition to one with O(n) nodes nor of a way to ensure this property during the construction.

Assuming SETH we show that the running time of our algorithm is optimal in its dependence on t when the dependence on n is not allowed to be superpolynomial. We do so by showing that this already holds for the maximal cut problem with advice. This was already claimed in [10] but without a correct proof.

Theorem 7.
There is no O(2^{εt} poly n) algorithm to decide Weighted-Max-Cut for any ε < 1, where n is the number of vertices of the graph, even if an advice in form of a tree decomposition with width t and 4(n + 1) nodes is given as part of the input.

We also present a hardness result that is somewhat orthogonal to the previously known results. While Hanaka, Kobayashi, and Sone showed the hardness (no O(n^{2−ε}) algorithm for any ε > 0, assuming MinConv hardness) of specific graph classes that mostly contain the class of trees in [10], we show that even subclasses of planar graphs that do not contain the class of trees are hard. Note that we assume that the advice, a tree decomposition of width t with O(n) nodes, is part of the instance description itself.

Theorem 8. Let f : N → N, f(n) ∈ o(log n). Let L be the set of all "yes" instances to Problem 9 restricted to instances where the corresponding graph is planar and has treewidth t = f(n), where n is the number of vertices of the instance. Then for all ε > 0 there is no O(2^{f(n)} · n^{2−ε}) algorithm to decide L unless MCH fails, even if a tree decomposition of width t with O(n) nodes is given as advice.

Organization of this paper
Section 2 introduces necessary formalisms and notation, also including some special notations. In Section 3 we present our algorithm and show the claimed running time. Then in Section 4 we show a hardness result on weighted bisection on the family of sets that are described by the dependency of t on n, that is, t = f(n) for some sensible function f(n) ∈ o(log n).

Preliminaries

An undirected (unweighted) graph G = (V, E) is a set V of vertices and a set E ⊆ \binom{V}{2}, that is, a subset of the set of all cardinality-2 subsets of V. An (undirected) weighted graph G = (V, E, w) is an undirected graph (V, E) and a weight function w : E → Q^+_0. For the directed variants simply substitute \binom{V}{2} by V × V \ {(v, v) : v ∈ V}. If not stated otherwise, we assume graphs to be undirected and unweighted.

For a graph G = (V, E) (and analogously for the weighted variants), we write G − M for some subset M ⊆ V to denote the graph (V \ M, E ∩ \binom{V \ M}{2}), that is, the graph G after deleting all the vertices of V that are in M. Moreover we write G|_M := G − (V \ M) for the restriction of G to M. To simplify notation, for v_1 ∈ V, v_2 ∈ V we write v_1v_2 := {v_1, v_2} to describe a (possible) edge.

A graph that is connected and has no cycles is called a tree. T = (V, E, r), r ∈ V, is a rooted tree with root r. For any v ∈ V we call the subtree that is obtained by removing all nodes from (V, E) that can reach r without reaching v on every way first, the subtree induced by v. If such a subtree is mentioned to be rooted, the root is v. For a rooted tree (V, E, r), we define the predecessor relation (⪯) ∈ V × V such that v_1 ⪯ v_2 iff v_1 is in the subtree induced by v_2. Note that (⪯) is a (partial) ordering on V.

We assume that the natural numbers N do not include 0 and write N_0 = N ⊎ {0} for the natural numbers unioned with {0}, where ⊎ is the disjoint union operation on sets. Let Q^+_0 be the non-negative numbers within Q, that is, Q^+_0 = {x ∈ Q : x ≥ 0}. The set of numbers from 1 to n for an integer n ∈ N is given by [n] ⊆ N, and [n]_0 := [n] ⊎ {0}. For a set M we write P(M) for its power set.

While A → B denotes a total function from set A to set B, A ⇀ B denotes a partial function from A to B. We write ⊥ for the image of a partial function where it is undefined.

In the context of formal languages, when considering an instance I, let |I| be the size of the encoding of the instance.

The underlying machine model is the RAM model, that is, a random access register machine. In this model there is an infinite number of memory cells that can hold unbounded integers each. Address resolution is possible in O(1), as well as all operations of the x86 instruction set, where one does not make a distinction between "registers" and "memory cells".

Footnote: They restated the statement of [15], more precisely Theorem 3: under SETH the problem Weighted-Max-Cut cannot be solved in time O(2^{εt} poly n) for any ε < 1. They did not take into account that this hardness cannot be directly transferred to Weighted-Max-Cut with advice.

In contrast to
MinConv, in the context of bisection, we start with considering the decision problem:

Problem 9 (Weighted maximal bisection).
Input: (V, E ⊆ \binom{V}{2}, w : E → Q^+_0, γ ∈ Q^+_0).
Question: Does there exist a bisection (V_1, V_2) of G = (V, E) such that size_w(V_1, V_2) ≥ γ?

Defining the formal language for a maximal bisection decision problem contains a bunch of details one has to be aware of. First of all, we need our language to be defined in a way such that the advice in form of a tree decomposition is part of the input. Thus we first have to formally define tree decompositions:

Definition 10 (Tree decomposition). Let G = (V, E) be an undirected graph, I be an arbitrary set and X = (X_i)_{i ∈ I} be a family of sets such that for any i ∈ I one has X_i ⊆ V. Moreover let T = (I, H) be a tree. Then (I, X, H) is called a tree decomposition of G if and only if T has the following properties:

1. Node coverage: ∪_{i ∈ I} X_i = V
2. Edge coverage: ∀ v_1v_2 ∈ E. ∃ i ∈ I. {v_1, v_2} ⊆ X_i
3. Coherence: ∀ v ∈ V. T − {i ∈ I : v ∉ X_i} is connected

Footnote: Some instructions might need some smaller modification on where to store the result, as they originally store the result in a hardcoded register that is not part of their "argument". Also some details regarding the representation of negative numbers might have to be clarified or changed, but this does not matter for this paper as we only use positive values.
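To make the three properties of Definition 10 concrete, they can be checked mechanically. The following Python sketch is illustrative only; the function name and the representation (bags as a dict of sets, tree edges as pairs) are our own assumptions, not part of the encoding defined later:

```python
def is_tree_decomposition(vertices, edges, bags, tree_edges):
    """Check the three properties of a tree decomposition (Definition 10)."""
    # 1. Node coverage: the union of all bags equals the vertex set.
    if set().union(*bags.values()) != set(vertices):
        return False
    # 2. Edge coverage: every edge of G lies entirely inside some bag.
    for u, v in edges:
        if not any({u, v} <= bag for bag in bags.values()):
            return False
    # 3. Coherence: for every vertex v, the tree nodes whose bags
    #    contain v induce a connected subtree (checked by a DFS).
    for v in vertices:
        nodes = {i for i, bag in bags.items() if v in bag}
        if not nodes:
            continue
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:
            i = stack.pop()
            for a, b in tree_edges:
                for x, y in ((a, b), (b, a)):
                    if x == i and y in nodes and y not in seen:
                        seen.add(y)
                        stack.append(y)
        if seen != nodes:
            return False
    return True
```

For the path 1–2–3 with bags {1, 2} and {2, 3} joined by one tree edge, all three properties hold; dropping vertex 2 from the first bag violates edge coverage.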
By convention the nodes of the graph G are called vertices while the nodes of the tree are just called nodes. For a node i ∈ I, the set X_i is called a bag. If the node set of a decomposition is sufficiently small, more precisely if |I| ≤ 4 · (|V| + 1), we call the decomposition small.

A tree decomposition has a specific width, called the treewidth of the decomposition.

Definition 11 (Treewidth). Let G = (V, E) be an undirected graph. The treewidth of a tree decomposition (I, X, F) of G is given by max_{i ∈ I} |X_i| − 1. The treewidth of the graph G, often simply denoted by treewidth, is the minimal treewidth of a tree decomposition among all tree decompositions of G.

Not only is computing a tree decomposition that is minimal in terms of treewidth NP-hard, but so is computing the value of the (minimal) treewidth of a graph [1].

We will assume that vertex sets are always given by the unary encoding of their cardinality, that is, V = [|V|]. This assumption is reasonable as one can transform any input into this form by some simple preprocessing steps, but one will mostly be able to do the transformation while constructing the input graph. The unary encoding is reasonable as a set is usually encoded by listing all its members; thus the intuitive encoding of V would also require Θ(n) space.

Footnote: For example, use an AVL-tree for renaming and rename every element of the input in time O(|I| log n + n log n).

We now need to carefully define the encoding of the instances, which also includes the encoding of the advice. Note that the exact form is crucial for the running times one can achieve and is also strongly correlated to the underlying machine model.

Definition 12 (AdvSTD-Max-Bisection). The language AdvSTD-Max-Bisection consists of all instances to Problem 9 where the answer to the problem is "yes", given an additional hint in form of a small tree decomposition. The encoding is as follows: V = [|V|] is represented by an unary encoded integer of size |V|; E and w are encoded as a set of triples (v_1, v_2, w(v_1v_2)) where v_1 < v_2. Therefore every triple is encoded in 3 memory cells, one for each entry, and the set of triples is "encoded" by simply writing triples after each other in any order. The treewidth t is stored in one cell of memory. The tree decomposition D = (I, X, H) is encoded as follows: again, I = [|I|] is represented by an unary encoded integer of size |I|. For any i ∈ I, the bag X_i is encoded by an increasing sequence of the vertices v ∈ X_i. The edge set H is encoded analogously to the input graph.

The language AdvSTD-Max-Cut can be defined analogously. The languages Max-Bisection and Max-Cut are defined the same way but without the treewidth and tree decomposition as input, that is, without advice.

It is reasonable to assume that tree decompositions are restricted in the number of nodes by O(n), as otherwise some preprocessing can reduce the tree decomposition into such a form. Note that this assumption is important, as otherwise the size of the encoding of a tree decomposition might exceed the overall running time we want to achieve, implying that one cannot completely read the decomposition in our algorithm.

Formally one would also need to state how to recognize where the edge set ends etc., but since this can be done using a standard approach, we omit those details.

Instead of working with the advice we are given directly, we first convert it into another form that is easier to handle:

Definition 13 (Nice Tree Decomposition). Let G = (V, E) be an undirected graph, (
I, X, H) a tree decomposition of G and T = (I, H, r) a rooted tree. Then we call (I, X, H, r) a nice tree decomposition if and only if for any i ∈ I the node i is of one of the following forms:

1. Leaf node: i has no child node in T, that is, i is a leaf of the tree T.
2. Introduce node: i has exactly one child j ∈ I in T and X_i = X_j ⊎ {v} for some v ∈ V \ X_j, that is, i introduces a new vertex v ∈ V \ X_j.
3. Forget node: i has exactly one child node j ∈ I in T and X_i ⊎ {v} = X_j for some v ∈ X_j, that is, i forgets a vertex from X_j.
4. Join node: i has exactly two child nodes j ∈ I, k ∈ I, j ≠ k, in T such that X_i = X_j = X_k, joining two branches of the tree T.

Note that this implies that (I, H) is a binary tree. The conversion of a tree decomposition into a nice tree decomposition can be done in time O(nt) as long as the number of nodes of the decomposition is at most linear in the number of vertices.

Lemma 14 ([13, Lemma 13.1.3, p. 150]). Given a small tree decomposition of a graph G with width t, one can find a nice tree decomposition of G with width t and with at most 4n nodes in O(nt) time, where n is the number of vertices of G.

The algorithm

In this section we present the algorithm and analyze its running time. Implementation details regarding set operations and preprocessing can be found in Subsection 3.2. Until then we assume that we are able to put an element in a new set, intersect and union sets in time O(1), and are able to choose an element of a set in O(t). Moreover, we assume that the power sets are always in non-decreasing order in terms of cardinality and that constructing and ordering them does not need any relevant extra time. When iterating over a subset S of vertices within a bag, we assume that we have at most t + 1 candidates to check, so the running time is O(|S| · time-for-function-body + O(t)).

Let G = (V, E) be an undirected graph and (
I, X, F, r) be a nice tree decomposition with treewidth t of G. Moreover let

    F_i := ( ∪_{j ≺ i} X_j ) \ X_i

denote the set of vertices that have been forgotten when reaching i from below. Furthermore let

    Y_i := ∪_{j ⪯ i} X_j

be the vertex set of the subgraph induced by the rooted subtree induced by i. Note that F_i = Y_i \ X_i.

The objective of our algorithm is now, for every i ∈ I, to compute the function B_i : [|F_i|]_0 × P(X_i) → Q^+_0 where

    B_i(l, S) := max_{S' ∈ P(F_i), |S'| = l} size_w(S ⊎ S', Y_i \ (S ⊎ S'))    (1)

that is, the size of a maximal (|S| + l, |Y_i \ S| − l)-partition of Y_i.

Once these values are computed we can use the following equation to determine the size of the cut of a weighted maximal bisection:

    MaxBisection(G, T, X) = max_{S ∈ P(X_r), l ∈ [|F_r|]_0, |S| + l = ⌊|Y_r|/2⌋} B_r(l, S)    (2)

In order to be able to compute the values B_i(l, S) somewhat efficiently, we specialize the computation for every node type. Also, as the number of entries is roughly of the same order of magnitude as 2^t n^2, we have to ensure that the needed values of size_w do not take too much time. In order to ensure this, for every node i ∈ I we compute the partial function W_i : P(X_i) × P(X_i) ⇀ Q^+_0 that is ≠ ⊥ for at least all necessary values of the domain needed within the computation of B_i.

At some point we will still need the trivial approach to compute size_w, more precisely for the cases size_w({v}, S) for some S ∈ P(X_i) and some v ∈ X_i \ S. This can be done in O(n) using the following routine:

Algorithm 1 Computing size_w({v}, S)
function vertexweight(v, S)                       ▷ total O(n)
    ν ← 0                                         ▷ O(1)
    for all z ∈ neigh(v) ∩ S do                   ▷ O(n + |S| · body)
        if vz ∈ E then                            ▷ O(1), adjacency matrix
            ν ← ν + w(vz)                         ▷ O(1), adjacency matrix
        end if
    end for
    return ν                                      ▷ O(1)
end function

While we will compute the values for B_i bottom up along the tree, the values of W_i do not depend on each other. In the following, for every node type, we describe how to compute W_i and then B_i efficiently, assuming that B_j has already been computed for all j ∈ {k ∈ I : k ≺ i}. Figure 1 shows the running times for the computation of W_i and B_i we are aiming for.

    Node type       | W_i                 | B_i
    Leaf node       | O(2^t n) per node   | O(2^t) per node
    Introduce node  | O(2^t n) per node   | O(2^t |F_i|) per node
    Forget node     | O(1) per node       | O(2^t |F_i|) per node
    Join node       | O(2^t n) per node   | O(2^t |F_j × F_k|) per node, O(2^t n^2) for all nodes

Figure 1: Running times of the computations of B_i and W_i for a node i ∈ I (if there is one child of i, let j be that child, and if there are two children, let j, k be those children of i) depending on the node's type.

Leaf node
Let i ∈ I be a leaf node of T. Since F_i = ∅ the only well-defined case is l = 0. For all S ⊆ X_i we have

    B_i(0, S) = W_i(S, X_i \ S)    (3)

The corresponding code is

Algorithm 2 Computing entries B_i for a leaf node i ∈ I
for all S ∈ P(X_i) do                             ▷ O(2^t · body)
    B_i(0, S) ← W_i(S, X_i \ S)                   ▷ O(1)
end for

with the obvious running time of O(2^t).

While the computation of B_i is quite easy for leaf nodes, the computation of the entries of W_i is a little more complex. Note that we need to compute W_i(S, X_i \ S) for all S ∈ P(X_i). We use the following recursion to do so:

    W_i(∅, X_i) = 0
    W_i(S ⊎ {v}, X_i \ (S ⊎ {v})) = W_i(S, X_i \ S) − size_w({v}, S) + size_w({v}, X_i \ (S ⊎ {v}))    (4)

Algorithm 3 Computing entries of W_i for a leaf node i ∈ I
W_i(∅, X_i) ← 0                                   ▷ O(1)
for all S ∈ P(X_i) \ {∅, X_i} do                  ▷ O(2^t · body)
    v ← choose(X_i \ S)                           ▷ O(t)
    ▷ Computing a_1 = size_w({v}, S)
    a_1 ← vertexweight(v, S)                      ▷ O(n)
    ▷ Computing a_2 = size_w({v}, X_i \ (S ⊎ {v}))
    a_2 ← vertexweight(v, X_i \ (S ⊎ {v}))        ▷ O(n)
    W_i(S ⊎ {v}, X_i \ (S ⊎ {v})) ← W_i(S, X_i \ S) − a_1 + a_2    ▷ O(1)
end for

The running time for one iteration of the loop body is O(n + |neigh(v) ∩ S|) ⊆ O(n) and thus the total running time of this computation is O(2^t n).

Introduce node
Let i ∈ I be an introduce node which introduces v ∈ X_i and has a single child node j ∈ I. Because of the edge coverage and coherence properties, the neighborhood of v in G − (V \ Y_i) is entirely contained in X_i. Thus for each S ∈ P(X_i) and l ∈ [|F_i|]_0 we have

    B_i(l, S) = { B_j(l, S \ {v}) + W_i({v}, X_i \ S)   if v ∈ S
                { B_j(l, S) + W_i({v}, S)               otherwise    (5)

Algorithm 4 Computing entries of B_i for an introduce node i ∈ I
for all S ∈ P(X_i) do                             ▷ O(2^t · body)
    for all l ∈ [|F_i|]_0 do                      ▷ O(|F_i| · body)
        if v ∈ S then                             ▷ O(1)
            B_i(l, S) ← B_j(l, S \ {v}) + W_i({v}, X_i \ S)    ▷ O(1)
        else
            B_i(l, S) ← B_j(l, S) + W_i({v}, S)   ▷ O(1)
        end if
    end for
end for

The running time is dominated by the number of iterations of the inner loop, as all lines take O(1) each execution. Thus we get a running time of O(2^t |F_i|).

The computation of W_i is quite easy in this case as we only have to determine the values for a fixed v:

    W_i({v}, ∅) = 0
    W_i({v}, S) = size_w({v}, S)    (6)

Note that one would be able to inline this computation for the needed sets within Algorithm 4 without increasing the asymptotic running time. We did not do this in order to have the computation for B_i of the same form for all nodes, that is, precomputing W_i and then combining those values and some of B_i, B_j and B_k (if there are children j and maybe k) via a dynamic program.

Algorithm 5 Computing entries of W_i for an introduce node i ∈ I
W_i({v}, ∅) ← 0                                   ▷ O(1)
for all S ∈ P(X_i) do                             ▷ O(2^t · body)
    if v ∉ S then                                 ▷ O(1)
        W_i({v}, S) ← vertexweight(v, S)          ▷ O(n)
    else
        W_i({v}, S) ← 0                           ▷ O(1)
    end if
end for

The running time for computing W_i is dominated by O(2^t n) as desired.

Forget node

Let i ∈ I be a forget node with child node j ∈ I and X_j \ X_i = {v}. For each S ∈ P(X_i) and l ∈ [|F_i|]_0 we have

    B_i(l, S) = { B_j(0, S)                               if l = 0
                { B_j(|F_i| − 1, S ∪ {v})                 if l = |F_i|
                { max{B_j(l, S), B_j(l − 1, S ∪ {v})}     otherwise    (7)

Note that in the equation there is no occurrence of a weight; thus we do not need to compute any entry of W_i.

Algorithm 6 Computing B_i for a forget node i ∈ I
for all S ∈ P(X_i) do                             ▷ O(2^t · body)
    B_i(0, S) ← B_j(0, S)                         ▷ O(1)
    B_i(|F_i|, S) ← B_j(|F_i| − 1, S ∪ {v})       ▷ O(1)
    for all l ∈ [|F_i| − 1] do                    ▷ O(|F_i|)
        B_i(l, S) ← max{B_j(l, S), B_j(l − 1, S ∪ {v})}    ▷ O(1)
    end for
end for

The running time is dominated by the innermost assignment, which is executed O(2^t |F_i|) times, yielding a running time of O(2^t |F_i|).

Join node
Let i ∈ I be a join node with child nodes j, k ∈ I and X_i = X_j = X_k. For each S ∈ P(X_i) and l ∈ [|F_i|]_0 we have

    B_i(l, S) = max_{l_1 + l_2 = l, l_1 ∈ [|F_j|]_0, l_2 ∈ [|F_k|]_0} ( B_j(l_1, S) + B_k(l_2, S) − size_w(S, X_i \ S) )    (8)

Note that we can reuse Equation 4 and Algorithm 3 to compute the necessary entries of W_i the same way we do in leaf nodes, because we are interested in the exact same entries. This can be done in running time O(2^t n). The computation of B_i does – in contrast to the other cases – not precisely reflect the structure of the corresponding equation. This is only due to our code being easier to analyze: instead of fixing an l and then running through all l_1 and l_2 such that l = l_1 + l_2, we run through all l_1 and l_2 and set l := l_1 + l_2 for this iteration.

Algorithm 7 Calculating entries for a join node
for all S ∈ P(X_i) do                             ▷ O(2^t · body)
    for all l_1 ∈ [|F_j|]_0 do                    ▷ O(|F_j| · body)
        for all l_2 ∈ [|F_k|]_0 do                ▷ O(|F_k| · body)
            l ← l_1 + l_2                         ▷ O(1)
            if l ∈ [|F_i|]_0 then                 ▷ O(1)
                ν ← B_j(l_1, S) + B_k(l_2, S) − W_i(S, X_i \ S)    ▷ O(1)
                if ν > B_i(l, S) then             ▷ O(1)
                    B_i(l, S) ← ν                 ▷ O(1)
                end if
            end if
        end for
    end for
end for

The running time is dominated by how often the innermost block is executed. Each execution takes time O(1); the number of executions is O(2^t |F_j||F_k|) = O(2^t |F_j × F_k|) for a single join node, as claimed.

It remains to show that the running time for computing B_i for all join nodes i ∈ I in total is O(2^t n^2). Therefore we use an approach inspired by [10]: due to the coherence property of tree decompositions it holds that F_j ∩ F_k = ∅. Now note that |F_j||F_k| = |F_j × F_k|. For the sake of the analysis let us annotate the set F_j × F_k to the join node i and repeat this step for all join nodes i ∈ I with children j, k. Note that any pair (v_1, v_2) of different vertices v_1 ∈ V, v_2 ∈ V \ {v_1} can be unambiguously associated with a join node i with children j and k, because without loss of generality v_1 ∈ F_j and v_2 ∈ F_k and – as observed above – F_j ∩ F_k = ∅. This means, if we add up the running time O(2^t |F_j × F_k|) for every join node i with children j, k, we can never exceed O(2^t |V × V|) = O(2^t n^2).

As the bounds listed in Figure 1 hold, we know that

1. W_i can be computed in time O(2^t n) per node i ∈ I no matter the node type (note that |F_i| ≤ n),
2. B_i can be computed in time O(2^t n) per node i ∈ I if the node is not a join node,
3. B_i can be computed in total time O(2^t n^2) for all join nodes i ∈ I.

This implies that we can compute the whole table B in time O(2^t n^2) if the table W is given. The table W can be computed in O(2^t n^2) also, yielding an overall running time of O(2^t n^2) to build up B, including all preprocessing steps.
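The combination step of the join-node equation can be sketched as follows in Python; here B_j and B_k are dictionaries keyed by (l, S) with S a frozenset, and cut_inside[S] stands for the precomputed value W_i(S, X_i \ S) (all names and the dictionary representation are our own, hypothetical choices):

```python
def join_tables(Bj, Bk, bag_subsets, Fj_size, Fk_size, cut_inside):
    """Combine the child tables at a join node following Equation 8:
    B_i(l, S) = max over l1 + l2 = l of B_j(l1, S) + B_k(l2, S) minus the
    bag-internal cut, which both children counted once each."""
    Bi = {}
    for S in bag_subsets:
        for l1 in range(Fj_size + 1):
            for l2 in range(Fk_size + 1):
                l = l1 + l2
                nu = Bj[(l1, S)] + Bk[(l2, S)] - cut_inside[S]
                if (l, S) not in Bi or nu > Bi[(l, S)]:
                    Bi[(l, S)] = nu
    return Bi
```

As in Algorithm 7, the code iterates over all pairs (l_1, l_2) and derives l, rather than fixing l first; both orders enumerate exactly the pairs with l_1 + l_2 = l.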
All that is left to do now is to use this table to compute the value of a maximal bisection following Equation 2. Thus we end up with the following code:

Algorithm 8 Computing the value of a weighted maximal bisection
Input: A graph G = (V, E) with weight function w : E → Q^+_0 and a tree decomposition (I, X, F)
Convert the tree decomposition to a nice tree decomposition (I, X, F, r)    ▷ O(nt), Lemma 14
for all i ∈ I do                                  ▷ O(n · body)
    Compute W_i                                   ▷ O(2^t n)
end for
Compute a topological ordering σ of the nodes I of the tree T where the edges are in inverted direction in comparison to the direction induced by the root r    ▷ O(n)
for all i ∈ [|I|] do                              ▷ overall O(2^t n^2) for this loop
    Compute B_{σ(i)}
end for
ν ← ⊥
for all S ∈ P(X_r) do                             ▷ O(2^t · body)
    for all l ∈ [|F_r|]_0 do                      ▷ O(|F_r| · body) ⊆ O(n · body)
        if |S| + l = ⌊n/2⌋ then                   ▷ O(t)
            if B_r(l, S) > ν then                 ▷ O(1)
                ν ← B_r(l, S)                     ▷ O(1)
            end if
        end if
    end for
end for
return ν                                          ▷ O(1)

Footnote: Formally, we have to decide whether a bisection of size ≥ γ exists, but since we compute the biggest value over all bisections, answering this question is then trivial.
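The final scan over the root table (the last loop of Algorithm 8, evaluating Equation 2) looks as follows in Python; B_r is again a dictionary keyed by (l, S) with S a frozenset (a hypothetical representation of our own):

```python
def max_bisection_value(Br, root_bag_subsets, Fr_size, n):
    """Return the largest B_r(l, S) among entries with |S| + l = floor(n/2),
    i.e. the value of a maximal bisection per Equation 2."""
    best = None
    for S in root_bag_subsets:
        for l in range(Fr_size + 1):
            if len(S) + l == n // 2 and (l, S) in Br:
                if best is None or Br[(l, S)] > best:
                    best = Br[(l, S)]
    return best
```

Comparing the returned value against γ then answers the decision problem, as noted in the footnote above.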
It is easy to see that the first loop (computing the W_i) takes time O(2^t n^2) as |I| ∈ O(n). The second loop (computing the B_i) takes time O(2^t n^2) as we showed before. The third loop has a nested loop whose body is dominated by the cardinality check |S| + l = ⌊n/2⌋, which takes O(t); all other lines within this loop only take O(1). Thus the running time of the third loop is dominated by the number of times this check is executed, times O(t). This yields O(2^t n t) ⊆ O(2^t n^2). Combining all those running times yields an overall running time of O(2^t n^2) to compute the value of a weighted maximal bisection and thus also to solve the corresponding decision problem. This proves:

Theorem 6. There is an algorithm that decides in time O(2^t n^2) if a weighted graph with n vertices has a bisection greater equal than the input parameter γ, when a tree decomposition with ≤ 4(n + 1) nodes and treewidth t is given as an advice.

Implementation details

In the previous section we showed how one can compute the value of a weighted maximal bisection under the assumption that all the graph operations, set operations as well as cardinality of sets, enumerating all power sets, and storage/access of the computed values are possible in appropriate time. In this section we provide a detailed overview of the underlying data structures, encodings, and some necessary preprocessing of the input.
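As an example of the kind of preprocessing discussed in this section, the sets Y_i and F_i can be derived bottom-up from the rooted tree decomposition; a Python sketch, where children and bags are dictionaries of our own hypothetical representation (not the memory-cell encoding of Definition 12):

```python
def forgotten_sets(root, children, bags):
    """Compute, for every node i of a rooted tree decomposition, the set
    Y_i of vertices appearing in the subtree below i and the forgotten
    set F_i = Y_i minus the bag X_i, by one bottom-up traversal."""
    Y, F = {}, {}

    def visit(i):
        Y[i] = set(bags[i])
        for j in children.get(i, []):
            visit(j)
            Y[i] |= Y[j]          # union over the subtree below i
        F[i] = Y[i] - set(bags[i])

    visit(root)
    return Y, F
```

The cardinalities |F_i| and the introduced/forgotten vertex per node can be read off these tables in the same traversal.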
Our algorithm makes use of the sets F_i and X_i for all i ∈ I as well as their cardinalities. Moreover, for introduce and forget nodes we need to know the newly introduced/just forgotten vertex v. It is easy to see that we can precompute those values in appropriate time and store them in a lookup table. Due to the triviality of this task we do not provide any further details.

In order to be able to quickly check adjacency of different vertices and evaluate the weight of different edges, but also to be able to efficiently iterate over the neighbourhood within X_i for a vertex v ∈ X_i and a node i ∈ I, we build up some lookup structures before running our algorithm. Those are standard structures: for the whole graph, but also for the induced subgraph of every X_i for every i, we build up an adjacency matrix M_i as follows:

Algorithm 9
Preprocessing the representation of the input graph/tree decomposition

Create an adjacency matrix A for G                           ▷ O(n^2)
for all i ∈ I do                                             ▷ O(|I| · body) ⊆ O(n · body)
    Allocate a 2-dimensional array M_i of size |X_i| × |X_i|     ▷ O(|X_i|^2) ⊆ O(t^2)
    Initialize M_i with ⊥ in every entry                     ▷ O(|X_i|^2) ⊆ O(t^2)
    for all v ∈ X_i do                                       ▷ O(|X_i| · body) ⊆ O(t · body)
        for all z ∈ X_i do                                   ▷ O(|X_i| · body) ⊆ O(t · body)
            if v ≠ z then                                    ▷ O(1)
                M_i(v, z) ← A(v, z)                          ▷ O(1)
                M_i(z, v) ← M_i(v, z)                        ▷ O(1)
            end if
        end for
    end for
end for

This takes O(nt^2 + n^2) ⊆ O(2^t n^2) time.

In order to get the desired running time we need to be able to execute specific set operations in sufficiently small running time. In this section we specify how one can do this in detail, mostly by just selecting an appropriate representation and some preprocessing with a running time that does not increase the overall asymptotic running time O(2^t n^2).

For i ∈ I, we represent a set S ∈ P(X_i) by a bit vector with the j-th bit being one if and only if X_i^(j) ∈ S, where X_i^(j) is the j-th smallest vertex in X_i (which is the j-th vertex in the encoding of X_i). Other subsets do not occur within our algorithm. We can then intersect two sets S_1, S_2 ∈ P(X_i) by combining the corresponding vectors using the bitwise AND (&) operation. In the RAM model this operation takes time O(1). Analogously we can define the union of two sets by using a bitwise OR (|) instead. Membership of v ∈ X_i in S ∈ P(V) can be computed as follows (≪ denotes a left shift):

Algorithm 10
Computing the element function

function elem(v, S)
    return S ∩ (1 ≪ (v − 1))
end function

The idea is that 1 ≪ (v − 1) creates a bit vector that is zero except in the component v, thus representing {v}, and the intersection routine then returns a true value (a value ≠ 0) iff the intersection is non-empty.

For the computation of B_i for a join node i ∈ I we also need an operation choose that gives us any element v ∈ S for a non-empty set S ∈ P(X_i) \ {∅}. Instead of preprocessing, which would allow us to perform this operation in O(1) time, we use a simpler approach here that is sufficient for our case: we just keep right shifting the bit vector until we find a non-zero bit.

Algorithm 11
Computing the selection of any element of a set

function choose(S)
Require: S ≠ ∅
    j ← 1
    while S & 1 = 0 do
        S ← S ≫ 1
        j ← j + 1
    end while
    return X_i^(j)
end function

The running time is – obviously – O(t), as the length of a bit vector is upper bounded by the size of the universe |X_i| < n.

Until now we assumed that we can build up an array that has the power set of some set as its set of indices. Naturally such “arrays” do not exist, but they can be represented by some dictionary data structure, that is, a structure that allows efficient access to elements indexed by some key type. For general data structures of this type, we cannot perform every operation in O(1). Thus we need an injective map from the elements of a given power set into a space polynomial in the cardinality of the power set that allows O(1) access. We can do this by using the encoding of a set as a bit vector as the key itself, that is, we interpret the bit vector as an integer. Doing so leaves one problem: in our algorithms, we use a set S to access the entries of different nodes i, j ∈ I that have (potentially) different X_i ≠ X_j, which implies that the set encoding of one node i is not necessarily compatible with that of j. Hence we need to provide some conversion between those representations.

We can do this in O(1), even though one can observe that O(n) would also be sufficient. Let i ∈ I and let j, k ∈ I be the child node(s) if they exist. We again consider the different node types separately, that is, we make a case distinction over the type of i:

• Leaf node: Leaf nodes do not have any children, thus no conversion is needed at all.

• Introduce node: Let {v} = X_i \ X_j, that is, v is the newly introduced vertex in i. Let m ∈ N such that X_i^(m) = v. Then the conversion of any set S ∈ P(X_i) can be done by simply removing the m-th bit:

Algorithm 12
Bit vector conversion at an introduce node i ∈ I with child j ∈ I

Require: v ∉ S
▷ Mask that has every bit from 1 to m − 1 set to 1
μ_1 ← (1 ≪ (m − 1)) − 1
▷ Mask that has every bit from m to |X_i| set to 1
μ_2 ← ((1 ≪ |X_i|) − 1) − μ_1
▷ Get the first part of the old bit vector
ν_1 ← S & μ_1
▷ Get the second part of the old bit vector
ν_2 ← S & μ_2
▷ Remove the bit for v
ν_2 ← ν_2 ≫ 1
▷ Glue them together
S ← ν_1 | ν_2

• Forget node: Let {v} = X_j \ X_i, that is, v is the vertex that has just been forgotten at i. Let m ∈ N such that X_j^(m) = v. Then the conversion of any set S ∈ P(X_i) can be done by simply inserting a bit at the m-th position:

Algorithm 13 Bit vector conversion at a forget node i ∈ I with child j ∈ I

Require: v ∉ S
▷ Mask that has every bit from 1 to m − 1 set to 1
μ_1 ← (1 ≪ (m − 1)) − 1
▷ Mask that has every bit from m to |X_j| set to 1
μ_2 ← ((1 ≪ |X_j|) − 1) − μ_1
▷ Get the first part of the old bit vector
ν_1 ← S & μ_1
▷ Get the second part of the old bit vector
ν_2 ← S & μ_2
▷ Add the bit for v
ν_2 ← ν_2 ≪ 1
▷ Glue them together
S ← ν_1 | ν_2

• Join node: As X_i = X_j = X_k, the identity does the job.

It is obvious that one can enumerate all sets of the power set of X_i for i ∈ I by simply listing all integers in [0, |P(X_i)|[. This is not sufficient for our cause, as we assumed that any enumeration of sets is done such that the sets are non-decreasing regarding their cardinality. Instead of providing a constructive method to enumerate the power set in that way, we simply observe that we can enumerate all sets in O(2^t) and then sort them using some standard algorithm, which yields a running time of O(2^t t) ⊆ O(2^t n) per node. Doing so for all i ∈ I yields an overall running time of O(2^t n^2).

All the details we provided are kept as simple as possible, that is, we tried to find the simplest approach that does not increase the overall running time. For the sake of simplicity, many details are less efficient than possible.
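The bit-vector routines above translate directly to code. The following Python sketch (function names are ours) mirrors Algorithms 10 to 13 on plain integers, with 1-based bit positions as in the text; choose returns the position j rather than the vertex X_i^(j):

```python
def elem(v, S):
    """Algorithm 10: membership test via intersection with 1 << (v - 1)."""
    return S & (1 << (v - 1)) != 0

def choose(S):
    """Algorithm 11: shift right until a set bit is found; requires S != 0."""
    assert S != 0
    j = 1
    while S & 1 == 0:
        S >>= 1
        j += 1
    return j

def introduce_convert(S, m):
    """Algorithm 12: drop the (zero) bit at position m, gluing both halves."""
    mu1 = (1 << (m - 1)) - 1        # bits 1 .. m-1
    lo = S & mu1                    # first part of the old bit vector
    hi = (S & ~mu1) >> 1            # second part, shifted to remove v's slot
    return lo | hi

def forget_convert(S, m):
    """Algorithm 13: insert a zero bit at position m."""
    mu1 = (1 << (m - 1)) - 1
    lo = S & mu1
    hi = (S & ~mu1) << 1            # make room for v's bit
    return lo | hi
```

For example, converting the encoding 0b1011 (vertices 1, 2, 4) at an introduce node with m = 3 gives 0b111, and forget_convert undoes the step: the two conversions are inverse to each other whenever bit m is zero.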
For example, in line 12 of Algorithm 8 we take time O(t) to compute the cardinality of the set S, which could easily be decreased to O(1) by simply annotating the cardinality during the construction process of the power set.

Note that, using the same algorithm but annotating for every entry in the table B which values of the children have been used, we can also compute the maximal weighted bisection itself, that is, the partition of the vertex set with maximal value. It is easy to see that the time needed for such annotations is always dominated by the computation of the entry we add the annotation to.

To show the optimality in the dependence on t, we show that even Adv_STD-Max-Cut (which is at most as hard as Adv_STD-Max-Bisection) cannot be solved in a better running time in t assuming SETH, unless its running time gets superpolynomial in n:

Theorem 7. There is no O(2^{εt} poly n) algorithm to decide Adv_STD-Max-Cut for any ε < 1, unless SETH fails.

This statement transfers directly to Adv_STD-Max-Bisection:

Corollary 15. There is no O(2^{εt} poly n) algorithm to decide Adv_STD-Max-Bisection for any ε < 1, unless SETH fails.

Proof.
Consider any graph. For every input vertex, add one isolated vertex. The treewidth does not increase. If there is a small tree decomposition given as advice, modifying it to contain the new vertices is not hard (and obviously possible in time polynomial in n). We did not change the edge set, thus the possible values of cuts do not change. It is easy to see that the resulting instance is in Adv_STD-Max-Bisection iff the original instance is in Adv_STD-Max-Cut.

To show Theorem 7 we might be tempted to reuse the result of Lokshtanov, Marx, and Saurabh in [15]:

Theorem 16 ([15, Thm. 3, p. 9]). There is no O((2 − ε)^t poly |I|) algorithm for any ε > 0 for Max-Cut unless SETH fails.

It is worth noting that it is equivalent to state that there is no O(2^{δt} poly |I|) algorithm for any δ < 1, as (2 − ε)^t = 2^{ld(2−ε)·t}. Of course, as |I| ≥ n, this also denies the existence of an O(2^{δt} poly n) algorithm for any δ < 1. But Adv_STD-Max-Cut is a problem at most as hard as Max-Cut, thus we cannot transfer the statement directly. There is also little hope to find a reduction from Max-Cut to Adv_STD-Max-Cut, as arbitrary graphs occur in the instances and computing the treewidth is already NP-hard, thus a polynomial reduction is possible iff P = NP. As this is considered very unlikely, we work around this issue: instead of adding a second reduction, we adjust the existing reduction in order to achieve the desired result. Therefore we need the following statement:

Lemma 17.
Let G = (V, E) be a graph and S ⊆ V. If G − S is a forest, then (given S) one can compute a small tree decomposition of G that has width ≤ |S| + 1 in time O(poly(|E| + |V|)).

Proof.
Fix G = (V, E) and S. Note that it is trivially possible to compute a small tree decomposition of a tree in time linear in its nodes and edges. Compute the tree decompositions for all connected components in G_0 := G − S (which are trees by assumption). Rename the node sets (and all the occurrences accordingly) of the decompositions such that the union of all node sets is of the form [k] \ {1} for some k ∈ N and the node sets are disjoint. Add a new node 1 to the forest of tree decompositions and set the associated bag to be X_1 := ∅. Connect the new node 1 to an arbitrary node of each tree decomposition. It is easy to check that we just constructed a small tree decomposition of G_0 of width 1. Now modify this decomposition into a small tree decomposition of G as follows: add the set S to all bags. Again, it is easy to check that we obtain a small tree decomposition of G. The same holds for the running time. The width of the constructed decomposition is bounded from above by 1 + |S|, as every bag of the decomposition for G_0 had at most 2 vertices before we added the vertices within S.

We use this statement to modify the reduction in [15, Theorem 3, p. 7] from SAT to Max-Cut into a reduction from SAT to Adv_STD-Max-Cut. Note that – additionally to the polynomial running time in the input encoding of the reduction – we need to ensure that a SAT instance with n variables is reduced to an Adv_STD-Max-Cut instance that has treewidth n + O(1) (otherwise we do not get the result we need).

Proof of Theorem 7.
First, let us literally restate the reduction of Lokshtanov, Marx, and Saurabh in [15]:

Given an instance φ of SAT we [. . .] construct an instance G_w of Weighted Max Cut as follows. [. . .] We start with making a vertex x. [. . .] We make a vertex v̂_i for each variable v_i. For every clause C_j we make a gadget as follows. We make a path P̂_j having 4|C_j| vertices. All the edges on P̂_j have weight 3n [where n denotes the number of variables in φ]. Now, we make the first and last vertex of P̂_j adjacent to x with an edge of weight 3n. Thus the path P̂_j plus the edges from the first and last vertex of P̂_j to x form an odd cycle Ĉ_j. We will say that the first, third, fifth, etc., vertices are on odd positions on P̂_j while the remaining vertices are on even positions. For every variable v_i that appears positively in C_j we select a vertex p at an even position (but not the last vertex) on P̂_j and make v̂_i adjacent to p and p's successor on P̂_j with edges of weight 1. For every variable v_i that appears negatively in C_j we select a vertex p at an odd position on P̂_j and make v̂_i adjacent to p and p's successor on P̂_j with edges of weight 1. We make sure that each vertex on P̂_j receives an edge at most once in this process. There are more than enough vertices on P̂_j to accommodate all the edges incident to vertices corresponding to variables in the clause C_j. We create such a gadget for each clause and set [the target value γ = 1 + (12n + 1) Σ_{j=1}^{m} |C_j|] [. . .]. This concludes the construction.

Observe the following statements:

1. The number of vertices after the reduction is O(|φ| + n) = O(|φ|).
2. For every clause there is exactly one cycle created.
3. Any pair of such cycles shares exactly one vertex: x.
4. Apart from those partially overlapping cycles, there are only n additional vertices.
5. Every vertex on those cycles that is not x has degree at most 3, as it is connected to at most one v̂_i.

From Property 4 and Property 5 we can now deduce that the number of edges after the reduction is O(|φ|). Thus the encoding of the graph has asymptotically the same size as the encoding of φ. The correctness of the reduction was shown in [15, Lemma 7 – Lemma 9, p. 9–10].

Now observe that Property 3 and Property 4 imply that removing all variable vertices v̂_i and x results in a set of paths. This is exactly the statement we need in order to be able to apply Lemma 17, which completes the proof.

The optimality of our algorithm in its dependence on n follows directly from the following bound:

Proposition 18 ([10, Thm. 6, p. 6], [8, Thm. 16, p. 8]). For all ε > 0 there is no O(n^{2−ε}) time algorithm for Adv_STD-Max-Bisection, even if the instances are restricted to be paths, unless MCH fails.

Figure 2: A 5 × 5 grid.

We now show hardness for subclasses of Adv_STD-Max-Bisection that are not even required to include instances of treewidth 1, which is the treewidth of the instances that are created in the reduction for Proposition 18. In order to do so, we want to transform those treewidth-1 instances into instances with greater treewidth in adequate time, without changing the number of vertices or significantly increasing the instance's encoding size. We are able to do so and additionally ensure that the resulting graph is planar. In order to show this we need the following type of graphs:
Definition 19 (Grid). For two integers l, k ∈ N, the corresponding grid graph is given by

G_{l,k} := ([l] × [k], { (i, j)(i′, j′) : i, i′ ∈ [l], j, j′ ∈ [k], |i − i′| + |j − j′| = 1 }).

Any graph that is isomorphic to G_{l,k} is called an l × k grid.

Even though the definition appears quite complicated, the structure of those graphs is quite intuitive, see for example Figure 2. These graphs are important for us as they are obviously planar but also have a quite large treewidth in relation to their number of vertices.

Lemma 20 ([7, p. 359]). Let k ∈ N_{>0} and let G = (V, E) be a k × k grid. Then G has treewidth k.

It is quite easy to see that any graph that has a k × k grid as a subgraph has treewidth lower bounded by k:

Observation 21.
Let k ∈ N_{>0} and let G be a graph that has a subgraph that is a k × k grid. Then G has treewidth at least k.

In fact, one can show that a planar graph has a certain treewidth if and only if it has a corresponding grid as a minor. Yet we do not need this stronger statement in this paper.

When proving the hardness of Adv_STD-Max-Cut, we not only need to ensure that the generated instances have the desired treewidth and are planar, but also have to provide a tree decomposition of minimal width. As this computation is NP-hard in general, it is not a priori clear that this can be done in truly subquadratic time in the number of vertices for the generated instances. We show that this is in fact possible in time O(|V| log |V|) without increasing the instances too much.

Proposition 22. Let J = (V, E, w, γ, t, D) be an instance for Adv_STD-Max-Bisection where (V, E) is a path (implying t = 1). Then for any t̂ ∈ [⌊√|V|⌋] there is an instance Ĵ = (V, Ê, ŵ, γ, t̂, D̂) such that

1. (V, Ê) is planar,
2. Ĵ can be computed in O(|V| log |V|),
3. |Ĵ| ∈ O(|J|),
4. J ∈ Adv_STD-Max-Bisection ⇐⇒ Ĵ ∈ Adv_STD-Max-Bisection.

Proof. Let J = (V, E, w, γ, t, D) be an instance to Adv_STD-Max-Bisection. Assume without loss of generality that 1 < t̂, as otherwise Ĵ = J would be sufficient. Also assume without loss of generality that for any two-element subset e of V it holds that e ∈ E ⇐⇒ ∃i ∈ V. e = {i, i + 1}. If that were not the case, we could rename the instance in time O(|V| log |V|) using for example AVL trees, or even in O(|V|) when choosing a slightly more sophisticated approach. As this is a reduction that does not have to be efficient but only truly subquadratic, we stick with the simpler approach, as it is sufficient for our cause.

Recall that V = [|V|] (Definition 12). Now observe that G↾[t̂^2] is a path with t̂^2 vertices. We can then transform E into Ê such that for Ĝ = (V, Ê) we have that Ĝ↾[t̂^2] is a t̂ × t̂ grid. To this end we assume that, whenever we add a new edge, the weight function is extended implicitly such that every new edge has weight 0.

Algorithm 14
Extending the path G↾[t̂^2] to a t̂ × t̂ grid

Ê ← E
for all ν_1 ∈ {0, …, t̂ − 2} do
    for all ν_2 ∈ {0, …, t̂ − 2} do
        Ê ← Ê ∪ { { ν_1 t̂ + 1 + ν_2, (ν_1 + 2) t̂ − ν_2 } }
    end for
end for
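One can quickly convince oneself programmatically that these edges complete the boustrophedon path to a grid. The sketch below (our code; loop counters run from 0 to t̂ − 2, matching our reading of the algorithm) generates the added edges for the numbering of Figure 3:

```python
def grid_edges(t_hat):
    """Edges added by Algorithm 14 to the boustrophedon path on t_hat**2
    vertices (numbered 1, ..., t_hat**2 as in Figure 3)."""
    E = set()
    for nu1 in range(t_hat - 1):
        for nu2 in range(t_hat - 1):
            E.add(frozenset({nu1 * t_hat + 1 + nu2, (nu1 + 2) * t_hat - nu2}))
    return E
```

For t̂ = 4 this yields the nine dashed edges of Figure 3, e.g. {2, 7} and {9, 16}; together with the path edges {i, i + 1}, every vertex then has exactly the degree expected in a 4 × 4 grid (four corners of degree 2, eight border vertices of degree 3, four interior vertices of degree 4).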
Figure 3: An illustration of the situation in Algorithm 14 when t̂ = 4. The dashed edges are those that are to be added.

Now using Observation 21 we know that Ĝ has treewidth at least t̂. On the other hand, one can easily check that it does not have a greater treewidth, as Ĝ is just a t̂ × t̂ grid with a single path attached. Following the same observation regarding the structure of Ĝ, we directly get that Ĝ is planar.

The running time for the steps described until now is O(|V| log |V|) for renaming and O(t̂^2) ⊆ O(|V|) for adding the edges. Thus O(|V| log |V|) upper bounds the overall running time.

We still need to compute a small tree decomposition of Ĝ. It is easy to see that the following gives a tree decomposition of a t̂ × t̂ grid with width t̂ and ≤ t̂^2 nodes:

Algorithm 15
Compute a small tree decomposition of the t̂ × t̂ grid

Start with the first row
while the current row is not the last row do
    Let M be the set of vertices in the current row
    for all i ∈ [t̂] do
        Create a new node in the decomposition
        Connect it to the node that was created directly before the new one, if there is one
        Let v be the i-th vertex in the current row
        Let w be the i-th vertex in the next row
        M ← M ∪ {w}
        Put M into the bag corresponding to the new node
        M ← M \ {v}
    end for
    Go to the next row and proceed
end while
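Algorithm 15 is equally easy to prototype. The sketch below (our code) uses a row-major numbering 1, …, t̂^2 for simplicity, which is a mere renaming of the paper's vertices, and produces the bags of the sliding-window decomposition, whose width can then be checked:

```python
def grid_decomposition_bags(t_hat):
    """Bags produced by Algorithm 15 for the t_hat x t_hat grid with
    vertices numbered row-major; each bag holds the tail of the current
    row plus a sliding window into the next row."""
    bags = []
    M = set(range(1, t_hat + 1))              # vertices of the first row
    for row in range(1, t_hat):               # current row is not the last
        for i in range(1, t_hat + 1):
            v = (row - 1) * t_hat + i         # i-th vertex of the current row
            w = row * t_hat + i               # i-th vertex of the next row
            M.add(w)
            bags.append(frozenset(M))         # bag of the newly created node
            M.remove(v)
    return bags
```

For t̂ = 4 this gives 12 ≤ t̂^2 bags, each of size t̂ + 1 = 5, so the width is t̂; one can also check that both endpoints of every vertical grid edge appear together in some bag.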
The running time for this algorithm is bounded by the number of executions of the loops: O(t̂^2). Thus this computation does not dominate the running time of the reduction, as O(t̂^2) ⊆ O(|V|) by choice of t̂. It is obvious that we can add the remaining part of the input graph, that is, the edges that are not part of the just constructed grid, by simply creating one node per edge, in time O(|V|). The number of nodes in this decomposition is bounded by t̂^2 + (|V| + 1), thus the decomposition is small.

We added at most t̂^2 edges, which is at most linear in the number of vertices. As the vertex set is encoded in space linear in the number of vertices, this means that by adding edges the size of the encoding grew at most linearly, as desired. The same holds for the encoding size of the tree decomposition, as its size is roughly O(t̂^3 + |V|) ⊆ O(|V|^{3/2}): there are t̂^2 bags with t̂ + 1 elements each and additional O(|V|) bags for the rest of the input path.

Now observe that the size of any cut/bisection is exactly the same in Ĝ and G, as they share the same vertex set and all new edges have weight 0. This shows the last claim and finishes the proof.

Additionally to the reduction above, we also need a proposition that basically states: if one has a class of instances where t ∈ o(log n), then Algorithm 8 solves the problem in truly subquadratic time, that is, in O(n^{2−ε}) for some ε > 0.

Proposition 23.
Let g(n) ∈ o(log n). Then for any ε > 0 there is a δ > 0 such that O(2^{g(n)} · n^{2−ε}) ⊆ o(n^{2−δ}).

Proof. Fix g. Fix ε. Note that

2^{g(n)} = (n^{log_n(2)})^{g(n)} = n^{g(n) · log_n(2)} = n^{g(n) · (ld 2 / ld n)} = n^{g(n)/ld n}.

Now let δ := ε/2. Then

2^{g(n)} · n^{2−ε} = n^{g(n)/ld n} · n^{−δ} · n^{2−δ} = (n^{g(n)/ld n} / n^{δ}) · n^{2−δ}.

Note that for any f it holds that

f(n) ∈ O(2^{g(n)} · n^{2−ε})
⇐⇒ f(n) ∈ O((n^{g(n)/ld n} / n^{δ}) · n^{2−δ})
⇐⇒ limsup_{n→∞} f(n) / ((n^{g(n)/ld n} / n^{δ}) · n^{2−δ}) < ∞
⇐⇒ limsup_{n→∞} (n^{δ} · f(n)) / (n^{g(n)/ld n} · n^{2−δ}) < ∞.

Note that lim_{n→∞} g(n)/ld n = 0 by assumption, as ld n ∈ Θ(log n), and thus lim_{n→∞} n^{δ} / n^{g(n)/ld n} = lim_{n→∞} n^{δ − g(n)/ld n} = ∞. We can hence deduce that the limit superior above can only be finite if limsup_{n→∞} f(n)/n^{2−δ} = 0. Also note that we only consider non-negative functions, and hence the limit inferior is always lower bounded by zero; thus the limit inferior and superior are equal and the whole limit converges to 0:

⟹ limsup_{n→∞} f(n)/n^{2−δ} = 0 ⇐⇒ lim_{n→∞} f(n)/n^{2−δ} = 0 ⇐⇒ f(n) ∈ o(n^{2−δ}).

With the two propositions we just proved we can now focus on the hardness theorem itself:
Theorem 8. Let f : N → N be a function such that f(n) ∈ o(log n). Let L ⊆ Adv_STD-Max-Bisection be the set that is built by restricting Adv_STD-Max-Bisection to instances where the corresponding graph is planar and has treewidth t = f(n), where n is the number of vertices of the instance. Then for all ε > 0 there is no O(2^{f(n)} · n^{2−ε}) algorithm to decide L, unless MCH fails.

Proof.
Fix f. Note that

f(n) ∈ o(log n) ⇐⇒ ∀c > 0. ∃n_0. ∀n > n_0. f(n) ≤ c log n.

Now let c := 1 and obtain n_0. Apply Proposition 23 with g := f and obtain δ. Note that

O(2^{f(n)} · n^{2−ε}) ⊆ o(n^{2−δ}) ⊆ O(n^{2−δ}).

This means that if we are able to decide L in the stated running time, we decide L in time subquadratic in n. If we are now able to give a subquadratic reduction from MaxConv to L, we eventually show the claim, as by MCH this problem cannot be solved in subquadratic time.

Consider an instance of MaxConv. Use the reduction of Proposition 18 to transform it into an instance of weighted maximal bisection on paths. Let n_1 be the number of vertices of this instance. If n_1 < n_0, we extend the path to a path with n_0 + 1 nodes by adding new nodes and new edges of weight 0. Since n_0 is constant, we know that the new number of nodes, n_2, is asymptotically equal to n_1, that is, n_2 ∈ Θ(n_1). Now observe that

t = f(n_2) ≤ log n_2 ≤ √n_2.

This allows us to apply Proposition 22. Eventually we get a truly subquadratic algorithm for MaxConv, which only exists if MCH fails.
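The asymptotics in Proposition 23 can also be sanity-checked numerically. The sketch below (our code) takes the subpolynomial choice g(n) = √(log₂ n) and δ = ε/2 and observes that 2^{g(n)} · n^{2−ε} / n^{2−δ} tends to 0 as n grows:

```python
import math

def ratio(n, eps=0.5):
    """(2**g(n) * n**(2 - eps)) / n**(2 - delta) for g(n) = sqrt(log2 n)
    and delta = eps / 2; Proposition 23 predicts this tends to 0."""
    g = math.sqrt(math.log2(n))
    delta = eps / 2
    return 2 ** g * n ** (2 - eps) / n ** (2 - delta)
```

For n = 2^10, 2^40, 2^100 the ratio drops from roughly 1.6 through 0.08 to about 3 · 10^{−5}, matching the o(n^{2−δ}) bound.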
In this paper we show that there is an algorithm to decide Adv_STD-Max-Bisection in time O(2^t n^2), which is optimal assuming MCH and SETH; we proved the latter bound. Moreover, this algorithm is (obviously) also usable to compute the value of a maximal bisection. We gave useful details for implementing the algorithm, but also for understanding its requirements on the underlying machine model. The strong dependency on the machine model raises the question whether an algorithm with the same running time is also possible on weaker machine models, especially those where the size of a memory cell is not unbounded. We showed the hardness of a large family of instance classes that are built by the dependency between the number of nodes and the treewidth. This raises the question whether better algorithms are possible if t ∈ ω(log n), or if even instances with this property remain hard.

References

[1] Stefan Arnborg, Derek G. Corneil, and Andrzej Proskurowski. “Complexity of Finding Embeddings in a k-Tree”. In: SIAM Journal on Algebraic Discrete Methods. doi: 10.1137/0608024. url: https://doi.org/10.1137/0608024.
[2] Thang Nguyen Bui et al. “Graph Bisection Algorithms with Good Average Case Behavior”. In: Combinatorica. issn: 0209-9683. doi: 10.1007/BF02579448. url: https://doi.org/10.1007/BF02579448.
[3] Marek Cygan et al. “On Problems Equivalent to (Min,+)-Convolution”. In: ACM Trans. Algorithms. issn: 1549-6325.
[4] Marek Cygan et al. “On problems equivalent to (min,+)-convolution”. In: (2017). url: http://arxiv.org/abs/1702.07669.
[5] Josep Díaz and Marcin Kamiński. “MAX-CUT and MAX-BISECTION are NP-hard on unit disk graphs”. In: Theoretical Computer Science. issn: 0304-3975. doi: 10.1016/j.tcs.2007.02.013.
[6] Josep Díaz and George B. Mertzios. “Minimum bisection is NP-hard on unit disk graphs”. In: Information and Computation 256 (2017), pp. 83–92. issn: 0890-5401. doi: 10.1016/j.ic.2017.04.010.
[7] Reinhard Diestel. “Graph Minors”. In: Graph Theory. Berlin, Heidelberg: Springer, 2017, pp. 347–391. isbn: 978-3-662-53622-3. url: https://doi.org/10.1007/978-3-662-53622-3_12.
[8] Eduard Eiben, Daniel Lokshtanov, and Amer E. Mouawad. “Bisection of Bounded Treewidth Graphs by Convolutions”. In: 27th Annual European Symposium on Algorithms (ESA 2019). Ed. by Michael A. Bender, Ola Svensson, and Grzegorz Herman. Vol. 144. Leibniz International Proceedings in Informatics (LIPIcs). Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2019, 42:1–42:11. isbn: 978-3-95977-124-5. url: http://drops.dagstuhl.de/opus/volltexte/2019/11163.
[9] F. Hadlock. “Finding a Maximum Cut of a Planar Graph in Polynomial Time”. In: SIAM Journal on Computing. doi: 10.1137/0204019. url: https://doi.org/10.1137/0204019.
[10] Tesshu Hanaka, Yasuaki Kobayashi, and Taiga Sone. “A (probably) optimal algorithm for Bisection on bounded-treewidth graphs”. In: (2020). url: http://arxiv.org/abs/2002.12706.
[11] Klaus Jansen et al. “Polynomial Time Approximation Schemes for MAX-BISECTION on Planar and Geometric Graphs”. In: SIAM J. Comput. url: https://doi.org/10.1137/S009753970139567X.
[12] Richard M. Karp. “Reducibility among Combinatorial Problems”. In: Complexity of Computer Computations. Ed. by Raymond E. Miller, James W. Thatcher, and Jean D. Bohlinger. Boston, MA: Springer US, 1972, pp. 85–103. isbn: 978-1-4684-2001-2. url: https://doi.org/10.1007/978-1-4684-2001-2_9.
[13] T. Kloks. “Treewidth, Computations and Approximations”. In: Lecture Notes in Computer Science. 1994.
[14] Ton Kloks, ed. Treewidth, Computations and Approximations. Vol. 842. Lecture Notes in Computer Science. Berlin/Heidelberg: Springer-Verlag, 1994. isbn: 9783540583561. url: http://link.springer.com/10.1007/BFb0045375.
[15] Daniel Lokshtanov, Dániel Marx, and Saket Saurabh. “Known Algorithms on Graphs of Bounded Treewidth are Probably Optimal”. In: (2010). url: http://arxiv.org/abs/1007.5450.
[16] Mechthild Stoer and Frank Wagner. “A Simple Min-Cut Algorithm”. In: J. ACM. issn: 0004-5411. url: https://doi.org/10.1145/263867.263872.
[17] Wei-Kuan Shih, Sun Wu, and Y. S. Kuo. “Unifying maximum cut and minimum cut of a planar graph”. In: IEEE Transactions on Computers. doi: 10.1109/12.53581.