An Effective Upperbound on Treewidth Using Partial Fill-in of Separators
Boi Faltings ∗ Martin Charles Golumbic ‡ September 9, 2019
Abstract
Partitioning a graph using graph separators, and particularly clique separators, is a well-known technique for decomposing a graph into smaller units which can be treated independently. It was previously known that the treewidth is bounded above by the size of the separator plus the treewidth of the disjoint components, a bound obtained by the heuristic of filling in all edges of the separator, making it into a clique.

In this paper, we present a new, tighter upper bound on the treewidth of a graph, obtained by only partially filling in the edges of a separator. In particular, the method completes just those pairs of separator vertices that are adjacent to a common component, and indicates a more effective heuristic than filling in the entire separator. We discuss the relevance of this result for combinatorial algorithms and give an example of how the tighter bound can be exploited in the domain of constraint satisfaction problems.
Keywords: treewidth, partial k-trees, graph separators

∗ Artificial Intelligence Laboratory (LIA), Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland. E-mail: boi.faltings@epfl.ch
‡ Caesarea Rothschild Institute and Department of Computer Science, University of Haifa, Mt. Carmel, Haifa 31905, Israel. E-mail: [email protected]

1 Introduction
Let G = (V, E) be an undirected graph. We denote by G_X = (X, E_X) the subgraph of G induced by X ⊆ V, where E_X = {(u, v) ∈ E | u, v ∈ X}.

A tree decomposition for a graph G is defined as a tree T whose nodes are labelled by subsets of V called "clusters" (or "bags") such that
(1) every vertex v ∈ V appears in at least one cluster,
(2) if (u, v) ∈ E, then u and v co-occur in some cluster, and
(3) for every v ∈ V, the set of nodes of T which include v in their cluster induces a connected subgraph (i.e., a subtree) of T, denoted T(v).

The width of a tree decomposition T is the size of the largest cluster minus 1, and is denoted by width(T). A given graph G may have many possible tree decompositions, including the trivial representation as a single node with cluster equal to V. The treewidth tw(G) of a graph G is defined to be the minimum width over all tree decompositions for G. Such a tree decomposition is called a minimum tree decomposition for G.

Remark 1
The treewidth of a tree equals 1, of a chordless cycle equals 2, of a clique on k vertices equals k − 1, and of a stable (independent) set equals zero. It is also well known that a chordal graph has a minimum tree decomposition in which each cluster is a maximal clique of the graph; thus, the treewidth of a chordal graph is the size of its largest clique minus 1.

The theory of treewidth, introduced by Robertson and Seymour [10], is a very rich topic in discrete mathematics, and has important algorithmic significance, since many NP-complete problems may be solved efficiently on graphs with bounded treewidth. The reader is referred to [1, 2, 8] for further treatment of the subject.

Partitioning a graph using graph separators, and particularly clique separators, is well known as a method to decompose a graph into smaller components which can be treated independently [4]. A previously known bound ([6]) for the treewidth of a graph G = (V, E) was based on identifying a separator S ⊂ V such that the graph G_{V\S}, obtained by deleting from G all vertices in S and their incident edges, is broken into components G_1, ..., G_k:

    tw(G) ≤ |S| + max_i [tw(G_i)]

This was obtained by the heuristic of filling in all edges of the separator, making it into a clique, so we call it the separator-as-clique bound. It is important not only for estimating the treewidth of a graph; decompositions that result in a low bound on treewidth also give rise to efficient algorithms for a variety of problems on graphs.

In Section 2, we present a new, tighter upper bound on the treewidth of a graph whose novelty is filling in fewer edges of the separator. Our method completes just those pairs of separator vertices that are adjacent to a common component, giving a lower treewidth of the augmented supergraph. We thus call it the separator-as-components bound. This is followed by an example in Section 3 to illustrate our method.
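The defining conditions (1)–(3) of a tree decomposition can be checked mechanically. The following Python sketch is our own illustration (the 4-cycle instance and all function names are assumptions, not from the paper):

```python
def is_tree_decomposition(vertices, edges, bags, tree_edges):
    """Check conditions (1)-(3) of a tree decomposition.

    bags: dict mapping each tree node to its cluster (a set of vertices).
    tree_edges: the edges of the tree T over the keys of bags.
    """
    # (1) every vertex appears in at least one cluster
    if any(all(v not in b for b in bags.values()) for v in vertices):
        return False
    # (2) both endpoints of every graph edge co-occur in some cluster
    if any(all(not (u in b and v in b) for b in bags.values()) for u, v in edges):
        return False
    # (3) for each vertex v, the tree nodes whose cluster contains v
    # must induce a connected subtree T(v)
    for v in vertices:
        nodes = {t for t, b in bags.items() if v in b}
        start = next(iter(nodes))
        reached, stack = {start}, [start]
        while stack:
            t = stack.pop()
            for a, b in tree_edges:
                for x, y in ((a, b), (b, a)):
                    if x == t and y in nodes and y not in reached:
                        reached.add(y)
                        stack.append(y)
        if reached != nodes:
            return False
    return True

def width(bags):
    """Size of the largest cluster minus one."""
    return max(len(b) for b in bags.values()) - 1

# A chordless 4-cycle a-b-c-d has treewidth 2; one witness decomposition:
V = {"a", "b", "c", "d"}
E = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
bags = {1: {"a", "b", "c"}, 2: {"a", "c", "d"}}
print(is_tree_decomposition(V, E, bags, [(1, 2)]), width(bags))  # True 2
```

The two-bag decomposition witnesses tw ≤ 2 for the 4-cycle, matching Remark 1.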
In Section 4, we conclude by discussing its application to solving constraint satisfaction problems combining search with dynamic programming, which was our motivation for studying the question of improving the bounds on treewidth.

2 The separator-as-components bound

We first recall the Helly property, which is satisfied by subtrees of a tree [7]. By definition, if (u, v) ∈ E then T(u) ∩ T(v) ≠ ∅. The Helly property for trees states that if a collection of subtrees of a tree pairwise intersect, then the intersection of the entire collection is nonempty. This immediately implies the following well-known (folklore) observation [5], which will be used below.
Lemma 1
Let T be a tree decomposition for G. If C is a clique of G, then there is a cluster X (labelling a node of T) such that C ⊆ X.

Let G = (V, E) be an undirected graph and let S ⊆ V be a subset of the vertices. We consider the connected components G_1, ..., G_t of G_{V\S}, i.e., the connected subgraphs obtained from G by deleting all vertices of S and their incident edges. We denote by V_i the vertices of G_i, that is, G_i = (V_i, E_{V_i}). Finally, let S_i ⊆ S denote the subset consisting of all vertices of S which have neighbors in G_i.

Define (x, y) to be a fill-in edge if (x, y) ∉ E and x, y ∈ S_i for some i, and let F be the set of all fill-in edges. Define the graph H = (V, E′) to be the supergraph of G, where E′ = E ∪ F. In other words, an edge is filled in between u, v ∈ S in E′ if there is a path in G from u to v using only intermediate vertices of some component G_i. Thus, each S_i becomes a clique in H_S, the subgraph of H induced by S.

The following is our new result:

Theorem 1  tw(G) ≤ max_i { tw(H_S), |S_i| + tw(G_i) }

Proof.
Let T_S be a minimum tree decomposition for the subgraph H_S, and let T_i be a minimum tree decomposition for G_i. We will now construct a tree decomposition T for G.

Since the set S_i forms a clique in H_S, by Lemma 1, there is a cluster X_i in T_S containing S_i. To form T, we augment the union of T_S and all the T_i by (i) adding the members of S_i to each cluster of T_i, and (ii) adding a new tree edge from the node x_i with label X_i to an arbitrary node v_i of T_i, for each i.

We now show that T is a tree decomposition for H and thus also for G. Condition (1) of the definition of tree decomposition is trivial, and condition (3) is proven as follows: each T(v) for v ∈ V \ S remains unchanged and is therefore a subtree of T. Also, each T(x) for x ∈ S is a subtree of T, since it consists of the union of its former subtree T_S(x) and, for each i in which x has neighbors in G_i, the entire tree T_i along with the new edge (v_i, x_i) connecting T_i with the node with label X_i.

We prove condition (2) in three cases.

Case 1: u, v ∈ V \ S. If (u, v) ∈ E, then u and v are in the same connected component, say G_j, and they appear together in some cluster at a node of T_j.

Case 2: u ∈ V \ S and v ∈ S. If (u, v) ∈ E, where u is in the component G_j, then v ∈ S_j, and they now appear together in some (in fact, in every) cluster of T_j where u appears.

Case 3: u, v ∈ S. If (u, v) ∈ E′, then (u, v) is an edge of H_S, so u and v co-occur in some cluster at a node of T_S, hence in T.

Thus, T is a tree decomposition for H and thus also for G.

It now remains to show that w = width(T) is at most max { tw(H_S), |S_i| + tw(G_i) | i = 1, ..., t }. We first observe that tw(G_{V\S}) = max { tw(G_i) | i = 1, ..., t }, since the G_i are disjoint graphs.

Let Y be the largest cluster in T, that is, w = |Y| −
1. If Y is the label of a node in T_S, then w = tw(H_S). Otherwise, Y is the new label of a node in T_j for some component G_j, that is, Y = S_j ∪ B, where B is the largest (original) cluster in T_j and tw(G_j) = |B| −
1. Therefore,

    w = |Y| − 1 ≤ |S_j| + |B| − 1 = |S_j| + tw(G_j),

which proves the claim. Q.E.D.

[Figure 1: Example graph (ten vertices v_1, ..., v_10)]
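The objects used in the construction, the attachment sets S_i and the fill-in set F, are directly computable. A small Python sketch (the path instance and the function name are our own illustration, not from the paper):

```python
def attachment_sets_and_fill_in(vertices, adj, S):
    """Return the sets S_i (separator vertices with a neighbour in
    component G_i of G - S) and the fill-in edges F (non-edges whose
    two endpoints lie in a common S_i)."""
    # connected components of G with the separator S deleted
    seen, comps = set(S), []
    for v in vertices:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            seen.add(u)
            stack.extend(w for w in adj[u] if w not in S)
        comps.append(comp)
    # S_i: separator vertices adjacent to component G_i
    attachments = [{s for s in S if adj[s] & comp} for comp in comps]
    # F: missing edges inside some S_i
    F = {
        (x, y)
        for Si in attachments
        for x in Si
        for y in Si
        if x < y and y not in adj[x]
    }
    return attachments, F

# Path 1-2-3-4-5 with separator S = {2, 4}: the middle component {3}
# sees both separator vertices, so (2, 4) is the single fill-in edge.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
atts, F = attachment_sets_and_fill_in([1, 2, 3, 4, 5], adj, {2, 4})
print(atts, F)  # [{2}, {2, 4}, {4}] {(2, 4)}
```

The outer components {1} and {5} each see only one separator vertex and so contribute no fill-in, which is exactly the situation the partial fill-in exploits.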
Corollary 1  tw(G) ≤ tw(H_S) + tw(G_{V\S}) + 1

Proof.
This follows since |S_i| ≤ |X_i| ≤ tw(H_S) + 1 for all i, and tw(G_{V\S}) = max_i { tw(G_i) }.

Remark 2
Our result can be seen as a strengthening of the notion of safe separators [3] and of w-cliques [6], where these authors fill in all pairs of vertices in S, making it a clique, and giving the weaker upper bound tw(G) ≤ |S| + max_i { tw(G_i) } = |S| + tw(G_{V\S}).

3 An illustrative example

Consider the example graph shown in Figure 1. It has a tree decomposition into the following cliques:

    C_1 = {v_1, v_2, v_4}
    C_2 = {v_2, v_3, v_4}
    C_3 = {v_1, v_5, v_6}
    C_4 = {v_2, v_7, v_8}
    C_5 = {v_3, v_9, v_10}

that are all of size 3, and the subgraph with vertices {v_1, v_2, v_4} has no tree decomposition into smaller cliques. Thus, its treewidth is 2. (In fact, it is a chordal graph, and C_1–C_5 a clique decomposition.)

To illustrate our method, and provide an example for the application described in Section 4, choose S = {v_1, v_2, v_3, v_4}, thus leaving three connected components G_1 = {v_5, v_6}, G_2 = {v_7, v_8} and G_3 = {v_9, v_10}. Each of these has a treewidth of 1.

Using the separator-as-clique bound, we upper-bound the treewidth of G as:

    tw(G) ≤ |S| + max_i { tw(G_i) } = 4 + 1 = 5

Using Theorem 1, the separator-as-components method obtains a tighter bound, as follows. Note that we have S_1 = {v_1}, S_2 = {v_2} and S_3 = {v_3}, that H_S = G_S since none of the G_i is attached to S via multiple vertices, and that tw(H_S) = 2. Now we have:

    tw(G) ≤ max_i { tw(H_S), |S_i| + tw(G_i) } = max {2, 2} = 2

which is exactly the treewidth of G.

To be fair, we should note that for the separator-as-clique bound, the best possible choice for S would have been S′ = {v_1, v_2, v_3}, thus leaving an additional disjoint component G_4 = {v_4} and giving a bound of |S′| + 1 = 4 instead of 5. Using our separator-as-components method, this separator would not give a bound that is as good, because the set S_4 of neighbours of the new component G_4 includes all 3 vertices in S′; thus |S_4| + tw(G_4) = 3 + 0 = 3, and the bound will be 3.
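The two bounds can also be verified computationally. The sketch below builds a 10-vertex chordal graph consistent with the quantities stated in this section; since the figure is not reproduced here, the concrete vertex labels and edges are our assumption. Treewidths of the tiny pieces are computed exactly by brute force:

```python
from itertools import permutations

EDGES = [  # an assumed 10-vertex chordal graph matching the description
    (1, 2), (1, 4), (2, 4),      # separator triangle {v1, v2, v4}
    (2, 3), (3, 4),              # second separator triangle {v2, v3, v4}
    (1, 5), (1, 6), (5, 6),      # component {v5, v6} attached to v1
    (2, 7), (2, 8), (7, 8),      # component {v7, v8} attached to v2
    (3, 9), (3, 10), (9, 10),    # component {v9, v10} attached to v3
]
V = range(1, 11)
adj = {v: set() for v in V}
for a, b in EDGES:
    adj[a].add(b)
    adj[b].add(a)

def tw(vs, adj):
    """Exact treewidth of the subgraph induced on vs, by brute force
    over elimination orderings (only feasible for very small vs)."""
    vs = list(vs)
    best = max(len(vs) - 1, 0)
    for order in permutations(vs):
        g = {v: adj[v] & set(vs) for v in vs}
        w = 0
        for v in order:
            nbrs = g.pop(v)
            w = max(w, len(nbrs))
            for x in nbrs:  # fill in among v's neighbours, drop v
                g[x] = (g[x] | nbrs) - {v, x}
        best = min(best, w)
    return best

def components(S):
    seen, out = set(S), []
    for v in V:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            seen.add(u)
            stack += [w for w in adj[u] if w not in S]
        out.append(comp)
    return out

S = {1, 2, 3, 4}
comps = components(S)
clique_bound = len(S) + max(tw(c, adj) for c in comps)  # |S| + max tw(G_i)

# partial fill-in: make each attachment set S_i a clique, then take H_S
fill = {v: set(adj[v]) for v in V}
atts = []
for c in comps:
    Si = {s for s in S if adj[s] & c}
    atts.append(Si)
    for x in Si:
        fill[x] |= Si - {x}
comp_bound = max(tw(S, fill),
                 max(len(Si) + tw(c, adj) for Si, c in zip(atts, comps)))

print(clique_bound, comp_bound)  # 5 2
```

The separator-as-clique bound evaluates to 5 and the separator-as-components bound to 2, matching the computation above.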
While this is still better than the separator-as-clique bound, it is a counterintuitive indication that the smaller separator S′ does not give the best decomposition. This fact is important in the application example below.

4 Application to Constraint Satisfaction Problems
Although this paper may be regarded as purely mathematical, it has its motivation in an important heuristic method for solving various problems that are commonly solved using search algorithms, including constraint satisfaction ([9]), satisfiability and Bayesian inference ([6]).

In search algorithms, there is a tradeoff between (1) the time complexity of searching for a solution, (2) the size of the memory (or cache) needed to store intermediate computations, and (3) for distributed implementations, the communication complexity of sending and sharing information between parts of the graph. Balancing these three parameters within the resources available is the basis of our motivation.

As an example, consider a constraint satisfaction problem (CSP) [11], where each of a set of variables X = {x_1, ..., x_n} has to be assigned a value in the corresponding domain from D = {d_1, ..., d_n}, such that for each of a set of constraints C = {c_1, ..., c_m}, the values assigned to the variables in the scope of the constraint are among its allowed tuples. When all constraints have a scope of one or two variables, the CSP can be represented as a graph whose vertices are the variables and whose edges are the constraints.

Constraint satisfaction problems are commonly solved using backtrack search algorithms that assign values to the variables one at a time and backtrack as soon as no value consistent with previous assignments can be found. These have a worst-case time complexity of O(|d|^n).

However, the efficiency of search can often be significantly improved by caching partial results. In particular, when the constraint graph has a small separator S that splits the graph into at least two components, one can record all value combinations of variables in the separator that admit a consistent assignment to the variables in one of the components, and then replace backtrack search through these variables by table lookup for the rest of the graph.
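The caching scheme just described can be sketched as follows. The instance at the bottom (a component {v5, v6} attached to separator vertex v1, with pairwise-difference constraints on the triangle) is a hypothetical stand-in for one component of Figure 1, not the paper's actual constraints:

```python
from itertools import product

def component_cache(sep_vars, comp_vars, domains, consistent):
    """Record which assignments to the attachment variables sep_vars
    admit a consistent extension to the component variables comp_vars.

    consistent(assignment) -> bool checks every constraint touching the
    component. Search through the rest of the graph can then replace
    the component by a lookup in the returned set."""
    cache = set()
    for sep_vals in product(*(domains[v] for v in sep_vars)):
        partial = dict(zip(sep_vars, sep_vals))
        for comp_vals in product(*(domains[v] for v in comp_vars)):
            if consistent({**partial, **dict(zip(comp_vars, comp_vals))}):
                cache.add(sep_vals)
                break  # one consistent extension is enough
    return cache

# Hypothetical component {v5, v6} attached to separator vertex v1,
# with pairwise-different constraints on the triangle v1-v5-v6.
domains = {"v1": [0, 1, 2], "v5": [0, 1, 2], "v6": [0, 1, 2]}
all_diff = lambda a: len(set(a.values())) == len(a)
cache = component_cache(["v1"], ["v5", "v6"], domains, all_diff)
print(sorted(cache))  # [(0,), (1,), (2,)]
```

Building the cache costs the component search once per attachment assignment; afterwards, the outer search consults the cache in constant time per lookup, which is the source of the complexity savings discussed next.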
Equivalently, one can use dynamic programming to determine which combinations of values for variables in the separator yield a consistent assignment. It has been shown [6] that the time complexity of such an algorithm is at least O(|d|^{|S| + max_i [tw(G_i)]}) and can be much less than O(|d|^n).

Let the example graph of Figure 1 represent a CSP with 10 variables with d possible values each, where the arcs correspond to arbitrary unstructured constraints. Backtrack search would require time on the order of O(|d|^10), but memory only linear in d. However, search with caching or dynamic programming can solve this problem in quartic time and linear space in d, using the separator S′ = {v_1, v_2, v_3}. It would search through all combinations of values for v_1, v_2 and v_3 (time complexity O(|d|^3)) and for each of them determine whether all of the remaining components G_1, G_2, G_3 and G_4 can be assigned a consistent value combination. For component G_1, this search takes O(|d|^2) time, but as the result only depends on the value of v_1, it can be cached, so the total time complexity is O(|d|^3) and the space complexity O(|d|). The same situation holds for components G_2 and G_3. For G_4, however, all combinations of v_1, v_2, v_3 and v_4 have to be considered, and the time complexity is O(|d|^4).
Thus, the total time complexity is O(|d|^4) and the memory O(|d|).

Intuitively, since the complexity of the best known algorithms for solving CSPs depends exponentially on the treewidth, a decomposition for which a smaller bound on the treewidth of the original graph can be proven has the potential to better preserve the minimal complexity of the original graph. Thus, it would have been better to use the decomposition pointed to by our Theorem.

We would pick the larger S = {v_1, v_2, v_3, v_4}, since it allows us to show a bound of tw(G) ≤ 2. For solving the problem, rather than searching over all combinations of values for variables in S, S would be decomposed again, using {v_2} as a separator that leaves {v_1, v_3, v_4}, whose induced subgraph has treewidth 1.

This shows how to solve the entire CSP in cubic time and linear space in the following steps:

1. first decomposition: remove S and collapse the remaining graph into vertices of S:
(a) for all values of v_1, test whether they admit a consistent combination of v_5 and v_6 (time complexity O(|d|^3), space complexity O(|d|));
(b) do the same for v_2 and v_7, v_8;
(c) do the same for v_3 and v_9, v_10.

2. second decomposition: remove v_2 and use search through all of its values to:
(a) determine and store values of v_4 such that there is a value of v_1 that is consistent with it and the current value of v_2 (time complexity O(|d|^3), space complexity O(|d|));
(b) do the same for v_4 and v_3;
(c) intersect the two caches for v_4 and determine if any of the remaining values is consistent with the current value of v_2; if yes, expand into a solution as below.

3. Select a consistent value for v_1 from its respective cache, and do the same for v_3.

4.
Use search to find combinations for v_5 and v_6 consistent with v_1 (time complexity O(|d|^2), space complexity O(1)), and do the same for v_7, v_8, v_2 and for v_9, v_10, v_3.

The reader may verify that this algorithm requires only linear space and cubic time in the domain size d, and is thus much better than the decomposition pointed to by earlier results.

The practical lesson afforded by our Theorem is that good heuristics for decomposing constraint satisfaction problems should look not for small separators, as current wisdom dictates, but for separators that have few connections with each remaining component, and then apply decompositions recursively.

As illustrated by this example, we thus believe that Theorem 1 can provide a useful supplementary heuristic for decomposing and solving combinatorial problems using graph separators, contributing to the growing literature surveyed in [4].

Acknowledgements.
The authors thank Hans Bodlaender for recommending that we highlight the stronger statement of the Theorem as well as the Corollary. This work was carried out when the second author was a visiting professor at the Artificial Intelligence Laboratory (LIA), Ecole Polytechnique Fédérale de Lausanne (EPFL).
References

[1] Hans L. Bodlaender, A tourist guide through treewidth. Acta Cybernetica 11 (1993), 1–21.

[2] Hans L. Bodlaender, Treewidth: Characterizations, applications, and computations. Lecture Notes in Computer Science, LNCS 4271 (2006), 1–14.

[3] Hans L. Bodlaender and Arie M.C.A. Koster, Safe separators for treewidth. Discrete Mathematics 306 (2006), 337–350.

[4] Hans L. Bodlaender and Arie M.C.A. Koster, Treewidth computations I: Upper bounds. Information and Computation (2009), in press.

[5] Hans L. Bodlaender and Rolf H. Möhring, The pathwidth and treewidth of cographs. SIAM J. Disc. Meth. 6 (1993), 181–188.

[6] Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence (UAI), Morgan Kaufmann, 2004, pp. 43–50.

[7] Martin Charles Golumbic, Algorithmic Graph Theory and Perfect Graphs. Academic Press, New York, 1980. Second edition, Annals of Discrete Mathematics 57, Elsevier, Amsterdam, 2004.

[8] Ton Kloks, Treewidth: Computations and Approximations. Lecture Notes in Computer Science, LNCS 842 (1994), 1–209.

[9] Adrian Petcu and Boi Faltings, MB-DPOP: A new memory-bounded algorithm for distributed optimization. Proc. 20th International Joint Conference on Artificial Intelligence (IJCAI-07), Hyderabad, India, Jan. 2007, pp. 1452–1457.

[10] Neil Robertson and Paul D. Seymour, Graph minors. II. Algorithmic aspects of tree-width. J. Algorithms 7 (1986), 309–322.