Fast Dynamic Programming on Graph Decompositions
Johan M. M. van Rooij, Hans L. Bodlaender, Erik Jan van Leeuwen, Peter Rossmanith, Martin Vatshelle
June 6, 2018
Abstract
In this paper, we consider three types of graph decompositions, namely tree decompositions, branch decompositions, and clique decompositions. We improve the running time of dynamic programming algorithms on these graph decompositions for a large number of problems as a function of the treewidth, branchwidth, or cliquewidth, respectively. Such algorithms have many practical and theoretical applications.

On tree decompositions of width k, we improve the running time for Dominating Set to O*(3^k). Hereafter, we give a generalisation of this result to [ρ, σ]-domination problems with finite or cofinite ρ and σ. For these problems, we give O*(s^k)-time algorithms. Here, s is the number of 'states' a vertex can have in a standard dynamic programming algorithm for such a problem. Furthermore, we give an O*(2^k)-time algorithm for counting the number of perfect matchings in a graph, and generalise this result to O*(2^k)-time algorithms for many clique covering, packing, and partitioning problems. On branch decompositions of width k, we give an O*(3^{(ω/2)k})-time algorithm for Dominating Set, an O*(2^{(ω/2)k})-time algorithm for counting the number of perfect matchings, and O*(s^{(ω/2)k})-time algorithms for [ρ, σ]-domination problems involving s states with finite or cofinite ρ and σ. Finally, on clique decompositions of width k, we give O*(4^k)-time algorithms for Dominating Set, Independent Dominating Set, and Total Dominating Set.

The main techniques used in this paper are a generalisation of fast subset convolution, as introduced by Björklund et al., now applied in the setting of graph decompositions and augmented such that multiple states and multiple ranks can be used. In the case of branch decompositions, we combine this approach with fast matrix multiplication, as suggested by Dorn. Recently, Lokshtanov et al. have shown that some of the algorithms obtained in this paper have running times in which the base of the exponent is optimal, unless the Strong Exponential-Time Hypothesis fails.
∗ Preliminary parts of this paper have appeared under the title 'Dynamic Programming on Tree Decompositions Using Generalised Fast Subset Convolution' at the 17th Annual European Symposium on Algorithms (ESA 2009), Lecture Notes in Computer Science 5757, pages 566-577, and under the title 'Faster Algorithms on Branch and Clique Decompositions' at the 35th International Symposium on Mathematical Foundations of Computer Science (MFCS 2010), Lecture Notes in Computer Science 6281, pages 174-185.
† Department of Information and Computing Sciences, Utrecht University, P.O. Box 80.089, NL-3508 TB Utrecht, The Netherlands, [email protected], [email protected]
‡ Department of Informatics, University of Bergen, P.O. Box 7803, NO-5020 Bergen, Norway, [email protected], [email protected]
§ Department of Computer Science, RWTH Aachen University, DE-52056 Aachen, Germany, [email protected]

Width parameters of graphs and their related graph decompositions are important concepts in the theory of graph algorithms. Many investigations show that problems that are NP-hard on general graphs become polynomial-time or even linear-time solvable when restricted to the class of graphs in which a given width parameter is bounded. However, the constant factors involved in the upper bound on the running times of such algorithms are often large and depend on the parameter. Therefore, it is often useful to find algorithms where these factors grow as slowly as possible as a function of the width parameter k.

In this paper, we consider such algorithms involving three prominent graph-width parameters and their related decompositions: treewidth and tree decompositions, branchwidth and branch decompositions, and cliquewidth and k-expressions or clique decompositions. These three graph-width parameters are probably the most commonly used ones in the literature. However, other parameters such as rankwidth [58] or booleanwidth [18] and their related decompositions also exist.

Most algorithms solving combinatorial problems using a graph-width parameter consist of two steps:

1. Find a graph decomposition of the input graph of small width.
2. Solve the problem by dynamic programming on this graph decomposition.

In this paper, we focus on the second of these steps and improve the running time of many known algorithms on all three discussed types of graph decompositions as a function of the width parameter. The results have both theoretical and practical applications, some of which we survey below.

We obtain our results by using variants of the covering product and the fast subset convolution algorithm [6] in conjunction with known techniques on these graph decompositions. These two algorithms have been used to speed up other dynamic programming algorithms before, but not in the setting of graph decompositions. Examples include algorithms for Steiner Tree [6, 57], graph motif problems [4], and graph recolouring problems [59]. An important aspect of our results is an implicit generalisation of the fast subset convolution algorithm that is able to use multiple states.
This contrasts with the set formulation in which the covering product and subset convolution are defined: this formulation is equivalent to using two states (in and out). Moreover, the fast subset convolution algorithm uses ranked Möbius transforms, while we obtain our results by using transformations that use multiple states and multiple ranks. It is interesting to note that the state-based convolution technique that we use is reminiscent of the technique used in Strassen's algorithm for fast matrix multiplication [65].

Some of our algorithms also use fast matrix multiplication to speed up dynamic programming, as introduced by Dorn [32]. To make this work efficiently, we introduce the use of asymmetric vertex states. We note that matrix multiplication has been used for quite some time as a basic tool for solving combinatorial problems. It has been used, for instance, in the All Pairs Shortest Paths problem [63], in recognising triangle-free graphs [49], and in computing graph determinants. One of the results of this paper is that (generalisations of) fast subset convolution and fast matrix multiplication can be combined to obtain faster dynamic programming algorithms for many optimisation problems.
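To make the central tool concrete, the following is a small sketch (our own illustration, not code from the paper) of the fast subset convolution algorithm of Björklund et al. [6]: ranked zeta transforms, pointwise products of transforms whose ranks sum correctly, and an inverse Möbius transform compute the subset convolution in O(2^n n^2) ring operations, instead of the O(3^n) operations of the direct sum.

```python
def zeta(f, n):
    """Zeta transform over the subset lattice: returns F with
    F[S] = sum of f[T] over all subsets T of S."""
    f = f[:]
    for i in range(n):
        for S in range(1 << n):
            if S >> i & 1:
                f[S] += f[S ^ (1 << i)]
    return f

def moebius(f, n):
    """Moebius transform, the inverse of the zeta transform."""
    f = f[:]
    for i in range(n):
        for S in range(1 << n):
            if S >> i & 1:
                f[S] -= f[S ^ (1 << i)]
    return f

def subset_convolution(f, g, n):
    """(f * g)(S) = sum over T subseteq S of f(T) * g(S \\ T),
    computed with ranked transforms in O(2^n n^2) ring operations."""
    pc = [bin(S).count("1") for S in range(1 << n)]
    # ranked zeta transforms: one transformed sequence per rank 0..n
    fz = [zeta([f[S] if pc[S] == r else 0 for S in range(1 << n)], n) for r in range(n + 1)]
    gz = [zeta([g[S] if pc[S] == r else 0 for S in range(1 << n)], n) for r in range(n + 1)]
    h = [0] * (1 << n)
    for r in range(n + 1):
        # combine transforms whose ranks sum to r, then invert the transform
        hr = moebius([sum(fz[i][S] * gz[r - i][S] for i in range(r + 1))
                      for S in range(1 << n)], n)
        for S in range(1 << n):
            if pc[S] == r:
                h[S] = hr[S]
    return h
```

For sets S of size r, only pairs of disjoint sets can have ranks summing to r, which is why restricting the ranks recovers the true convolution.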
Treewidth-Based Algorithms.
Tree-decomposition-based algorithms can be used to effectively solve combinatorial problems on graphs of small treewidth, both in theory and in practice. Practical algorithms exist for problems like partial constraint satisfaction [54]. Furthermore, tree-decomposition-based algorithms are used as subroutines in many areas such as approximation algorithms [31, 37], parameterised algorithms [28, 30, 56, 68], exponential-time algorithms [39, 62, 70], and subexponential-time algorithms [17, 28, 40].

Many NP-hard problems can be solved in polynomial time on graphs whose treewidth is bounded by a constant. If we assume that a graph G is given with a tree decomposition T of G of width k, then the running time of such an algorithm is typically polynomial in the size of G, but exponential in the treewidth k. Examples of such algorithms include many kinds of vertex partitioning problems (including the [ρ, σ]-domination problems) [67], edge colouring problems such as Chromatic Index [8], and other problems such as Steiner Tree [53].

Concerning the first step of the general two-step approach above, we note that finding a tree decomposition of minimum width is NP-hard [3]. For fixed k, one can find a tree decomposition of width at most k in linear time, if such a decomposition exists [9]. However, the constant factor involved in the running time of this algorithm is very high. On the other hand, tree decompositions of small width can be obtained efficiently for special graph classes [10], and there are also several good heuristics that often work well in practice [13].

Concerning the second step of this two-step approach, there are several recent results about the running time of algorithms on tree decompositions, with special consideration for the running time as a function of the width k of the tree decomposition. For several vertex partitioning problems, Telle and Proskurowski showed that there are algorithms that, given a graph with a tree decomposition of width k, solve these problems in O(c^k n) time [67], where c is a constant that depends only on the problem at hand. For Dominating Set, Alber and Niedermeier gave an improved algorithm that runs in O(4^k n) time [2]. Similar results are given in [1] for related problems: Independent Dominating Set, Total Dominating Set, Perfect Dominating Set, Perfect Code, Total Perfect Dominating Set, Red-Blue Dominating Set, and weighted versions of these problems.

If the input graph is planar, then other improvements are possible. Dorn showed that
Dominating Set on planar graphs given with a tree decomposition of width k can be solved in O*(3^k) time [34]; he also gave similar improvements for other problems. We obtain the same result without requiring planarity.

In this paper, we show that the number of dominating sets of each given size in a graph can be counted in O*(3^k) time. After some modifications, this leads to an algorithm for Dominating Set running in O*(3^k) time. We also show that one can count the number of perfect matchings in a graph in O*(2^k) time, and we generalise these results to the [ρ, σ]-domination problems, as defined in [67]. For these [ρ, σ]-domination problems, we show that they can be solved in O*(s^k) time, where s is the natural number of states required to represent partial solutions. Here, ρ and σ are subsets of the natural numbers, and each choice of these subsets defines a different combinatorial problem. The only restriction that we impose on these problems is that we require both ρ and σ to be either finite or cofinite. That such an assumption is necessary follows from Chappelle's recent result [20]: he shows that [ρ, σ]-domination problems are W[1]-hard when parameterised by the treewidth of the graph if σ is allowed to have arbitrarily large gaps between consecutive elements and ρ is cofinite. The problems to which our results apply include Strong Stable Set, Independent Dominating Set, Total Dominating Set, Total Perfect Dominating Set, Perfect Code, Induced p-Regular Subgraph, and many others. Our results also extend to other similar problems such as Red-Blue Dominating Set and Partition Into Two Total Dominating Sets.

Finally, we define families of problems that we call γ-clique covering, γ-clique packing, and γ-clique partitioning problems: these families generalise standard problems like Minimum Clique Partition in the same way as the [ρ, σ]-domination problems generalise Dominating Set. The resulting families of problems include Maximum Triangle Packing, Partition Into l-Cliques for fixed l, the problem of determining the minimum number of odd-size cliques required to cover G, and many others. For these γ-clique covering, packing, or partitioning problems, we give O*(2^k)-time algorithms, improving the straightforward O*(3^k)-time algorithms for these problems.

Branchwidth-Based Algorithms.
Branch decompositions are closely related to tree decompositions. Like tree decompositions, branch decompositions have been shown to be an effective tool for solving many combinatorial problems, with both theoretical and practical applications. They are used extensively in designing algorithms for planar graphs and for graphs excluding a fixed minor. In particular, most of the recent results aimed at obtaining faster exact or parameterised algorithms on these graphs rely on branch decompositions [32, 35, 40, 41]. Practical algorithms using branch decompositions include those for ring routing problems [21], and tour merging for the Travelling Salesman Problem [22].

Concerning the first step of the general two-step approach, we note that finding the branchwidth of a graph is NP-hard in general [64]. For fixed k, one can find a branch decomposition of width k in linear time, if such a decomposition exists, by combining the results from [9] and [15]. This is similar to tree decompositions, and the constant factors involved in the running time of this algorithm are very large. However, in contrast to tree decompositions, for which the complexity on planar graphs is unknown, there exists a polynomial-time algorithm that computes a branch decomposition of minimum width of a planar graph [64]. For general graphs, several useful heuristics exist [21, 22, 46].

Concerning the second step of the general two-step approach, Dorn has shown how to use fast matrix multiplication to speed up dynamic programming algorithms on branch decompositions [32]. Among others, he gave an O*(4^k)-time algorithm for the Dominating Set problem. On planar graphs, faster algorithms exist using so-called sphere-cut branch decompositions [36]. On these graphs, Dominating Set can be solved in O*(3^{(ω/2)k}) time, where ω is the smallest constant such that two n × n matrices can be multiplied in O(n^ω) time. Some of these results can be generalised to graphs that avoid a fixed minor [35]. We obtain the same results without imposing restrictions on the class of graphs to which our algorithms can be applied.

In this paper, we show that one can count the number of dominating sets of each given size in a graph in O*(3^{(ω/2)k}) time. We also show that one can count the number of perfect matchings in a graph in O*(2^{(ω/2)k}) time, and we show that the [ρ, σ]-domination problems with finite or cofinite ρ and σ can be solved in O*(s^{(ω/2)k}) time, where s is again the natural number of states required to represent partial solutions.

Cliquewidth-Based Algorithms.
The notion of cliquewidth was first studied by Courcelle et al. [25]. The graph decomposition associated with cliquewidth is a k-expression, which is sometimes also called a clique decomposition. Similar to treewidth and branchwidth, many well-known problems can be solved in polynomial time on graphs whose cliquewidth is bounded by a constant [26].

Whereas the treewidth and branchwidth of any graph are closely related, its cliquewidth can be very different from both of them. For example, the treewidth of the complete graph on n vertices is equal to n − 1, while its cliquewidth is equal to 2. However, the cliquewidth of a graph is always bounded by a function of its treewidth [27]. This makes cliquewidth an interesting graph parameter to consider on graphs where the treewidth or branchwidth is too high for obtaining efficient algorithms.

Concerning the first step of the general two-step approach, we note that, as for the other two parameters, computing the cliquewidth of general graphs is NP-hard [38]. However, graphs of cliquewidth 1, 2, or 3 can be recognised in polynomial time [24]. For k ≥ 4, there is a fixed-parameter-tractable algorithm that, given a graph of cliquewidth k, outputs a (2^{k+1})-expression.

Concerning the second step, the first singly-exponential-time algorithm for Dominating Set on clique decompositions of width k is an O*(16^k)-time algorithm by Kobler and Rotics [51]. The previously fastest algorithm for this problem has a running time of O*(8^k), obtained by transforming the problem to a problem on boolean decompositions [18]. In this paper, we present a direct algorithm that runs in O*(4^k) time. We also show that one can count the number of dominating sets of each given size at the cost of an extra polynomial factor in the running time. Furthermore, we show that one can solve Independent Dominating Set in O*(4^k) time and Total Dominating Set in O*(4^k) time.

We note that there are other width parameters of graphs that potentially have lower values than cliquewidth, for example rankwidth (see [58]) and booleanwidth (see [18]). These width parameters are related, since a problem is fixed-parameter tractable parameterised by cliquewidth if and only if it is fixed-parameter tractable parameterised by rankwidth or booleanwidth [18, 58]. However, for many problems the best known running times are often much better as a function of the cliquewidth than as a function of the rankwidth or booleanwidth. For example, Dominating Set can be solved on rank decompositions of width k in O*(2^{O(k^2)}) time [19, 44] and on boolean decompositions of width k in O*(8^k) time [18].

Optimality, Polynomial Factors.
We note that our results attain, or are very close to, intuitive natural lower bounds for the problems considered, namely a polynomial times the amount of space used by any dynamic programming algorithm for these problems on graph decompositions. Similarly, it makes sense to think of the number of states necessary to represent partial solutions as the best possible base of the exponent in the running time: this equals the space requirements. Currently, this is O*(3^k) space for Dominating Set on tree decompositions and branch decompositions and O*(4^k) space on clique decompositions.

Very recently, this intuition has been strengthened by a result of Lokshtanov et al. [55]. They prove that it is impossible to improve the exponential part of the running time for a number of tree-decomposition-based algorithms that we present in this paper, unless the Strong Exponential-Time Hypothesis fails. That is, unless there exists an algorithm for the general Satisfiability problem running in O((2 − ε)^n) time, for some ε > 0. In particular, this holds for our tree-decomposition-based algorithms for Dominating Set and Partition Into Triangles.

We see that our algorithms on tree decompositions and clique decompositions all attain this intuitive lower bound. On branch decompositions, we are very close: when the number of states that one would naturally use to represent partial solutions equals s, our algorithms run in O*(s^{(ω/2)k}) time, where ω/2 < 1.19. If ω = 2, which could be the true value of ω, our algorithms do attain this space bound.

Because of these seemingly-optimal exponential factors in the running times of our algorithms, we spend quite some effort on making the polynomial factors involved as small as possible. In order to improve these polynomial factors, we need to distinguish between different problems based on a technical property, for each type of graph decomposition, that we call the de Fluiter property. This property is related to the concept of finite integer index [16, 29].

Considering the polynomial factors of the running times of our algorithms sometimes leads to seemingly strange situations when the matrix multiplication constant is involved. To see this, notice that ω is defined as the smallest constant such that two n × n matrices can be multiplied in O(n^ω) time. Consequently, any polylogarithmic factor in the running time of the corresponding matrix-multiplication algorithm disappears in an infinitesimal increase of ω. These polylogarithmic factors are additional polynomial factors in the running times of our algorithms on branch decompositions. In our analyses, we pay no extra attention to this, and we only explicitly give the polynomial factors involved that are not related to the time required to multiply matrices.

Also, because many of our algorithms use numbers that require more than a constant number of bits to represent (often n-bit numbers are involved), the time and space required to represent these numbers and to perform arithmetic operations on them affect the polynomial factors in the running times of our algorithms. We will always include these factors and highlight them using a special notation (i₊(n) and i×(n)).

Model of Computation.
In this paper, we use the Random Access Machine (RAM) model with O(k)-bit word size [42] for the analysis of our algorithms. In this model, memory access can be performed in constant time for memory of size O(c^k), for any constant c. We consider addition and multiplication operations on O(k)-bit numbers to be unit-time operations. For an overview of this model, see for example [45].

We use this computational model because we do not want the table look-up operations to influence the polynomial factors of the running time. Since the tables have size O*(s^k), for a problem-specific integer s ≥ 2, these operations are constant-time operations in this model.
Paper Organisation
This paper is organised as follows. We start with some preliminaries in Section 2. In Section 3, we present our results on tree decompositions. This is followed by our results on branch decompositions in Section 4 and clique decompositions in Section 5. To conclude, we briefly discuss the relation between the de Fluiter properties and finite integer index in Section 6. Finally, we give some concluding remarks in Section 7.
Let G = (V, E) be an n-vertex graph with m edges. We denote the open neighbourhood of a vertex v by N(v) and the closed neighbourhood of v by N[v], i.e., N(v) = { u ∈ V | {u, v} ∈ E } and N[v] = N(v) ∪ {v}. For a vertex subset U ⊆ V, we denote by G[U] the subgraph induced by U, i.e., G[U] = (U, E ∩ (U × U)). We denote the powerset of a set S by 2^S.

For a decomposition tree T, we often identify T with the set of nodes in T, and we write E(T) for the edges of T. We often associate a table with each node or each edge in a decomposition tree T. Such a table A can be seen as a function, and we write |A| for the size of A, that is, the total space required to store all entries of A.

We denote the time required to add and multiply n-bit numbers by i₊(n) and i×(n), respectively. Currently, i×(n) = n log(n) 2^{O(log*(n))} due to Fürer's algorithm [43], and i₊(n) = O(n). Note that i×(k) = i₊(k) = O(k) due to the chosen model of computation.

A dominating set in a graph G is a set of vertices D ⊆ V such that for every vertex v ∈ V \ D there exists a vertex d ∈ D with {v, d} ∈ E, i.e., ∪_{v∈D} N[v] = V. A dominating set D in G is a minimum dominating set if it is of minimum cardinality among all dominating sets in G. The classical NP-hard problem Dominating Set asks for the size of a minimum dominating set in G. Given a (partial) solution D of Dominating Set, we say that a vertex d ∈ D dominates a vertex v if v ∈ N[d], and that a vertex v is undominated if N[v] ∩ D = ∅. Besides the standard minimisation version of the problem, we also consider counting the number of minimum dominating sets, and counting the number of dominating sets of each given size.

A matching in G is a set of edges M ⊆ E such that no two edges are incident to the same vertex. A vertex that is an endpoint of an edge in M is said to be matched to the other endpoint of this edge. A perfect matching is a matching in which every vertex v ∈ V is matched. Counting the number of perfect matchings in a graph is a classical #P-complete problem [69].

A [ρ, σ]-dominating set is a generalisation of a dominating set introduced by Telle in [66, 67].

Definition 2.1 ([ρ, σ]-dominating set) Let ρ, σ ⊆ N. A [ρ, σ]-dominating set in a graph G is a subset D ⊆ V such that:
• for every v ∈ V \ D: |N(v) ∩ D| ∈ ρ;
• for every v ∈ D: |N(v) ∩ D| ∈ σ.

The [ρ, σ]-domination problems are the computational problems of finding [ρ, σ]-dominating sets; see Table 1. Of these problems, we consider several variants: the [ρ, σ]-existence problems ask whether a [ρ, σ]-dominating set exists in a graph G; the [ρ, σ]-minimisation and [ρ, σ]-maximisation problems ask for the minimum or maximum cardinality of a [ρ, σ]-dominating set in a graph G; and the [ρ, σ]-counting problems ask for the number of [ρ, σ]-dominating sets in a graph G. In a [ρ, σ]-counting problem, we sometimes restrict ourselves to counting [ρ, σ]-dominating sets of minimum size, maximum size, or of each given size. Throughout this paper, we assume that ρ and σ are either finite or cofinite.

ρ                    σ                    Standard problem description
{0, 1, . . .}        {0}                  Independent Set
{1, 2, . . .}        {0, 1, . . .}        Dominating Set
{0, 1}               {0}                  Strong Stable Set/2-Packing/Distance-2 Independent Set
{1}                  {0}                  Perfect Code/Efficient Dominating Set
{1, 2, . . .}        {0}                  Independent Dominating Set
{1}                  {0, 1, . . .}        Perfect Dominating Set
{1, 2, . . .}        {1, 2, . . .}        Total Dominating Set
{1}                  {1}                  Total Perfect Dominating Set
{0, 1}               {0, 1, . . .}        Nearly Perfect Set
{0, 1}               {0, 1}               Total Nearly Perfect Set
{1}                  {0, 1}               Weakly Perfect Dominating Set
{0, 1, . . .}        {0, 1, . . . , p}    Induced Bounded Degree Subgraph
{p, p + 1, . . .}    {0, 1, . . .}        p-Dominating Set
{0, 1, . . .}        {p}                  Induced p-Regular Subgraph

Table 1: [ρ, σ]-domination problems (taken from [66, 67]).

Another type of problems we consider are clique covering, packing, and partitioning problems.
Because we want to give general results applying to many different problems, we define a class of problems of our own: we define the notions of γ-clique covering, γ-clique packing, and γ-clique partitioning problems. We start by defining the γ-clique problems and note that their definitions somewhat resemble the definition of the [ρ, σ]-domination problems.

Definition 2.2 (γ-clique covering, packing, and partitioning) Let γ ⊆ N \ {0}, let G be a graph, and let C be a collection of cliques from G such that the size of every clique in C is contained in γ. We define the following notions:
• C is a γ-clique cover of G if C covers the vertices of G, i.e., ∪_{C∈C} C = V.
• C is a γ-clique packing of G if the cliques are disjoint, i.e., for any two distinct C₁, C₂ ∈ C: C₁ ∩ C₂ = ∅.
• C is a γ-clique partitioning of G if it is both a γ-clique cover and a γ-clique packing.

The corresponding computational problems are defined in the following way. The γ-clique covering problem asks for the cardinality of the smallest γ-clique cover. The γ-clique packing problem asks for the cardinality of the largest γ-clique packing. The γ-clique partitioning problem asks whether a γ-clique partitioning exists. For these problems, we also consider their minimisation, maximisation, and counting variants. See Table 2 for some concrete example problems.

γ                      problem type                   Standard problem description
{1, 2, . . .}          partitioning, minimisation     Minimum clique partition
{2}                    partitioning, counting         Count perfect matchings
{3}                    covering                       Minimum triangle cover of vertices
{3}                    packing                        Maximum triangle packing
{3}                    partitioning                   Partition into triangles
{p}                    partitioning                   Partition into p-cliques
{1, 3, 5, 7, . . .}    covering                       Minimum cover by odd-cliques

Table 2: γ-clique covering, packing and partitioning problems.

We note that clique covering problems in the literature often ask to cover all the edges of a graph: here, we cover only the vertices. Throughout this paper, we assume that γ is decidable in polynomial time, that is, for every j ∈ N we can decide in time polynomial in j whether j ∈ γ.

We consider dynamic programming algorithms on three different kinds of graph decompositions, namely tree decompositions, branch decompositions, and clique decompositions.
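As an illustration of Definition 2.2 (our own sketch, not from the paper; it assumes γ is given as a finite set of allowed clique sizes), the γ-clique partitioning question can be decided by exhaustive search on small graphs:

```python
from itertools import combinations

def is_clique(G, S):
    """G maps each vertex to its open neighbourhood; S is a vertex set."""
    return all(v in G[u] for u, v in combinations(sorted(S), 2))

def has_gamma_clique_partition(G, gamma, remaining=None):
    """Decide whether G has a gamma-clique partitioning (Definition 2.2):
    vertex-disjoint cliques, all of whose sizes lie in gamma, covering
    every vertex. Exhaustive search; gamma must be a finite set here."""
    if remaining is None:
        remaining = frozenset(G)
    if not remaining:
        return True
    v = min(remaining)                      # v lies in exactly one clique
    others = sorted(remaining - {v})
    for size in sorted(gamma):
        for extra in combinations(others, size - 1):
            C = frozenset((v, *extra))
            if is_clique(G, C) and has_gamma_clique_partition(G, gamma, remaining - C):
                return True
    return False
```

With γ = {3} this is Partition Into Triangles, and with γ = {2} it asks for a perfect matching, matching the rows of Table 2.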
The notions of a tree decomposition and treewidth were introduced by Robertson and Seymour [60] and measure the tree-likeness of a graph.
Definition 2.3 (tree decomposition) A tree decomposition of a graph G consists of a tree T in which each node x ∈ T has an associated set of vertices X_x ⊆ V (called a bag) such that ∪_{x∈T} X_x = V and the following properties hold:
1. for each {u, v} ∈ E, there exists a node x ∈ T such that {u, v} ⊆ X_x;
2. if v ∈ X_x and v ∈ X_y, then v ∈ X_z for all nodes z on the path from node x to node y in T.

The width tw(T) of a tree decomposition T is the size of the largest bag of T minus one. The treewidth of a graph G is the minimum width over all possible tree decompositions of G. We note that the minus one in the definition exists to set the treewidth of trees to one. In this paper, we will always assume that tree decompositions of the appropriate width are given.

Dynamic programming algorithms on tree decompositions are often presented on nice tree decompositions, which were introduced by Kloks [50]. We give a slightly different definition of a nice tree decomposition.

Definition 2.4 (nice tree decomposition) A nice tree decomposition is a tree decomposition with one special node z called the root with X_z = ∅ and in which each node is of one of the following types:
1. Leaf node: a leaf x of T with X_x = {v} for some vertex v ∈ V.
2. Introduce node: an internal node x of T with one child node y; this type of node has X_x = X_y ∪ {v}, for some v ∉ X_y. The node is said to introduce the vertex v.
3. Forget node: an internal node x of T with one child node y; this type of node has X_x = X_y \ {v}, for some v ∈ X_y. The node is said to forget the vertex v.
4. Join node: an internal node x with two child nodes l and r; this type of node has X_x = X_l = X_r.

We note that this definition is slightly different from the usual one. In our definition, we have the extra requirements that a bag X_x associated with a leaf x of T consists of a single vertex v (X_x = {v}), and that the bag X_z associated with the root node z is empty (X_z = ∅). Given a tree decomposition consisting of O(n) nodes, a nice tree decomposition of equal width and also consisting of O(n) nodes can be found in O(n) time [50]. By adding a series of forget nodes above the old root, and by adding a series of introduce nodes below an old leaf node if its associated bag contains more than one vertex, we can easily modify any nice tree decomposition to satisfy our extra requirements within the same running time.

By fixing the root of T, we associate with each node x in a tree decomposition T a vertex set V_x ⊆ V: a vertex v belongs to V_x if and only if there exists a node y with v ∈ X_y such that either y = x or y is a descendant of x in T. Furthermore, we associate with each node x of T the induced subgraph G_x = G[V_x] of G. That is, G_x is the following graph:

G_x = G[ ∪ { X_y | y = x or y is a descendant of x in T } ]

For an overview of tree decompositions and dynamic programming on tree decompositions, see [12, 48].
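To illustrate the kind of computation this paper speeds up, consider a join node when counting dominating sets. Each bag vertex has one of three states: in the set D, not in D but dominated, or not in D and undominated. A naive join pairs the two child tables entry by entry; the approach of Section 3 instead changes to a state representation in which the join becomes a pointwise product. The sketch below is our own reconstruction of that state-change idea, not the paper's pseudocode:

```python
IN, DOM, UNDOM = 0, 1, 2    # states: in D / out and dominated / out and undominated

def naive_join(A_l, A_r, t):
    """Brute-force join at a join node with bag size t. Tables are lists of
    length 3**t, indexed by base-3 encodings of state tuples, counting
    partial solutions. An out-vertex is dominated in the parent iff it is
    dominated in at least one child. Inspects all pairs of table entries."""
    A = [0] * 3**t
    for jl in range(3**t):
        for jr in range(3**t):
            j, ok, a, b = 0, True, jl, jr
            for i in range(t):
                sl, sr, a, b = a % 3, b % 3, a // 3, b // 3
                if (sl == IN) != (sr == IN):      # membership in D must agree
                    ok = False
                    break
                s = IN if sl == IN else (DOM if DOM in (sl, sr) else UNDOM)
                j += s * 3**i
            if ok:
                A[j] += A_l[jl] * A_r[jr]
    return A

def state_transform(A, t, sign=1):
    """Coordinate-wise change of states: the DOM entry becomes DOM + UNDOM,
    i.e. 'out, dominated or not'. With sign=-1 this inverts itself."""
    A = A[:]
    for i in range(t):
        w = 3**i
        for j in range(3**t):
            if j // w % 3 == DOM:
                A[j] += sign * A[j + w]           # the UNDOM entry sits at j + w
    return A

def fast_join(A_l, A_r, t):
    """Join as a pointwise product in the transformed state space: O(3^t)
    multiplications instead of pairing table entries."""
    B = [x * y for x, y in zip(state_transform(A_l, t), state_transform(A_r, t))]
    return state_transform(B, t, sign=-1)
```

Since the transform and its inverse are coordinate-wise and linear, `fast_join` touches each of the 3^t table entries only a polynomial number of times, which is the intuition behind the O*(3^k) bound for the join.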
Branch decompositions are related to tree decompositions and also originate from the series of papers on graph minors by Robertson and Seymour [61].
Definition 2.5 (branch decomposition) A branch decomposition of a graph G is a tree T in which each internal node has degree three and in which each leaf x of T has an assigned edge e_x ∈ E such that this assignment is a bijection between the leaves of T and the edges E of G.

If we remove any edge e from a branch decomposition T of G, then this cuts T into two subtrees T₁ and T₂. In this way, the edge e ∈ E(T) partitions the edges of G into two sets E₁, E₂, where E_i contains exactly those edges in the leaves of subtree T_i. The middle set X_e associated with the edge e ∈ E(T) is defined to be the set of vertices X_e ⊆ V that are both an endpoint of an edge in the edge partition E₁ and an endpoint of an edge in the edge partition E₂, where E₁ and E₂ are associated with e. That is, if V_i is the set of endpoints of the edges in E_i, then X_e = V₁ ∩ V₂.

The width bw(T) of a branch decomposition T is the size of the largest middle set associated with the edges of T. The branchwidth bw(G) of a graph G is the minimum width over all possible branch decompositions of G. In this paper, we always assume that a branch decomposition of the appropriate width is given.

Observe that vertices v of degree one in G are not in any middle set of a branch decomposition T of G. Let u be the neighbour of such a vertex v. We include the vertex v in the middle set of the edge e of T incident to the leaf of T that contains {u, v}. This raises the branchwidth to max{2, bw(G)}. Throughout this paper, we ignore this technicality.

The treewidth tw(G) and branchwidth bw(G) of any graph are related in the following way:

Proposition 2.6 ([61])
For any graph G with branchwidth bw(G) ≥ 2:

bw(G) ≤ tw(G) + 1 ≤ ⌊(3/2) · bw(G)⌋

To perform dynamic programming on a branch decomposition T, we need T to be rooted. To create a root, we choose any edge e ∈ E(T) and subdivide it, creating edges e_1 and e_2 and a new node y. Next, we create another new node z, which will be our root, and add it together with the new edge {y, z} to T. The middle sets associated with the edges created by the subdivision are set to X_e, i.e., X_{e_1} = X_{e_2} = X_e. Furthermore, the middle set of the new edge {y, z} is the empty set: X_{y,z} = ∅.

We use the following terminology on the edges in a branch decomposition T, giving similar names to edges as we would usually do to vertices. We call any edge of T that is incident to a leaf but not the root a leaf edge. Any other edge is called an internal edge. Let x be the lower endpoint of an internal edge e of T and let l, r be the other two edges incident to x. We call the edges l and r the child edges of e.

Definition 2.7 (partitioning of middle sets)
For a branch decomposition T, let e ∈ E(T) be an edge not incident to a leaf with left child l ∈ E(T) and right child r ∈ E(T). We define the following partitioning of X_e ∪ X_l ∪ X_r:

1. The intersection vertices: I = X_e ∩ X_l ∩ X_r.
2. The forget vertices: F = (X_l ∩ X_r) \ I.
3. The vertices passed from the left: L = (X_e ∩ X_l) \ I.
4. The vertices passed from the right: R = (X_e ∩ X_r) \ I.

Notice that this is a partitioning because any vertex in at least one of the sets X_e, X_l, X_r must be in at least two of them by definition of a middle set. Because each middle set has size at most k, the partitioning satisfies the properties:

|I| + |L| + |R| ≤ k    |I| + |L| + |F| ≤ k    |I| + |R| + |F| ≤ k

We associate with each edge e ∈ E(T) of a branch decomposition T the induced subgraph G_e = G[V_e] of G. A vertex v ∈ V belongs to V_e in this definition if and only if there is a middle set X_f with f = e or f below e in T with v ∈ X_f. That is, v is in V_e if and only if v is an endpoint of an edge associated with a leaf of T that is below e in T, i.e.:

G_e = G[ ∪ { X_f | f = e or f is below e in T } ]

For an overview of branch decomposition based techniques, see [48].

k-Expressions and Cliquewidth

Another notion related to the decomposition of graphs is cliquewidth, introduced by Courcelle et al. [25].
Definition 2.8 (k-expression) A k-expression is an expression combining any number of the following four operations on labelled graphs with labels {1, 2, . . . , k}:

1. create a new graph: create a new graph with one vertex having any label,
2. relabel: relabel all vertices with label i to j (i ≠ j),
3. add edges: connect all vertices with label i to all vertices with label j (i ≠ j),
4. join graphs: take the disjoint union of two labelled graphs.

The cliquewidth cw(G) of a graph G is defined to be the minimum k for which there exists a k-expression that evaluates to a graph isomorphic to G.

The definition of a k-expression can also be turned into a rooted decomposition tree. In this decomposition tree T, leaves of the tree T correspond to the operations that create new graphs, effectively creating the vertices of G, and internal vertices of T correspond to one of the other three operations described above. We call this tree a clique decomposition of width k. In this paper, we always assume that a clique decomposition of the appropriate width is given.

In this paper, we also assume that any k-expression does not contain superfluous operations, e.g., a k-expression does not apply the operation to add edges between vertices with labels i and j twice in a row without first changing the sets of vertices with the labels i and j, and it does not relabel vertices with a given label or add edges between vertices with a given label if the set of vertices with such a label is empty. Under these conditions, it is not hard to see that any k-expression consists of at most O(n) join operations and O(nk) other operations.

More information on solving problems on graphs of bounded cliquewidth can be found in [26].

2.3 Fast Algorithms to Speed Up Dynamic Programming

In this paper, we will use fast algorithms for two standard problems as subroutines to speed up dynamic programming. These are fast multiplication of matrices, and fast subset convolution.
Fast Matrix Multiplication.
In this paper, we let ω be the smallest constant such that two n × n matrices can be multiplied in O(n^ω) time; that is, ω is the matrix multiplication constant. Currently, ω < 2.376 due to the algorithm by Coppersmith and Winograd [23].

For multiplying an (n × p) matrix A and a (p × n) matrix B, we differentiate between p ≤ n and p > n. Under the assumption that ω = 2.376, an O(n^{1.85} p^{0.54})-time algorithm is known if p ≤ n [23]. Otherwise, the matrices can be multiplied in O((p/n) · n^ω) = O(p n^{ω−1}) time by matrix splitting: split the matrices A and B into p/n many n × n matrices A_1, . . . , A_{p/n} and B_1, . . . , B_{p/n}, multiply each of the p/n pairs A_i × B_i, and sum up the results.

Fast Subset Convolution.
Given a set U with |U| = k and two functions f, g : 2^U → Z, their subset convolution (f ∗ g) is defined as follows:

(f ∗ g)(S) = Σ_{X ⊆ S} f(X) g(S \ X)

The fast subset convolution algorithm by Björklund et al. can compute this convolution using O(2^k k^2) arithmetic operations [6].

Similarly, Björklund et al. define the covering product (f ∗_c g) and the packing product (f ∗_p g) of f and g in the following way:

(f ∗_c g)(S) = Σ_{X,Y ⊆ S, X ∪ Y = S} f(X) g(Y)

(f ∗_p g)(S) = Σ_{X,Y ⊆ S, X ∩ Y = ∅} f(X) g(Y)

These products can be computed using O(2^k k) arithmetic operations [6].

In this paper, we will not directly use the algorithms of Björklund et al. as subroutines. Instead, we present their algorithms based on what we will call state changes. The result is exactly the same as using the algorithms by Björklund et al. as subroutines. We choose to present our results in this way because it allows us to easily generalise the fast subset convolution algorithm to a more complex setting than functions with domain 2^U for some set U.

Algorithms solving
NP-hard problems in polynomial time on graphs of bounded treewidth are often dynamic programming algorithms of the following form. The tree decomposition T is traversed in a bottom-up manner. For each node x ∈ T visited, the algorithm constructs a table with partial solutions on the subgraph G_x, that is, the induced subgraph on all vertices that are in a bag X_y where y = x or y is a descendant of x in T. Let an extension of such a partial solution be a solution on G that contains the partial solution on G_x, and let two such partial solutions P_1, P_2 have the same characteristic if any extension of P_1 is also an extension of P_2 and vice versa. The table for a node x ∈ T does not store all possible partial solutions on G_x: it stores a set of solutions such that it contains exactly one partial solution for each possible characteristic. While traversing the tree T, the table for a node x ∈ T is computed using the tables that had been constructed for the children of x in T.

This type of algorithm typically has a running time of the form O(f(k) poly(n)) or even O(f(k) n), for some function f that grows at least exponentially. This is because the size of the computed tables often is (at least) exponential in the treewidth k, but polynomial (or even constant) in the size of the graph G. See Proposition 3.1 for an example algorithm.

In this section, we improve the exponential part of the running time for many dynamic programming algorithms on tree decompositions for a large class of problems. When the number of partial solutions of different characteristics stored in the table is O*(s^k), previous algorithms typically run in time O*(r^k) for some r > s. This is because it is hard for these algorithms to compute a new table for a node in T with multiple children. In this case, the algorithm often needs to inspect exponentially many combinations of partial solutions from its children per entry of the new table. We will show that algorithms with a running time of O*(s^k) exist for many problems.

1    this vertex is in the dominating set.
0_1  this vertex is not in the dominating set and has already been dominated.
0_0  this vertex is not in the dominating set and has not yet been dominated.
0_?  this vertex is not in the dominating set and may or may not be dominated.

Table 3: Vertex states for the Dominating Set problem.

This section is organised as follows. We start by setting up the framework that we use for dynamic programming on tree decompositions by giving a simple algorithm in Section 3.1. Here, we also define the de Fluiter property for treewidth. Then, we give our results on Dominating Set in Section 3.2, our results on counting perfect matchings in Section 3.3, our results on [ρ, σ]-domination problems in Section 3.4, and finally our results on the γ-clique covering, packing, and partitioning problems in Section 3.5.

We will now give a simple dynamic programming algorithm for the
Dominating Set problem. This algorithm follows from standard techniques for treewidth-based algorithms, and we will give faster algorithms later.
Proposition 3.1
There is an algorithm that, given a tree decomposition of a graph G of width k, computes the size of a minimum dominating set in G in O(n 5^k i_+(log(n))) time.

Proof:
First, we construct a nice tree decomposition T of G of width k from the given tree decomposition in O(n) time.

Similar to Telle and Proskurowski [67], we introduce vertex states 1, 0_1, and 0_0 that characterise the 'state' of a vertex with respect to a vertex set D that is a partial solution of the Dominating Set problem: v has state 1 if v ∈ D; v has state 0_1 if v ∉ D but v is dominated by D, i.e., there is a d ∈ D with {v, d} ∈ E; and, v has state 0_0 if v ∉ D and v is not dominated by D; see also Table 3.

For each node x in the nice tree decomposition T, we consider partial solutions D ⊆ V_x such that all vertices in V_x \ X_x are dominated by D. We characterise these sets D by the states of the vertices in X_x and the size of D. More precisely, we will compute a table A_x with an entry A_x(c) ∈ {0, 1, . . . , n} ∪ {∞} for each c ∈ {1, 0_1, 0_0}^{|X_x|}. We call c ∈ {1, 0_1, 0_0}^{|X_x|} a colouring of the vertices in X_x. A table entry A_x(c) represents the size of the partial solution D of Dominating Set in the induced subgraph G_x associated with the node x of T that satisfies the requirements defined by the states in the colouring c, or infinity if no such set exists. That is, the table entry gives the size of the smallest partial solution D in G_x that contains all vertices in X_x with state 1 in c and that dominates all vertices in G_x except those in X_x with state 0_0 in c, or infinity if no such set exists. Notice that these 3^{|X_x|} colourings correspond to 3^{|X_x|} partial solutions with different characteristics, and that the table contains a partial solution for each possible characteristic.

We now show how to compute the table A_x for the next node x ∈ T while traversing the nice tree decomposition T in a bottom-up manner. Depending on the type of the node x (see Definition 2.4), we do the following:

Leaf node: Let x be a leaf node in T. The table consists of three entries, one for each possible colouring c ∈ {1, 0_1, 0_0} of the single vertex v in X_x.
A_x({1}) = 1    A_x({0_1}) = ∞    A_x({0_0}) = 0

Here, A_x(c) corresponds to the size of the smallest partial solution satisfying the requirements defined by the colouring c on the single vertex v.

Introduce node: Let x be an introduce node in T with child node y. We assume that when the l-th coordinate of a colouring of X_x represents a vertex u, then the same coordinate of a colouring of X_y also represents u, and that the last coordinate of a colouring of X_x represents the newly introduced vertex v. Now, for any colouring c ∈ {1, 0_1, 0_0}^{|X_y|}:

A_x(c × {0_1}) = A_y(c) if v has a neighbour with state 1 in c, and ∞ otherwise
A_x(c × {0_0}) = A_y(c) if v has no neighbour with state 1 in c, and ∞ otherwise

For colourings with state 1 for the introduced vertex, we say that a colouring c_x of X_x matches a colouring c_y of X_y if:

• For all u ∈ X_y \ N(v): c_x(u) = c_y(u).
• For all u ∈ X_y ∩ N(v): either c_x(u) = c_y(u) = 1, or c_x(u) = 0_1 and c_y(u) ∈ {0_1, 0_0}.

Here, c(u) is the state of the vertex u in the colouring c. We compute A_x(c × {1}) by the following formula:

A_x(c × {1}) = ∞ if c(u) = 0_0 for some u ∈ N(v), and 1 + min{ A_y(c') | c' matches c × {1} } otherwise

It is not hard to see that A_x(c) now corresponds to the size of the partial solution satisfying the requirements imposed on X_x by the colouring c.

Forget node: Let x be a forget node in T with child node y. Again, we assume that when the l-th coordinate of a colouring of X_x represents a vertex u, then the same coordinate of a colouring of X_y also represents u, and that the last coordinate of a colouring of X_y represents the vertex v that we are forgetting.

A_x(c) = min{ A_y(c × {1}), A_y(c × {0_1}) }

Now, A_x(c) corresponds to the size of the smallest partial solution satisfying the requirements imposed on X_x by the colouring c, as we consider only partial solutions that dominate the forgotten vertex.
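In code, the leaf and forget updates above can be sketched as follows; the encoding (colourings as tuples over the strings '1', '0_1', '0_0', a table as a dictionary from colourings to sizes) is our own illustration, not part of the original presentation, and the introduce and join updates would follow the same pattern:

```python
# Sketch of the leaf and forget table updates from Proposition 3.1.
# States: '1' (in D), '0_1' (not in D, dominated), '0_0' (not in D,
# not dominated). The encoding is a hypothetical illustration.

INF = float("inf")

def leaf_table():
    # Single vertex v: {v} has size 1; being dominated without a
    # dominating neighbour is impossible, hence the infinity.
    return {("1",): 1, ("0_1",): INF, ("0_0",): 0}

def forget_table(A_y):
    # Forget the last vertex of the child bag: keep only colourings in
    # which the forgotten vertex is in D ('1') or dominated ('0_1'),
    # taking the minimum over the two, as in
    # A_x(c) = min{ A_y(c x {1}), A_y(c x {0_1}) }.
    A_x = {}
    for c_y, size in A_y.items():
        c, state = c_y[:-1], c_y[-1]
        if state in ("1", "0_1"):
            A_x[c] = min(A_x.get(c, INF), size)
    return A_x
```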
Join node: Let x be a join node in T and let l and r be its child nodes. As X_x = X_l = X_r, we can assume that the same coordinates represent the same vertices in a colouring of each of the three bags.

Let c_x(v) be the state that represents the vertex v in colouring c_x of X_x. We say that three colourings c_x, c_l, and c_r of X_x, X_l, and X_r, respectively, match if for each vertex v ∈ X_x:

• either c_x(v) = c_l(v) = c_r(v) = 1,
• or c_x(v) = c_l(v) = c_r(v) = 0_0,
• or c_x(v) = 0_1 while c_l(v) and c_r(v) are 0_1 or 0_0, but not both 0_0.

Notice that three colourings c_x, c_l, and c_r match if for each vertex v the requirements imposed by the states are correctly combined from the states in the colourings on both child bags c_l and c_r to the states in the colouring of the parent bag c_x. That is, if a vertex is required by c_x to be in the vertex set of a partial solution, then it is also required to be so in c_l and c_r; if a vertex is required to be undominated in c_x, then it is also required to be undominated in c_l and c_r; and, if a vertex is required to be not in the partially constructed dominating set but it is required to be dominated in c_x, then it is required not to be in the vertex sets of the partial solutions in both c_l and c_r, but it must be dominated in at least one of both partial solutions. The new table A_x can be computed by the following formula:

A_x(c_x) = min{ A_l(c_l) + A_r(c_r) − #_1(c_x) | c_x, c_l, c_r match }

Here, #_1(c) stands for the number of 1-states in the colouring c. This number needs to be subtracted from the total size of the partial solution because the corresponding vertices are counted in the entry A_l(c_l) as well as in the entry A_r(c_r). One can easily check that this gives a correct computation of A_x.

After traversing the nice tree decomposition T, we end up in the root node z ∈ T.
As X_z = ∅ and thus G_z = G, we find the size of the minimum dominating set in G in the single entry of A_z. It is not hard to see that the algorithm stores the size of the smallest partial solution of Dominating Set in A_x for each possible characteristic on X_x for every node x ∈ T. Hence, the algorithm is correct.

For the running time, observe that, for a leaf or forget node, O(3^{|X_x|} i_+(log(n))) time is required since we work with log(n)-bit numbers. In an introduce node, we need more time as we need to inspect multiple entries from A_y to compute an entry of A_x. For a vertex u outside N(v), we have three possible combinations of states, and for a vertex u ∈ N(v) we have four possible combinations we need to inspect: the table entries with c_x(u) = c_y(u) = 0_0, colourings with c_x(u) = c_y(u) = 1, and colourings with c_x(u) = 0_1 while c_y(u) = 0_1 or c_y(u) = 0_0. This leads to a total time of O(4^{|X_x|} i_+(log(n))) for an introduce node. In a join node, five combinations of states need to be inspected per vertex, requiring O(5^{|X_x|} i_+(log(n))) time in total. As the largest bag has size at most k + 1 and the tree decomposition T has O(n) nodes, the running time is O(n 5^k i_+(log(n))). □

Many of the details of the algorithm described in the proof of Proposition 3.1 also apply to other algorithms described in this section. We will not repeat these details: for the other algorithms we will only specify how to compute the tables A_x for all four kinds of nodes.

We notice that the above algorithm computes only the size of a minimum dominating set in G, not the dominating set itself. To construct a minimum dominating set D, the tree decomposition T can be traversed in top-down order (reverse order compared to the algorithm of Proposition 3.1).
We start by selecting the single entry in the table of the root node, and then, for each child node y of the current node x, we select an entry in A_y that was used to compute the selected entry of A_x. More specifically, we select the entry that was either copied into the selected entry of A_x, or we select one, or in a join node two, entries that lead to the minimum that was computed for A_x. In this way, we trace back the computation path that computed the size of D. During this process, we construct D by adding each vertex that is not yet in D and that has state 1 in the selected colouring to D. As we only use colourings that lead to a minimum dominating set, this process gives us a minimum dominating set in G.

Before we give a series of new, fast dynamic programming algorithms for a broad range of problems, we need the following definition. We use it to improve the polynomial factors involved in the running times of the algorithms in this section.

Definition 3.2 (de Fluiter property for treewidth)
Given a graph optimisation problem Π, consider a method to represent the different characteristics of partial solutions used in an algorithm that performs dynamic programming on tree decompositions to solve Π. Such a representation of partial solutions has the de Fluiter property for treewidth if the difference between the objective values of any two partial solutions of Π that are associated with a different characteristic and can both still be extended to an optimal solution is at most f(k), for some non-negative function f that depends only on the treewidth k.

This property is named after Babette van Antwerpen-de Fluiter, as this property implicitly plays an important role in her work reported in [16, 29]. Note that although we use the value ∞ in our dynamic programming tables, we do not consider such entries since they can never be extended to an optimal solution. Hence, these entries do not influence the de Fluiter property. Furthermore, we say that a problem has the linear de Fluiter property for treewidth if f is a linear function in k.

Consider the representation used in Proposition 3.1 for the Dominating Set problem. This representation has the de Fluiter property for treewidth with f(k) = k + 1 because any table entry that is more than k + 1 larger than the smallest value stored in the table cannot lead to an optimal solution. This holds because any partial solution D of Dominating Set that is more than k + 1 larger than the smallest value stored in the table cannot be part of a minimum dominating set. Namely, we can obtain a partial solution that is smaller than D and that dominates the same vertices or more by taking the partial solution corresponding to the smallest value stored in the table and adding all vertices in X_x to it.

For a discussion of the de Fluiter properties and their relation to the related property finite integer index, see Section 6.

3.2 Minimum Dominating Set

Alber et al.
showed that one can improve the straightforward result of Proposition 3.1 by choosing a different set of states to represent characteristics of partial solutions [1, 2]: they obtained an O*(4^k)-time algorithm using the set of states {1, 0_1, 0_?} (see Table 3). We obtain an O*(3^k)-time algorithm by using yet another set of states, namely {1, 0_0, 0_?}.

Note that 0_? represents a vertex v that is not in the vertex set D of a partial solution of Dominating Set, while we do not specify whether v is dominated; i.e., given D, vertices with state 0_1 and with state 0_0 could also have state 0_?. In particular, there is no longer a unique colouring of X_x with states for a specific partial solution: a partial solution can correspond to several such colourings. Below, we discuss in detail how we can handle this situation and how it can lead to faster algorithms.

Since the state 0_0 represents an undominated vertex and the state 0_? represents a vertex that may or may not be dominated, one may think that it is impossible to guarantee that a vertex is dominated using these states. We circumvent this problem by not just computing the size of a minimum dominating set, but by computing the number of dominating sets of each fixed size κ with 0 ≤ κ ≤ n. This approach does not store (the size of) a solution per characteristic of the partial solutions, but counts the number of partial solutions of each possible size per characteristic. We note that the algorithm of Proposition 3.1 can straightforwardly be modified to also count the number of (minimum) dominating sets.

For our next algorithm, we use dynamic programming tables in which an entry A_x(c, κ) represents the number of partial solutions of Dominating Set on G_x of size exactly κ that satisfy the requirements defined by the states in the colouring c.
That is, the table entries give the number of partial solutions in G_x of size κ that contain all vertices in X_x with state 1, that dominate all vertices in V_x \ X_x, and that do not dominate the vertices in X_x with state 0_0. This approach leads to the following result.

Theorem 3.3
There is an algorithm that, given a tree decomposition of a graph G of width k, computes the number of dominating sets in G of each size κ, 0 ≤ κ ≤ n, in O(n^3 3^k i_×(n)) time.

Proof:
We will show how to compute the table A_x for each type of node x in a nice tree decomposition T. Recall that an entry A_x(c, κ) counts the number of partial solutions of Dominating Set of size exactly κ in G_x satisfying the requirements defined by the states in the colouring c.

Leaf node: Let x be a leaf node in T with X_x = {v}. We compute A_x in the following way:

A_x({1}, κ) = 1 if κ = 1, and 0 otherwise
A_x({0_0}, κ) = 1 if κ = 0, and 0 otherwise
A_x({0_?}, κ) = 1 if κ = 0, and 0 otherwise

Notice that this is correct since there is exactly one partial solution of size one that contains v, namely {v}, and exactly one partial solution of size zero that does not contain v, namely ∅.

Introduce node: Let x be an introduce node in T with child node y that introduces the vertex v, and let c ∈ {1, 0_0, 0_?}^{|X_y|}. We compute A_x in the following way:

A_x(c × {1}, κ) = 0 if v has a neighbour with state 0_0 in c or if κ = 0, and A_y(c, κ − 1) otherwise
A_x(c × {0_0}, κ) = 0 if v has a neighbour with state 1 in c, and A_y(c, κ) otherwise
A_x(c × {0_?}, κ) = A_y(c, κ)

As the state 0_? is indifferent about domination, we can copy the appropriate value from A_y. With the other two states, we have to set A_x(c, κ) to zero if a vertex with state 0_0 would be dominated by a vertex with state 1. Moreover, we have to update the size of the set if v gets state 1.

Forget node: Let x be a forget node in T with child node y that forgets the vertex v. We compute A_x in the following way:

A_x(c, κ) = A_y(c × {1}, κ) + A_y(c × {0_?}, κ) − A_y(c × {0_0}, κ)

The number of partial solutions of size κ in G_x satisfying the requirements defined by c equals the number of partial solutions of size κ that contain v plus the number of partial solutions of size κ that do not contain v but where v is dominated. This last number can be computed by subtracting the number of such solutions in which v is not dominated (state 0_0) from the total number of partial solutions in which v may be dominated or not (state 0_?). This shows the correctness of the above formula.

The computation in the forget node is a simple illustration of the principle of inclusion/exclusion and the related Möbius transform; see for example [7].

Join node: Let x be a join node in T and let l and r be its child nodes. Recall that X_x = X_l = X_r. If we are using the set of states {1, 0_0, 0_?}, then we do not have to consider colourings with matching states in order to compute the join.
Namely, we can compute A_x using the following formula:

A_x(c, κ) = Σ_{κ_l + κ_r = κ + #_1(c)} A_l(c, κ_l) · A_r(c, κ_r)

The fact that this formula does not need to consider multiple matching colourings per colouring c (see Proposition 3.1) is the main reason why the algorithm of this theorem is faster than previous results.

To see that the formula is correct, recall that any partial solution of Dominating Set on G_x counted in the table A_x can be constructed by combining partial solutions on G_l and G_r that are counted in A_l and A_r, respectively. Because an entry in A_x where a vertex v has state 1 in a colouring of X_x counts partial solutions with v in the vertex set of the partial solution, this entry must count combinations of partial solutions in A_l and A_r where this vertex is also in the vertex set of these partial solutions and thus also has state 1. Similarly, if a vertex v has state 0_0, we count partial solutions in which v is undominated; hence v must be undominated in both partial solutions we combine and also have state 0_0. And, if a vertex v has state 0_?, we count partial solutions in which v is not in the vertex set of the partial solution and we are indifferent about domination; hence, we can get all combinations of partial solutions from G_l and G_r if we also are indifferent about domination in A_l and A_r, which is represented by the state 0_?. All in all, if we fix the sizes of the solutions from G_l and G_r that we use, then we only need to multiply the number of solutions from A_r and A_l of this size which have the same colouring on X_x. The formula is correct as it combines all possible combinations by summing over all possible sizes of solutions on G_l and G_r that lead to a solution on G_x of size κ.
Notice that the term #_1(c) under the summation sign corrects the double counting of the vertices with state 1 in c.

After the execution of this algorithm, the number of dominating sets of G of size κ can be found in the table entry A_z(∅, κ), where z is the root of T.

For the running time, we observe that in a leaf, introduce, or forget node x, the time required to compute A_x is linear in the size of the table A_x. The computations involve n-bit numbers because there can be up to 2^n dominating sets in G. Since c ∈ {1, 0_0, 0_?}^{|X_x|} and 0 ≤ κ ≤ n, we can compute each table A_x in O(n 3^k i_+(n)) time. In a join node x, we have to perform O(n) multiplications to compute an entry of A_x. This gives a total of O(n^2 3^k i_×(n)) time per join node. As the nice tree decomposition has O(n) nodes, the total running time is O(n^3 3^k i_×(n)). □

The algorithm of Theorem 3.3 is exponentially faster in the treewidth k compared to the previous fastest algorithm of Alber et al. [1, 2]. Also, no exponentially faster algorithm exists unless the Strong Exponential-Time Hypothesis fails [55]. The exponential speed-up comes from the fact that we use a different set of states to represent the characteristics of partial solutions: a set of states that allows us to perform the computations in a join node much faster. We note that although the algorithm of Theorem 3.3 uses underlying ideas of the covering product of [6], no transformations associated with such an algorithm are used directly.

Figure 1: Join tables for the Dominating Set problem. From left to right, they correspond to Proposition 3.1, the algorithm from [1, 2], and Theorem 3.3.
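The join of Theorem 3.3 can be sketched in code as follows. The data layout (a table maps a colouring, i.e., a tuple over '1', '0_0', '0_?', to a list of counts indexed by the solution size κ) is our own illustration; the point to observe is that each colouring is combined only with the identical colouring in both child tables, so only 3^k colourings are touched:

```python
# Sketch of the join node computation of Theorem 3.3 with states
# {1, 0_0, 0_?}. A table maps each colouring c (tuple of states) to a
# list counts with counts[kappa] = A(c, kappa). Encoding is hypothetical.

def join_tables(A_l, A_r, n):
    A_x = {}
    for c in A_l:  # the same colouring is used in the parent and both children
        ones = sum(1 for s in c if s == "1")  # #_1(c): these vertices are counted twice
        counts = [0] * (n + 1)
        for k_l, a_l in enumerate(A_l[c]):
            for k_r, a_r in enumerate(A_r[c]):
                k = k_l + k_r - ones
                if a_l and a_r and 0 <= k <= n:
                    counts[k] += a_l * a_r
        A_x[c] = counts
    return A_x
```

Per colouring, the inner double loop is exactly the size-indexed product Σ_{κ_l + κ_r = κ + #_1(c)} A_l(c, κ_l) · A_r(c, κ_r); restricting the κ-ranges as in Corollary 3.4 below leaves only O(k) terms per entry.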
To represent the characteristics of the partial solutions of the Dominating Set problem, we can use any of the following three sets of states: {1, 0_1, 0_0}, {1, 0_1, 0_?}, {1, 0_0, 0_?}. Depending on which set we choose, the number of combinations that we need to inspect in a join node differs. We give an overview of this in Figure 1: each table represents a join using a different set of states, and each state in an entry of such a table represents a combination of the states in the left and right child nodes that needs to be inspected to create this new state. The number of non-empty entries now shows how many combinations have to be considered per vertex in a bag of a join node. Therefore, one can easily see that a table in a join node can be computed in O*(5^k), O*(4^k), and O*(3^k) time, respectively, depending on the set of states used. These tables correspond to the algorithm of Proposition 3.1, the algorithm of Alber et al. [1, 2], and the algorithm of Theorem 3.3, respectively.

The way in which we obtain the third table in Figure 1 from the first one reminds us of Strassen's algorithm for matrix multiplication [65]: the speed-up in this algorithm comes from the fact that one multiplication can be omitted by using a series of extra additions and subtractions. Here, we do something similar by adding up all entries with a 0_1-state or 0_0-state together in the 0_?-state and computing the whole block of four combinations at once. We then reconstruct the values we need by subtracting the combinations with two 0_0-states.

The exponential speed-up obtained by the algorithm of Theorem 3.3 comes at the cost of extra polynomial factors in the running time; among others, a factor i_×(n) due to the fact that we work with n-bit numbers. Since we compute the number of dominating sets of each size κ, 0 ≤ κ ≤ n, instead of computing a minimum dominating set, some extra polynomial factors in n seem unavoidable.
However, the ideas of Theorem 3.3 can also be used to count only minimum dominating sets. Using that Dominating Set has the de Fluiter property for treewidth, this leads to the following result, in which the factor n^2 is replaced by the much smaller factor k^2.

Corollary 3.4
There is an algorithm that, given a tree decomposition of a graph G of width k, computes the number of minimum dominating sets in G in O(n k^2 3^k i_×(n)) time.

Proof:
We notice that the representation of the different characteristics of partial solutions used in Theorem 3.3 has the linear de Fluiter property when used to count the number of minimum dominating sets. More explicitly, when counting the number of minimum dominating sets, we need to store only the number of partial solutions of each different characteristic that are at most k + 1 larger in size than the smallest partial solution with a non-zero entry. This holds, as larger partial solutions can never lead to a minimum dominating set since taking any set corresponding to this smallest non-zero entry and adding all vertices in X_x leads to a smaller partial solution that dominates at least the same vertices.

In this way, we can modify the algorithm of Theorem 3.3 such that, in each node x ∈ T, we store a number ξ_x representing the size of the smallest partial solution and a table A_x with the number of partial solutions A_x(c, κ) with ξ_x ≤ κ ≤ ξ_x + k + 1.

In a leaf node x, we simply set ξ_x = 0. In an introduce or forget node x with child node y, we first compute the entries A_x(c, κ) for ξ_y ≤ κ ≤ ξ_y + k + 1 and then set ξ_x to the value of κ corresponding to the smallest non-zero entry of A_x. While computing A_x, the algorithm uses A_y(c, κ) = 0 for any entry A_y(c, κ) that falls outside the given range of κ. Finally, in a join node x with child nodes l and r, we do the same as in Theorem 3.3, but we compute only the entries with κ in the range ξ_l + ξ_r − (k + 1) ≤ κ ≤ ξ_l + ξ_r + (k + 1). Furthermore, as all terms of the sum with κ_l or κ_r outside the range of A_l and A_r evaluate to zero, we now have to evaluate only O(k) terms of the sum. It is not hard to see that all relevant combinations of partial solutions from the two child nodes l and r fall in this range of κ.

The modified algorithm computes O(n) tables of size O(k 3^k), and the computation of each entry requires at most O(k) multiplications of n-bit numbers.
Therefore, the running time is O(nk^2 3^k i_×(n)). □

A disadvantage of the direct use of the algorithm of Corollary 3.4 compared to Proposition 3.1 is that we cannot reconstruct a minimum dominating set in G by directly tracing back the computation path that gave us the size of a minimum dominating set. However, as we show below, we can transform the tables computed by Theorem 3.3 and Corollary 3.4 that use the states {1, 0_0, 0_?} in O*(3^k) time into tables using any of the other sets of states. These transformations have two applications. First of all, they allow us to easily construct a minimum dominating set in G from the computation of Corollary 3.4 by transforming the computed tables into different tables as used in Proposition 3.1 and thereafter traversing the tree in a top-down fashion as we have discussed earlier. Secondly, they can be used to switch from using n-bit numbers to O(k)-bit numbers, further improving the polynomial factors of the running time if we are interested only in solving the Dominating Set problem.
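As a concrete illustration of such state transformations (formalised in the next lemma), here is a small sketch; the table layout and function names are our own, not the paper's implementation. It converts a table over the states {1, 0_0, 0_1} into one over {1, 0_0, 0_?}, one coordinate at a time, and back, using exactly one addition or subtraction per entry per coordinate.

```python
from itertools import product
import random

def to_optional(table, width):
    # {1, 00, 01} -> {1, 00, 0?}: per coordinate, 0? = 00 + 01; keep 1 and 00.
    for i in range(width):
        new = {}
        for c, v in table.items():
            if c[i] != '01':                       # states 1 and 00 are kept
                new[c] = new.get(c, 0) + v
            if c[i] != '1':                        # both 00 and 01 contribute to 0?
                q = c[:i] + ('0?',) + c[i + 1:]
                new[q] = new.get(q, 0) + v
        table = new
    return table

def from_optional(table, width):
    # {1, 00, 0?} -> {1, 00, 01}: per coordinate, 01 = 0? - 00; keep 1 and 00.
    for i in range(width):
        new = {}
        for c, v in table.items():
            if c[i] != '0?':                       # states 1 and 00 are kept
                new[c] = new.get(c, 0) + v
            if c[i] != '1':
                q = c[:i] + ('01',) + c[i + 1:]
                delta = v if c[i] == '0?' else -v  # 0? adds, 00 subtracts
                new[q] = new.get(q, 0) + delta
        table = new
    return table

random.seed(42)
width = 3
original = {c: random.randint(0, 5) for c in product(['1', '00', '01'], repeat=width)}
round_trip = from_optional(to_optional(dict(original), width), width)
nonzero = lambda t: {c: v for c, v in t.items() if v != 0}
assert nonzero(round_trip) == nonzero(original)
```

The round trip demonstrates that no information is lost by the change of representation, which is the content of the lemma below.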
Lemma 3.5
Let x be a node of a tree decomposition T and let A_x be a table with entries A_x(c, κ) representing the number of partial solutions of Dominating Set of G_x of each size κ, for some range of κ, corresponding to each colouring c of the bag X_x with states from one of the following sets:

{1, 0_0, 0_1}    {1, 0_0, 0_?}    {1, 0_1, 0_?}    (see Table 3)

The information represented in the table A_x does not depend on the choice of the set of states from the options given above. Moreover, there exist transformations between tables using representations with different sets of states using O(|X_x| |A_x|) arithmetic operations. Proof:
We will transform A_x such that it represents the same information using a different set of states. The transformation will be given for fixed κ and can be repeated for each κ in the given range.

The transformations work in |X_x| steps. In step i, we assume that the first i − 1 coordinates of the colourings c in our table A_x use the initial set of states, and the last |X_x| − i coordinates use the set of states to which we want to transform. Using this as an invariant, we change the set of states used for the i-th coordinate at step i.

Transforming from {1, 0_0, 0_1} to {1, 0_0, 0_?} can be done using the following formula, in which A_x(c, κ) represents our table for colouring c, c_1 is a subcolouring of size i − 1 using states {1, 0_0, 0_1}, and c_2 is a subcolouring of size |X_x| − i using states {1, 0_0, 0_?}:

A_x(c_1 × {0_?} × c_2, κ) = A_x(c_1 × {0_0} × c_2, κ) + A_x(c_1 × {0_1} × c_2, κ)

We keep entries with states 1 and 0_0 on the i-th vertex the same, and we remove entries with state 0_1 on the i-th vertex after computing the new value. In words, the above formula counts the number of partial solutions that do not contain the i-th vertex v in their vertex sets by adding the number of partial solutions that do not contain v in their vertex sets and dominate it to the number of partial solutions that do not contain v in their vertex sets and do not dominate it. This completes the description of the transformation.

To see that the new table contains the same information, we can apply the reverse transformation from the set of states {1, 0_0, 0_?} to the set {1, 0_0, 0_1} by using the same transformation with a different formula to introduce the new state:

A_x(c_1 × {0_1} × c_2, κ) = A_x(c_1 × {0_?} × c_2, κ) − A_x(c_1 × {0_0} × c_2, κ)

A similar argument applies here: the number of partial solutions that dominate but do not contain the i-th vertex v in their vertex sets equals the total number of partial solutions that do not contain v in their vertex sets minus the number of partial solutions in which v is undominated.

The other four transformations work similarly. Each transformation keeps the entries of one of the three states 0_0, 0_1, and 0_? intact, computes the entries for the new state by a coordinate-wise addition or subtraction of the other two states, and removes the entries using the third state from the table. To compute an entry with the new state, either one of the above two formulas can be used if the new state is 0_1 or 0_?, or the following formula can be used if the new state is 0_0:

A_x(c_1 × {0_0} × c_2, κ) = A_x(c_1 × {0_?} × c_2, κ) − A_x(c_1 × {0_1} × c_2, κ)

For the above transformations, we need |X_x| additions or subtractions for each of the |A_x| table entries. Hence, a transformation requires O(|X_x| |A_x|) arithmetic operations. □

We are now ready to give our final improvement for Dominating Set.

Corollary 3.6
There is an algorithm that, given a tree decomposition of a graph G of width k, computes the size of a minimum dominating set in G in O(nk^2 3^k) time.

We could give a slightly shorter proof than the one given below. This proof would directly combine the algorithm of Proposition 3.1 with the ideas of Theorem 3.3 using the transformations from Lemma 3.5. However, combining our ideas with the computations in the introduce and forget nodes in the algorithm of Alber et al. [1, 2] gives a more elegant solution, which we prefer to present.
Proof:
On leaf, introduce, and forget nodes, our algorithm is exactly the same as the algorithm of Alber et al. [1, 2], while on a join node it is similar to Corollary 3.4. We give the full algorithm for completeness.

For each node x ∈ T, we compute a table A_x with entries A_x(c) containing the size of a smallest partial solution of Dominating Set that satisfies the requirements defined by the colouring c using the set of states {1, 0_1, 0_?}.

Leaf node: Let x be a leaf node in T. We compute A_x in the following way:

A_x({1}) = 1    A_x({0_1}) = ∞    A_x({0_?}) = 0

Introduce node: Let x be an introduce node in T with child node y introducing the vertex v. We compute A_x in the following way:

A_x(c × {0_1}) = A_y(c) if v has a neighbour with state 1 in c, and ∞ otherwise
A_x(c × {0_?}) = A_y(c)
A_x(c × {1}) = 1 + A_y(φ_{N(v): 0_1 → 0_?}(c))

Here, φ_{N(v): 0_1 → 0_?}(c) is the colouring c with every occurrence of the state 0_1 on a vertex in N(v) replaced by the state 0_?.

Forget node: Let x be a forget node in T with child node y forgetting the vertex v. We compute A_x in the following way:

A_x(c) = min{ A_y(c × {1}), A_y(c × {0_1}) }

Correctness of the operations on a leaf, introduce, and forget node are easy to verify and follow from [1, 2].
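The leaf, introduce, and forget operations above can be sketched as runnable code (our own minimal encoding, not the paper's implementation: bags are ordered lists, colourings are tuples over the states '1', '01', '0?'). Running them along a nice path decomposition of the path a-b-c, which needs no join nodes, yields the minimum dominating set size 1:

```python
INF = float('inf')

def leaf(v):
    # single vertex: in the set (1), required dominated (01, impossible), or free (0?)
    return [v], {('1',): 1, ('01',): INF, ('0?',): 0}

def introduce(bag, A, v, adj):
    nb = {i for i, u in enumerate(bag) if u in adj[v]}
    B = {}
    for c, val in A.items():
        has_one = any(c[i] == '1' for i in nb)
        B[c + ('01',)] = val if has_one else INF   # v must already be dominated
        B[c + ('0?',)] = val                       # no requirement on v
        # v joins the set: bag neighbours that needed domination are now dominated
        phi = tuple('0?' if i in nb and s == '01' else s for i, s in enumerate(c))
        B[c + ('1',)] = 1 + A[phi]
    return bag + [v], B

def forget(bag, A, v):
    i = bag.index(v)
    B = {}
    for c, val in A.items():
        if c[i] in ('1', '01'):                    # v must end up dominated
            key = c[:i] + c[i + 1:]
            B[key] = min(B.get(key, INF), val)
    return bag[:i] + bag[i + 1:], B

# path a - b - c; the centre vertex b dominates everything
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
bag, A = leaf('a')
bag, A = introduce(bag, A, 'b', adj)
bag, A = forget(bag, A, 'a')
bag, A = introduce(bag, A, 'c', adj)
bag, A = forget(bag, A, 'b')
bag, A = forget(bag, A, 'c')
assert A[()] == 1
```

The toy example stores one minimum size per colouring; the join node below is where the more elaborate size-indexed tables come in.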
Join node: Let x be a join node in T and let l and r be its child nodes. We first create two tables A′_l and A′_r. For y ∈ {l, r}, we let

ξ_y = min{ A_y(c′) | c′ ∈ {1, 0_1, 0_?}^{|X_y|} }

and let A′_y have entries A′_y(c, κ) for all c ∈ {1, 0_1, 0_?}^{|X_y|} and κ with ξ_y ≤ κ ≤ ξ_y + k + 1:

A′_y(c, κ) = 1 if A_y(c) = κ, and 0 otherwise

After creating A′_l and A′_r, we use Lemma 3.5 to transform the tables A′_l and A′_r such that they use colourings c with states from the set {1, 0_0, 0_?}. The initial tables A′_y do not contain the actual number of partial solutions; they contain a 1-entry if a corresponding partial solution exists. In this case, the tables obtained after the transformation count the number of 1-entries in the tables before the transformation. In the table A′_x computed for the join node x, we now count the number of combinations of these 1-entries. This suffices since any smallest partial solution in G_x that is obtained by joining partial solutions from both child nodes must consist of minimum solutions in G_l and G_r.

We can compute A′_x by evaluating the formula for the join node in Theorem 3.3 for all κ with ξ_l + ξ_r − (k + 1) ≤ κ ≤ ξ_l + ξ_r + (k + 1) using the tables A′_l and A′_r. If we do this in the same way as in Corollary 3.4, then we consider only the O(k) terms of the formula where κ_l and κ_r fall in the specified ranges for A′_l and A′_r, respectively, as other terms evaluate to zero. In this way, we obtain the table A′_x in which entries are marked by colourings with states from the set {1, 0_0, 0_?}. Finally, we use Lemma 3.5 to transform the table A′_x such that it again uses colourings with states from the set {1, 0_1, 0_?}. This final table gives the number of combinations of 1-entries in A′_l and A′_r that lead to partial solutions of each size that satisfy the associated colourings.
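Ignoring the size index κ for readability, the heart of this join can be sketched as follows (an illustrative reconstruction, not the paper's code): transform both child tables from {1, 0_0, 0_1} to {1, 0_0, 0_?}, multiply them entry-wise, and transform back. The result agrees with the brute-force join in which a vertex gets state 0_1 exactly when it is dominated in at least one child:

```python
from itertools import product
import random

STATES = ['1', '00', '01']

def to_optional(table, width):
    # {1, 00, 01} -> {1, 00, 0?}: per coordinate, 0? = 00 + 01
    for i in range(width):
        new = {}
        for c, v in table.items():
            if c[i] != '01':
                new[c] = new.get(c, 0) + v
            if c[i] != '1':
                q = c[:i] + ('0?',) + c[i + 1:]
                new[q] = new.get(q, 0) + v
        table = new
    return table

def from_optional(table, width):
    # inverse: per coordinate, 01 = 0? - 00
    for i in range(width):
        new = {}
        for c, v in table.items():
            if c[i] != '0?':
                new[c] = new.get(c, 0) + v
            if c[i] != '1':
                q = c[:i] + ('01',) + c[i + 1:]
                new[q] = new.get(q, 0) + (v if c[i] == '0?' else -v)
        table = new
    return table

def fast_join(Al, Ar, width):
    Bl, Br = to_optional(dict(Al), width), to_optional(dict(Ar), width)
    prod_tbl = {c: Bl.get(c, 0) * Br.get(c, 0)
                for c in product(['1', '00', '0?'], repeat=width)}
    return from_optional(prod_tbl, width)

def naive_join(Al, Ar):
    # combine child colourings that agree on 1-states; 01 if dominated somewhere
    out = {}
    for cl, vl in Al.items():
        for cr, vr in Ar.items():
            if all((a == '1') == (b == '1') for a, b in zip(cl, cr)):
                cx = tuple('1' if a == '1' else ('01' if '01' in (a, b) else '00')
                           for a, b in zip(cl, cr))
                out[cx] = out.get(cx, 0) + vl * vr
    return out

random.seed(7)
w = 2
Al = {c: random.randint(0, 3) for c in product(STATES, repeat=w)}
Ar = {c: random.randint(0, 3) for c in product(STATES, repeat=w)}
nz = lambda t: {c: v for c, v in t.items() if v != 0}
assert nz(fast_join(Al, Ar, w)) == nz(naive_join(Al, Ar))
```

The naive join touches 3^k · 3^k colouring pairs; the transformed join touches 3^k entries, which is where the O*(3^k) bound comes from.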
Since we are interested only in the size of the smallest partial solution of Dominating Set of each characteristic, we can extract these values in the following way:

A_x(c) = min{ κ | A′_x(c, κ) ≥ 1, ξ_l + ξ_r − (k + 1) ≤ κ ≤ ξ_l + ξ_r + (k + 1) }

For the running time, we first consider the computations in a join node. Here, each state transformation requires O(k^2 3^k) operations by Lemma 3.5 since the tables have size O(k 3^k). These operations involve O(k)-bit numbers since the number of 1-entries in A′_l and A′_r is at most 3^{k+1}. Evaluating the formula that computes A′_x from the tables A′_l and A′_r costs O(k^2 3^k) multiplications. If we do not store a log(n)-bit number for each entry in the tables A_x in any of the four kinds of nodes of T, but store only the smallest entry using a log(n)-bit number and let A_x contain the difference to this smallest entry, then all entries in any of the A_x can also be represented using O(k)-bit numbers. Since there are O(n) nodes in T, this gives a running time of O(nk^2 3^k). Note that the time required to multiply the O(k)-bit numbers disappears in the computational model with O(k)-bit word size that we use. □

Corollary 3.6 gives the currently fastest algorithm for
Dominating Set on graphs given with a tree decomposition of width k. Essentially, what the algorithm does is fix the 1-states and apply the covering product of Björklund et al. [6] to the 0_0-states and 0_?-states, where the 0_0-states need to be covered by the same states from both child nodes. We chose to present our algorithm in a way that does not use the covering product directly, because reasoning with states allows us to generalise our results in Section 3.4.

We conclude by stating that we can directly obtain similar results for similar problems using exactly the same techniques:

Proposition 3.7
For each of the following problems, there is an algorithm that solves them, given a tree decomposition of a graph G of width k, using the following running times:

• Independent Dominating Set in O(n^3 3^k) time,
• Total Dominating Set in O(nk^2 4^k) time,
• Red-Blue Dominating Set in O(nk^2 2^k) time,
• Partition Into Two Total Dominating Sets in O(n 6^k) time.

Proof: (Sketch)
Use the same techniques as in the rest of this subsection. We emphasise only the following details. With
Independent Dominating Set, the factor n^2 comes from the fact that this (minimisation) problem does not have the de Fluiter property for treewidth. However, we can still use O(k)-bit numbers. This holds because, even though the expanded tables A′_l and A′_r have size at most n 3^k, they still contain the value one only once for each of the 3^k characteristics before applying the state changes. Therefore, the total sum of the values in the table, and thus also the maximum value of an entry in these tables after the state transformations, is 3^k; these can be represented by O(k)-bit numbers.

With Total Dominating Set, the running time is linear in n while the extra polynomial factor is k^2. This is because this problem does have the linear de Fluiter property for treewidth.

With Red-Blue Dominating Set, an exponential factor of 2^k suffices as we can use two states for the red vertices (in the red-blue dominating set or not) and two different states for the blue vertices (dominated or not).

With Partition Into Two Total Dominating Sets, we note that we can restrict ourselves to using six states when we modify the tree decomposition such that every vertex always has at least one neighbour and hence is always dominated by at least one of the two partitions. Furthermore, the polynomial factors are smaller because this is not an optimisation problem and we do not care about the sizes of both partitions. □

Figure 2: Join tables for counting the number of perfect matchings. We used the symbol ?* in the last table because the direct combination of two ?-states can lead to matching a vertex twice.

The next problem we consider is the problem of computing the number of perfect matchings in a graph. We give an O*(2^k)-time algorithm for this problem. This requires a slightly more complicated approach than the approach of the previous section.
The main difference is that here every vertex needs to be matched exactly once, while previously we needed to dominate every vertex at least once. After introducing state transformations similar to Lemma 3.5, we will introduce some extra counting techniques to overcome this problem.

The obvious tree-decomposition-based dynamic programming algorithm uses the set of states {0, 1}, where 1 means that this vertex is matched and 0 means that it is not. It then computes, for every node x ∈ T, a table A_x with entries A_x(c) containing the number of matchings in G_x with the property that the only vertices that are not matched are exactly the vertices in the current bag X_x with state 0 in c. This algorithm will run in O*(3^k) time; this running time can be derived from the join table in Figure 2. Similar to Lemma 3.5 in the previous section, we will prove that the table A_x contains exactly the same information independent of whether we use the set of states {0, 1} or {0, ?}, where ? represents a vertex for which we do not specify whether it is matched or not. I.e., for a colouring c, we count the number of matchings in G_x, where all vertices in V_x \ X_x and all vertices in X_x with state 1 in c are matched, all vertices in X_x with state 0 in c are unmatched, and all vertices in X_x with state ? can either be matched or not.

Lemma 3.8
Let x be a node of a tree decomposition T and let A_x be a table with entries A_x(c) representing the number of matchings in G_x matching all vertices in V_x \ X_x and corresponding to each colouring c of the bag X_x with states from one of the following sets:

{0, 1}    {0, ?}    {1, ?}

The information represented in the table A_x does not depend on the choice of the set of states from the options given above. Moreover, there exist transformations between tables using representations with different sets of states using O(|X_x| |A_x|) arithmetic operations.

If one defines a vertex with state 1 or ? to be in a set S, and a vertex with state 0 not to be in S, then the state changes essentially are Möbius transforms and inversions, see [6]. The transformations in the proof below essentially are the fast evaluation algorithms from [6]. Proof:
The transformations work almost identically to those in the proof of Lemma 3.5. In step i, 1 ≤ i ≤ |X_x|, we assume that the first i − 1 coordinates of the colourings c in our table use one set of states, and the last |X_x| − i coordinates use the other set of states. Using this as an invariant, we change the set of states used for the i-th coordinate at step i.

Transforming from {0, 1} to {0, ?} or {1, ?} can be done using the following formula. In this formula, A_x(c) represents our table for colouring c, c_1 is a subcolouring of size i − 1 using states {0, 1}, and c_2 is a subcolouring of size |X_x| − i using states {0, ?} or {1, ?}:

A_x(c_1 × {?} × c_2) = A_x(c_1 × {0} × c_2) + A_x(c_1 × {1} × c_2)

In words, the number of matchings that may contain some vertex v equals the sum of the number of matchings that do and the number of matchings that do not contain v.

The following two similar formulas can be used for the other four transformations:

A_x(c_1 × {0} × c_2) = A_x(c_1 × {?} × c_2) − A_x(c_1 × {1} × c_2)
A_x(c_1 × {1} × c_2) = A_x(c_1 × {?} × c_2) − A_x(c_1 × {0} × c_2)

In these transformations, we need |X_x| additions or subtractions for each of the |A_x| table entries. Hence, a transformation requires O(|X_x| |A_x|) arithmetic operations. □

Although we can transform our dynamic programming tables such that they use different sets of states, this does not directly help us in obtaining a faster algorithm for counting the number of perfect matchings. Namely, if we would combine two partial solutions in which a vertex v has the ?-state in a join node, then it is possible that v is matched twice in the combined solution: once in each child node. This would lead to incorrect answers, and this is why we put a ?* instead of a ? in the join table in Figure 2. We overcome this problem by using some additional counting tricks that can be found in the proof below.

Theorem 3.9
There is an algorithm that, given a tree decomposition of a graph G of width k, computes the number of perfect matchings in G in O(nk^2 2^k i_×(n log(k))) time. Proof:
For each node x ∈ T, we compute a table A_x with entries A_x(c) containing the number of matchings that match all vertices in V_x \ X_x and that satisfy the requirements defined by the colouring c using states {0, 1}. We use the extra invariant that vertices with state 1 are matched only with vertices outside the bag, i.e., vertices that have already been forgotten by the algorithm. This prevents vertices being matched within the bag and greatly simplifies the presentation of the algorithm.

Leaf node: Let x be a leaf node in T. We compute A_x in the following way:

A_x({1}) = 0    A_x({0}) = 1

The only matching in the single-vertex graph is the empty matching.

Introduce node: Let x be an introduce node in T with child node y introducing the vertex v. The invariant on vertices with state 1 makes the introduce operation trivial:

A_x(c × {1}) = 0    A_x(c × {0}) = A_y(c)

Forget node: Let x be a forget node in T with child node y forgetting the vertex v. If the vertex v is not matched already, then it must be matched to an available neighbour at this point:

A_x(c) = A_y(c × {1}) + Σ_{u ∈ N(v), c(u)=1} A_y(φ_{u: 1 → 0}(c) × {0})

Here, c(u) is the state of u in c and φ_{u: 1 → 0}(c) is the colouring c where the state of u is changed from 1 to 0. This formula computes the number of matchings corresponding to c by adding the number of matchings in which v is matched already to the number of matchings of all possible ways of matching v to one of its neighbours. We note that, because of our extra invariant, we have to consider only neighbours in the current bag X_x. Namely, if we would match v to an already forgotten vertex u, then we could have matched v to u in the node where u was forgotten.

Join node: Let x be a join node in T and let l and r be its child nodes. The join is the most interesting operation. As discussed before, we cannot simply change the set of states to {0, ?} and perform the join similar to Dominating Set as suggested by Table 2.
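Before turning to the join, the leaf, introduce, and forget recurrences above can be sketched as follows (a join-free toy under our own encoding, not the paper's code): counting the perfect matchings of a path, processed along a nice path decomposition.

```python
from itertools import product

def leaf(v):
    # state 1 = already matched (to a forgotten vertex), 0 = still unmatched
    return [v], {('1',): 0, ('0',): 1}

def introduce(bag, A, v):
    B = {}
    for c, val in A.items():
        B[c + ('1',)] = 0          # v cannot yet be matched to a forgotten vertex
        B[c + ('0',)] = val
    return bag + [v], B

def forget(bag, A, v, adj):
    i = bag.index(v)
    rest = bag[:i] + bag[i + 1:]
    B = {}
    for c in product('10', repeat=len(rest)):
        # v was matched earlier ...
        total = A[c[:i] + ('1',) + c[i:]]
        # ... or v is matched now to a bag neighbour u, which becomes matched
        for j, u in enumerate(rest):
            if u in adj[v] and c[j] == '1':
                cy = c[:j] + ('0',) + c[j + 1:]
                total += A[cy[:i] + ('0',) + cy[i:]]
        B[c] = total
    return rest, B

def count_perfect_matchings_path(vertices):
    adj = {v: set() for v in vertices}
    for a, b in zip(vertices, vertices[1:]):
        adj[a].add(b); adj[b].add(a)
    bag, A = leaf(vertices[0])
    for v in vertices[1:]:
        bag, A = introduce(bag, A, v)
        bag, A = forget(bag, A, bag[0], adj)   # forget the older vertex
    bag, A = forget(bag, A, bag[0], adj)
    return A[()]

assert count_perfect_matchings_path(['a', 'b']) == 1
assert count_perfect_matchings_path(['a', 'b', 'c']) == 0
assert count_perfect_matchings_path(['a', 'b', 'c', 'd']) == 1
```

A path with an even number of vertices has exactly one perfect matching and an odd path has none, which the sketch reproduces.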
We use the following method: we expand the tables and index them by the number of matched vertices in X_l or X_r, i.e., the number of vertices with state 1. Let y ∈ {l, r}; then we compute tables A′_l and A′_r as follows:

A′_y(c, i) = A_y(c) if #_1(c) = i, and 0 otherwise

Here, #_1(c) denotes the number of 1-states in c. Next, we change the state representation in the tables A′_y to {0, ?} using Lemma 3.8. These tables do not use state 1, but are still indexed by the number of 1-states used in the previous representation. Then, we join the tables by combining all possibilities that arise from each total number of 1-states in the two child tables (stored in the index i) using the following formula:

A′_x(c, i) = Σ_{i_l + i_r = i} A′_l(c, i_l) · A′_r(c, i_r)

As a result, the entries A′_x(c, i) give us the total number of ways to combine partial solutions from G_l and G_r such that the vertices with state 0 in c are unmatched, the vertices with state ? in c can be matched in zero, one, or both partial solutions used, and the total number of times the vertices with state ? are matched is i.

Next, we change the states in the table A′_x back to {0, 1} using Lemma 3.8. It is important to note that the 1-state can now represent a vertex that is matched twice because the ?-state used before this second transformation represented vertices that could be matched twice as well. However, we can find those entries in which no vertex is matched twice by applying the following observation: the total number of 1-states in c should equal the sum of those in its child tables, and this sum is stored in the index i. Therefore, we can extract the number of perfect matchings for each colouring c using the following formula:

A_x(c) = A′_x(c, #_1(c))

In this way, the algorithm correctly computes the tables A_x for a join node x ∈ T. This completes the description of the algorithm.

The computations in the join nodes again dominate the running time.
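In bitmask form, the join just described is precisely fast subset convolution [6]. A compact illustrative implementation (our own encoding, not the paper's code: a table is a list indexed by the bitmask of 1-states over the bag): expand by the number of 1-states, zeta-transform to the {0, ?} representation, multiply while convolving the index, Möbius-transform back, and keep the entries whose 1-count matches the index.

```python
import random

def zeta(f, w):
    # state change {0,1} -> {0,?}: f[T] becomes a sum over all S subset of T
    f = f[:]
    for i in range(w):
        for m in range(1 << w):
            if m >> i & 1:
                f[m] += f[m ^ (1 << i)]
    return f

def moebius(f, w):
    # inverse state change {0,?} -> {0,1}
    f = f[:]
    for i in range(w):
        for m in range(1 << w):
            if m >> i & 1:
                f[m] -= f[m ^ (1 << i)]
    return f

def join(Al, Ar, w):
    n = 1 << w
    Bl = [[Al[m] if bin(m).count('1') == i else 0 for m in range(n)] for i in range(w + 1)]
    Br = [[Ar[m] if bin(m).count('1') == i else 0 for m in range(n)] for i in range(w + 1)]
    Bl = [zeta(f, w) for f in Bl]
    Br = [zeta(f, w) for f in Br]
    C = [[0] * n for _ in range(w + 1)]
    for i in range(w + 1):                 # convolve on the number of 1-states
        for il in range(i + 1):
            for m in range(n):
                C[i][m] += Bl[il][m] * Br[i - il][m]
    C = [moebius(f, w) for f in C]
    return [C[bin(m).count('1')][m] for m in range(n)]

# sanity check against the naive join: children must match disjoint vertex sets
random.seed(0)
w = 3
Al = [random.randint(0, 4) for _ in range(1 << w)]
Ar = [random.randint(0, 4) for _ in range(1 << w)]
naive = [0] * (1 << w)
for ml in range(1 << w):
    for mr in range(1 << w):
        if ml & mr == 0:
            naive[ml | mr] += Al[ml] * Ar[mr]
assert join(Al, Ar, w) == naive
```

The index-matching step at the end is what rules out combinations in which a vertex is matched in both children, exactly as in the proof above.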
In a join node, the transformations of the states in the tables cost O(k^2 2^k) arithmetic operations each, and the computation of A′_x from A′_l and A′_r also costs O(k^2 2^k) arithmetic operations. We will now show that these arithmetic operations can be implemented using O(n log(k))-bit numbers.

For every vertex, we can say that the vertex is matched to another vertex at the time when it is forgotten in T, or when its matching neighbour is forgotten. When it is matched at the time that it is forgotten, then it is matched to one of its at most k + 1 neighbours. This leads to at most k + 2 choices per vertex. As a result, there are at most O(k^n) perfect matchings in G, and the described operations can be implemented using O(n log(k))-bit numbers. Because a nice tree decomposition has O(n) nodes, the running time of the algorithm is O(nk^2 2^k i_×(n log(k))). □

The above theorem gives the currently fastest algorithm for counting the number of perfect matchings in graphs given with a tree decomposition of width k. The algorithm uses ideas from the fast subset convolution algorithm of Björklund et al. [6] to perform the computations in the join node.

[ρ, σ]-Domination Problems

We have shown how to solve two elementary problems in O*(s^k) time on graphs of treewidth k, where s is the number of states per vertex used in representations of partial solutions. In this section, we generalise our result for Dominating Set to the [ρ, σ]-domination problems. We show that we can solve all [ρ, σ]-domination problems with finite or cofinite ρ and σ in O*(s^k) time. This includes the existence (decision), minimisation, maximisation, and counting variants of these problems.

For the [ρ, σ]-domination problems, one can also use colourings with states to represent the different characteristics of partial solutions. Let D be the vertex set of a partial solution of a [ρ, σ]-domination problem.
One set of states that we use involves the states ρ_j and σ_j, where ρ_j and σ_j represent vertices not in D, or in D, that have j neighbours in D, respectively. For finite ρ, σ, we let p = max{ρ} and q = max{σ}. In this case, we have the following set of states: {ρ_0, ρ_1, ..., ρ_p, σ_0, σ_1, ..., σ_q}. If ρ or σ is cofinite, we let p = 1 + max{N \ ρ} and q = 1 + max{N \ σ}. In this case, we replace the last state in the given set by ρ_{≥p} or σ_{≥q}, respectively. This state represents a vertex not in the vertex set D of the partial solution of the [ρ, σ]-domination problem that has at least p neighbours in D, or a vertex in D with at least q neighbours in D, respectively. Let s = p + q + 2 be the number of states involved.

Dynamic programming tables for the [ρ, σ]-domination problems can also be represented using different sets of states that contain the same information. In this section, we will use three different sets of states. These sets are defined as follows.

Definition 3.10 Let State Set I, II, and III be the following sets of states:

• State Set I: {ρ_0, ρ_1, ρ_2, ..., ρ_{p−1}, ρ_p/ρ_{≥p}, σ_0, σ_1, σ_2, ..., σ_{q−1}, σ_q/σ_{≥q}}.
• State Set II: {ρ_0, ρ_{≤1}, ρ_{≤2}, ..., ρ_{≤p−1}, ρ_{≤p}/ρ_N, σ_0, σ_{≤1}, σ_{≤2}, ..., σ_{≤q−1}, σ_{≤q}/σ_N}.
• State Set III: {ρ_0, ρ_1, ρ_2, ..., ρ_{p−1}, ρ_p/ρ_{≥p−1}, σ_0, σ_1, σ_2, ..., σ_{q−1}, σ_q/σ_{≥q−1}}.

The meaning of all the states is self-explanatory: ρ_condition and σ_condition consider the number of partial solutions of the [ρ, σ]-domination problem that do not contain (ρ-state) or do contain (σ-state) this vertex with a number of neighbours in the corresponding vertex sets satisfying the condition. The subscript N stands for no condition at all, i.e., ρ_N = ρ_{≥0}: all possible numbers of neighbours in N. We note that the notation ρ_p/ρ_{≥p} in Definition 3.10 is used to indicate that this set uses the state ρ_p if ρ is finite and ρ_{≥p} if ρ is cofinite.

Lemma 3.11
Let x be a node of a tree decomposition T and let A_x be a table with entries A_x(c, κ) representing the number of partial solutions of size κ to the [ρ, σ]-domination problem in G_x corresponding to each colouring c of the bag X_x with states from any of the three sets from Definition 3.10. The information represented in the table A_x does not depend on the choice of the set of states from the options given in Definition 3.10. Moreover, there exist transformations between tables using representations with different sets of states using O(s |X_x| |A_x|) arithmetic operations. Proof:
We apply transformations that work in |X_x| steps and are similar to those in the proofs of Lemmas 3.5 and 3.8. In the i-th step, we replace the states at the i-th coordinate of c. We use the following formulas to create entries with a new state.

We will give only the formulas for the ρ-states. The formulas for the σ-states are identical, but with ρ replaced by σ and p replaced by q. We note that we slightly abuse notation below since we use that ρ_{≤0} = ρ_0.

To obtain states from State Set I not present in State Set II or III, we can use:

A_x(c_1 × {ρ_j} × c_2, κ) = A_x(c_1 × {ρ_{≤j}} × c_2, κ) − A_x(c_1 × {ρ_{≤j−1}} × c_2, κ)
A_x(c_1 × {ρ_{≥p}} × c_2, κ) = A_x(c_1 × {ρ_N} × c_2, κ) − A_x(c_1 × {ρ_{≤p−1}} × c_2, κ)
A_x(c_1 × {ρ_{≥p}} × c_2, κ) = A_x(c_1 × {ρ_{≥p−1}} × c_2, κ) − A_x(c_1 × {ρ_{p−1}} × c_2, κ)

To obtain states from State Set II not present in State Set I or III, we can use:

A_x(c_1 × {ρ_{≤j}} × c_2, κ) = Σ_{l=0}^{j} A_x(c_1 × {ρ_l} × c_2, κ)
A_x(c_1 × {ρ_N} × c_2, κ) = A_x(c_1 × {ρ_{≥p}} × c_2, κ) + Σ_{l=0}^{p−1} A_x(c_1 × {ρ_l} × c_2, κ)
A_x(c_1 × {ρ_N} × c_2, κ) = A_x(c_1 × {ρ_{≥p−1}} × c_2, κ) + Σ_{l=0}^{p−2} A_x(c_1 × {ρ_l} × c_2, κ)

To obtain states from State Set III not present in State Set I or II, we can use the same formulas used to obtain states from State Set I in combination with the following formulas:

A_x(c_1 × {ρ_{≥p−1}} × c_2, κ) = A_x(c_1 × {ρ_{≥p}} × c_2, κ) + A_x(c_1 × {ρ_{p−1}} × c_2, κ)
A_x(c_1 × {ρ_{≥p−1}} × c_2, κ) = A_x(c_1 × {ρ_N} × c_2, κ) − A_x(c_1 × {ρ_{≤p−2}} × c_2, κ)

As the transformations use |X_x| steps in which each entry is computed by evaluating a sum of less than s terms, the transformations require O(s |X_x| |A_x|) arithmetic operations. □

We note that similar transformations can also be used to transform a table into a new table that uses different sets of states on different vertices in a bag X_x.
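For finite ρ, the transformation between State Sets I and II is just a prefix sum (and inverse difference) along each coordinate. A small sketch under our own encoding (ρ-states only, a state represented by the integer j of ρ_j or ρ_{≤j}; p = 2 and a bag of width 3 are arbitrary choices):

```python
from itertools import product
import random

p, width = 2, 3
COLOURINGS = list(product(range(p + 1), repeat=width))

def to_set_II(A):
    # replace rho_j by rho_{<=j}: prefix sums along each coordinate
    for i in range(width):
        B = {}
        for c in COLOURINGS:
            B[c] = sum(A[c[:i] + (l,) + c[i + 1:]] for l in range(c[i] + 1))
        A = B
    return A

def to_set_I(A):
    # inverse: rho_j = rho_{<=j} - rho_{<=j-1}
    for i in range(width):
        B = {}
        for c in COLOURINGS:
            B[c] = A[c]
            if c[i] > 0:
                B[c] -= A[c[:i] + (c[i] - 1,) + c[i + 1:]]
        A = B
    return A

random.seed(3)
table = {c: random.randint(0, 9) for c in COLOURINGS}
assert to_set_I(to_set_II(dict(table))) == table
```

Since the per-coordinate operations commute, the coordinate-wise differences invert the coordinate-wise prefix sums, mirroring the information-preservation claim of the lemma.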
For example, we can use State Set I on the first two vertices (assuming some ordering) and State Set III on the other |X_x| − 2 vertices.

For our algorithms for the [ρ, σ]-domination problems, we will also need more involved state transformations than those given above. We need to generalise the ideas of the proof of Theorem 3.9. In this proof, we expanded the tables A_l and A_r of the two child nodes l and r such that they contain entries A_l(c, i) and A_r(c, i), where i was an index indicating the number of 1-states used to create the ?-states in c. We will generalise this to the states used for the [ρ, σ]-domination problems.

Below, we often say that a colouring c of a bag X_x using State Set I from Definition 3.10 is counted in a colouring c′ of X_x using State Set II. We let this be the case when all partial solutions counted in the entry with colouring c in a table using State Set I are also counted in the entry with colouring c′ in the same table when transformed such that it uses State Set II. I.e., when, for each vertex v ∈ X_x, c(v) and c′(v) are both σ-states or both ρ-states, and if c(v) = ρ_i or c(v) = σ_i, then c′(v) = ρ_{≤j} or c′(v) = σ_{≤j} for some j ≥ i.

Consider the case where ρ and σ are finite. We introduce an index vector

~i = (i_{ρ1}, i_{ρ2}, ..., i_{ρp}, i_{σ1}, i_{σ2}, ..., i_{σq})

that is used in combination with states from State Set II from Definition 3.10. In this index vector, i_{ρj} and i_{σj} represent the sum over all vertices with state ρ_{≤j} and σ_{≤j}, respectively, of the number of neighbours of the vertex in D. We say that a solution corresponding to a colouring c using State Set I from Definition 3.10 satisfies a combination of a colouring c′ using State Set II and an index vector ~i if: c is counted in c′, and for each i_{ρj} or i_{σj}, the sum over all vertices with state ρ_{≤j} and σ_{≤j} in c′ of the number of neighbours of the vertex in D equals i_{ρj} or i_{σj}, respectively.

We clarify this with an example.
Suppose that we have a bag of size three and a dynamic programming table indexed by colourings using the set of states {ρ_0, ρ_1, ρ_2, σ_0} (State Set I) that we want to transform to one using the set of states {ρ_0, ρ_{≤1}, ρ_{≤2}, σ_0} (State Set II); thus ~i = (i_{ρ1}, i_{ρ2}). Notice that a partial solution corresponding to the colouring c = (ρ_1, ρ_1, ρ_1) will be counted in both c′_1 = (ρ_{≤1}, ρ_{≤2}, ρ_{≤2}) and c′_2 = (ρ_{≤2}, ρ_{≤2}, ρ_{≤2}). In this case, c satisfies the combination (c′_2, ~i = (0, 3)) since the sum of the numbers of neighbours in D of the vertices with state ρ_{≤1} in c′_2 equals zero and this sum for the vertices with state ρ_{≤2} in c′_2 equals three. Also, c satisfies no combination of c′_2 with another index vector. Similarly, c satisfies the combination (c′_1, ~i = (1, 2)) and no other combination involving c′_1.

In the case where ρ or σ is cofinite, the index vectors are one entry shorter: we do not count the sum of the number of neighbours in D of the vertices with state ρ_N or σ_N.

What we will need is a table containing, for each possible combination of a colouring using State Set II with an index vector, the number of partial solutions that satisfy these. We can construct such a table using the following lemma.

Lemma 3.12
Let x be a node of a tree decomposition T of width k. There exists an algorithm that, given a table A_x with entries A_x(c, κ) containing the number of partial solutions of size κ to the [ρ, σ]-domination problem corresponding to the colouring c on the bag X_x using State Set I from Definition 3.10, computes in O(n (sk)^{s−1} s^{k+1} i_+(n)) time a table A′_x with entries A′_x(c, κ, ~i) containing the number of partial solutions of size κ to the [ρ, σ]-domination problem satisfying the combination of a colouring using State Set II and the index vector ~i. Proof:
We start with the following table A′_x using State Set I:

A′_x(c, κ, ~i) = A_x(c, κ) if ~i is the all-0 vector, and 0 otherwise

Since there are no colourings with states ρ_{≤j} and σ_{≤j} yet, the sum of the number of neighbours in the vertex set D of the partial solutions of vertices with these states is zero.

Next, we change the states of the j-th coordinate at step j similar to Lemma 3.11, but now we also update the index vector ~i:

A′_x(c_1 × {ρ_{≤j}} × c_2, κ, ~i) = Σ_{l=0}^{j} A′_x(c_1 × {ρ_l} × c_2, κ, ~i_{i_{ρj} → (i_{ρj} − l)})
A′_x(c_1 × {σ_{≤j}} × c_2, κ, ~i) = Σ_{l=0}^{j} A′_x(c_1 × {σ_l} × c_2, κ, ~i_{i_{σj} → (i_{σj} − l)})

Here, ~i_{i_{ρj} → (i_{ρj} − l)} denotes the index vector ~i with the value of i_{ρj} set to i_{ρj} − l.

If ρ or σ is cofinite, we simply use the formula in Lemma 3.11 for every fixed index vector ~i for the ρ_N-states and σ_N-states. We do so because we do not need to keep track of any index vectors for these states.

For the running time, note that each index i_{ρj}, i_{σj} can have only values between zero and sk because there can be at most k + 1 vertices in X_x that each have at most s neighbours in D counted when considered for a state of the form ρ_{≤j} or σ_{≤j}, as j < p or j < q, respectively. The new table has O(n (sk)^{s−2} s^{k+1}) entries since we have s^{k+1} colourings, n + 1 sizes κ, and (sk)^{s−2} possible index vectors. Since the algorithm uses at most k + 1 steps in which it computes a sum with less than s terms for each entry using n-bit numbers, this gives a running time of O(n (sk)^{s−1} s^{k+1} i_+(n)). □

We are now ready to prove our main result of this section.
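A sketch of this expansion for the finite case (our own encoding, not the paper's code: a state is a pair ('eq', j) for ρ_j or ('le', j) for ρ_{≤j}; p = 2, a bag of three vertices, ρ-states only, and the size index κ omitted). Starting from a point mass on the colouring (ρ_1, ρ_1, ρ_1), the expansion produces, among others, the combinations with index vectors (1, 2) and (0, 3) matching the example above.

```python
from itertools import product

p, width = 2, 3

def expand(A):
    # A: {colouring over ('eq', j): count}; result keyed by
    # (State Set II colouring, index vector (i_rho1, ..., i_rhop))
    B = {(c, (0,) * p): v for c, v in A.items()}
    for i in range(width):                 # change the states of coordinate i
        C = {}
        for (c, iv), v in B.items():
            _, l = c[i]
            targets = []
            if l == 0:
                targets.append((('eq', 0), iv))       # rho_0 stays rho_0
            for j in range(max(l, 1), p + 1):         # rho_l feeds every rho_{<=j}
                niv = list(iv)
                niv[j - 1] += l                       # record l counted neighbours
                targets.append((('le', j), tuple(niv)))
            for st, niv in targets:
                key = (c[:i] + (st,) + c[i + 1:], niv)
                C[key] = C.get(key, 0) + v
        B = C
    return B

start = {(('eq', 1),) * width: 1}          # the colouring (rho_1, rho_1, rho_1)
table = expand(start)
assert table[((('le', 1), ('le', 2), ('le', 2)), (1, 2))] == 1
assert table[((('le', 2),) * width, (0, 3))] == 1
```

Each original entry fans out into one entry per compatible State Set II colouring, with the index vector recording how the neighbour counts were distributed over the ρ_{≤j}-states.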
Theorem 3.13
Let ρ, σ ⊆ N be finite or cofinite, and let p, q and s be the values associated with the corresponding [ρ, σ]-domination problem. There is an algorithm that, given a tree decomposition of a graph G of width k, computes the number of [ρ, σ]-dominating sets in G of each size κ, 0 ≤ κ ≤ n, in O(n^3 (sk)^{2(s−2)} s^{k+1} i_×(n)) time.

Notice that, for any given [ρ, σ]-domination problem, s is a fixed constant. Hence, Theorem 3.13 gives us O*(s^k)-time algorithms for these problems.

Proof:
Before we give the computations involved for each type of node in a nice tree decomposition T, we slightly change the meaning of the subscripts of the states ρ_j and σ_j. In our algorithm, we let the subscripts of these states count only the number of neighbours in the vertex set D of the partial solution of the [ρ, σ]-domination problem that have already been forgotten by the algorithm. This prevents us from having to keep track of any adjacencies within a bag during a join operation. We will update these subscripts in the forget nodes. This modification is similar to the approach for counting perfect matchings in the proof of Theorem 3.9, where we matched vertices in a forget node to make sure that we did not have to deal with vertices that are matched within a bag when computing the table for a join node.

We will now give the computations for each type of node in a nice tree decomposition T. For each node x ∈ T, we will compute a table A_x(c, κ) containing the number of partial solutions of size κ in G_x corresponding to the colouring c on X_x, for all colourings c using State Set I from Definition 3.10 and all 0 ≤ κ ≤ n. During this computation, we will transform to different sets of states using Lemmas 3.11 and 3.12 when necessary.

Leaf node: Let x be a leaf node in T. Because the subscripts of the states count only neighbours in the vertex set of the partial solutions that have already been forgotten, we use only the states ρ_0 and σ_0 on a leaf. Furthermore, the number of σ_0-states must equal κ. As a result, we can compute A_x in the following way:

    A_x(c, κ) = 1 if c = {ρ_0} and κ = 0
                1 if c = {σ_0} and κ = 1
                0 otherwise

Introduce node: Let x be an introduce node in T with child node y introducing the vertex v. Again, the entries where v has a state ρ_j or σ_j, for j ≥ 1, will be zero due to the definition of the (subscripts of the) states. Also, we must again keep track of the size κ. Let ς be the state of the introduced vertex. We compute A_x in the following way:

    A_x(c × {ς}, κ) = A_y(c, κ)      if ς = ρ_0
                      A_y(c, κ − 1)  if ς = σ_0 and κ ≥ 1
                      0              otherwise
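The leaf and introduce computations can be sketched directly; this is an illustration with a hypothetical table layout (states "r0" for ρ_0 and "s0" for σ_0), not the paper's implementation:

```python
def leaf_table(n):
    """Table of a leaf node: one vertex, states rho_0 ("r0") / sigma_0 ("s0");
    a sigma_0-vertex belongs to D, so it forces kappa = 1."""
    A = {}
    for kappa in range(n + 1):
        A[(("r0",), kappa)] = 1 if kappa == 0 else 0
        A[(("s0",), kappa)] = 1 if kappa == 1 else 0
    return A

def introduce(A_y, n):
    """Introduce a vertex as a new last coordinate: state rho_0 leaves the
    size unchanged, state sigma_0 adds the vertex to D (kappa grows by one);
    entries for states rho_j, sigma_j with j >= 1 stay zero and are omitted."""
    A_x = {}
    for (c, kappa), v in A_y.items():
        A_x[(c + ("r0",), kappa)] = v
        if kappa + 1 <= n:
            A_x[(c + ("s0",), kappa + 1)] = v
    return A_x

# A leaf followed by one introduce: a two-vertex bag with all four colourings.
B = introduce(leaf_table(2), 2)
```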
Forget node: Let x be a forget node in T with child node y forgetting the vertex v. The operations performed in the forget node are quite complicated. Here, we must update the states such that they are correct after forgetting the vertex v, and we must select those solutions that satisfy the constraints imposed on v by the specific [ρ, σ]-domination problem. We will do this in three steps: we compute intermediate tables A_1 and A_2 in the first two steps, and finally A_x in step three. Let c(N(v)) be the subcolouring of c restricted to the vertices in N(v).

Step 1: We update the states used on the vertex v. We do so to include the neighbours in D that the vertex v has inside the bag X_x in the states used to represent the different characteristics. Notice that after including these neighbours, the subscripts of the states on v represent the total number of neighbours that v has in D. The result will be the table A_1, which we compute using the following formulas, where #σ(c) stands for the number of σ-states in the colouring c:

    A_1(c × {ρ_j}, κ) = A_y(c × {ρ_{j − #σ(c(N(v)))}}, κ) if j ≥ #σ(c(N(v))), and 0 otherwise
    A_1(c × {σ_j}, κ) = A_y(c × {σ_{j − #σ(c(N(v)))}}, κ) if j ≥ #σ(c(N(v))), and 0 otherwise

If ρ or σ are cofinite, we also need the following formulas:

    A_1(c × {ρ_{≥p}}, κ) = A_y(c × {ρ_{≥p}}, κ) + Σ_{i = p − #σ(c(N(v)))}^{p − 1} A_y(c × {ρ_i}, κ)
    A_1(c × {σ_{≥q}}, κ) = A_y(c × {σ_{≥q}}, κ) + Σ_{i = q − #σ(c(N(v)))}^{q − 1} A_y(c × {σ_i}, κ)

Correctness of these formulas is easy to verify.

Step 2: We update the states representing the neighbours of v such that they are according to their definitions after forgetting v.
All the required information to do this can again be read from the colouring c. We apply Lemma 3.11 and change the state representation for the vertices in N(v) to State Set III (Definition 3.10), obtaining the table A′_1(c, κ); we do not change the representation of the other vertices in the bag. That is, if ρ or σ are cofinite, we replace the last state ρ_{≥p} or σ_{≥q} by ρ_{≥p−1} or σ_{≥q−1}, respectively, on vertices in X_y ∩ N(v). We can do so as discussed below the proof of Lemma 3.11.

This state change allows us to extract the required values for the table A_2, as we will show next. We introduce the function φ that sends a colouring using State Set I to a colouring that uses State Set I on the vertices in X_y \ N(v) and State Set III on the vertices in X_y ∩ N(v). This function updates the states used on N(v) assuming that we put v in the vertex set D of the partial solution. We define φ in the following way: it maps a colouring c to a new colouring with the same states on the vertices in X_y \ N(v), while it applies the following replacement rules to the states on the vertices in X_y ∩ N(v): ρ_1 → ρ_0, ρ_2 → ρ_1, ..., ρ_p → ρ_{p−1}, ρ_{≥p} → ρ_{≥p−1}, and σ_1 → σ_0, σ_2 → σ_1, ..., σ_q → σ_{q−1}, σ_{≥q} → σ_{≥q−1}. Thus, φ lowers the counters in the conditions that index the states by one for states representing vertices in N(v). We note that φ(c) is defined only if neither ρ_0 nor σ_0 occurs in c(N(v)).

Using this function, we can easily update our states as required:

    A_2(c × {σ_j}, κ) = A′_1(φ(c) × {σ_j}, κ) if ρ_0, σ_0 ∉ c(N(v)), and 0 otherwise
    A_2(c × {ρ_j}, κ) = A′_1(c × {ρ_j}, κ)

In words, for partial solutions on which the vertex v that we will forget has a σ-state, we update the states for the vertices in X_y ∩ N(v) such that the vertex v is counted in the subscript of the states.
Entries in A_2 are set to 0 if the states count no neighbours in D while v has a σ-state in c, and thus a neighbour in D, in this partial solution. Notice that after updating the states using the above formula, the colourings c in A_2 again use State Set I from Definition 3.10.

Step 3: We select the solutions that satisfy the constraints of the specific [ρ, σ]-domination problem on v, and forget v:

    A_x(c, κ) = ( Σ_{i ∈ ρ} A_2(c × {ρ_i}, κ) ) + ( Σ_{i ∈ σ} A_2(c × {σ_i}, κ) )

We slightly abuse our notation here when ρ or σ are cofinite. Following the discussion of the construction of the table A_x, we conclude that this correctly computes the required values.

Join node: Let x be a join node in T and let l and r be its child nodes. Computing the table A_x for the join node x is the most interesting operation.

First, we transform the tables A_l and A_r of the child nodes such that they use State Set II (Definition 3.10) and are indexed by index vectors using Lemma 3.12. As a result, we obtain tables A′_l and A′_r with entries A′_l(c, κ, ~g) and A′_r(c, κ, ~h). These entries count the number of partial solutions of size κ corresponding to the colouring c such that the sum of the number of neighbours in D of the set of vertices with each state equals the value that the index vectors ~g and ~h indicate. Here, D is again the vertex set of the partial solution involved. See the example above the statement of Lemma 3.12.

Then, we compute the table A′_x(c, κ, ~i) by combining identical states from A′_l and A′_r using the formula below. In this formula, we sum over all ways of obtaining a partial solution of size κ by combining the sizes in the tables of the child nodes, and over all ways of obtaining the index vector ~i as ~i = ~g + ~h.
    A′_x(c, κ, ~i) = Σ_{κ_l + κ_r = κ + #σ(c)} Σ_{i_{ρ_1} = g_{ρ_1} + h_{ρ_1}} ··· Σ_{i_{σ_q} = g_{σ_q} + h_{σ_q}} A′_l(c, κ_l, ~g) · A′_r(c, κ_r, ~h)

We observe the following: a partial solution D in A′_x that is a combination of partial solutions from A′_l and A′_r is counted in an entry A′_x(c, κ, ~i) if and only if it satisfies the following three conditions:

1. The sum over all vertices with state ρ_{≤j} and σ_{≤j} of the number of neighbours of the vertex in D of this combined partial solution equals i_{ρ_j} or i_{σ_j}, respectively.
2. The number of neighbours in D of each vertex with state ρ_{≤j} or σ_{≤j} of both partial solutions used to create this combined solution is at most j.
3. The total number of vertices in D in this joined solution is κ.

Let Σ^l_ρ(c) and Σ^l_σ(c) be the weighted sums of the numbers of ρ_j-states and σ_j-states with 0 ≤ j ≤ l in c, respectively, defined by:

    Σ^l_ρ(c) = Σ_{j=1}^{l} j · #ρ_j(c)        Σ^l_σ(c) = Σ_{j=1}^{l} j · #σ_j(c)

We note that Σ^1_ρ(c) = #ρ_1(c) and Σ^1_σ(c) = #σ_1(c).

Now, using Lemma 3.11, we change the states used in the table A′_x back to State Set I. If ρ and σ are finite, we extract the values computed for the final table A_x in the following way:

    A_x(c, κ) = A′_x( c, κ, (Σ^1_ρ(c), Σ^2_ρ(c), ..., Σ^p_ρ(c), Σ^1_σ(c), Σ^2_σ(c), ..., Σ^q_σ(c)) )

If ρ or σ are cofinite, we use the same formula but omit the component Σ^p_ρ(c) or Σ^q_σ(c) from the index vector of the extracted entries, respectively.

Below, we will prove that the entries in A_x are exactly the values that we want to compute. We first give some intuition. In essence, the proof is a generalisation of how we performed the join operation for counting the number of perfect matchings in the proof of Theorem 3.9. State Set II has the role of the ?-states in the proof of Theorem 3.9. These states are used to count possible combinations of partial solutions from A_l and A_r.
These combinations include incorrect combinations, in the sense that a vertex can have more neighbours in D than it should have; this is analogous to counting the number of perfect matchings, where combinations were incorrect if a vertex was matched twice. The values Σ^l_ρ(c) and Σ^l_σ(c) represent the total number of neighbours in D of the vertices with a ρ_j-state or σ_j-state with 0 ≤ j ≤ l in c, respectively. The above formula uses these Σ^l_ρ(c) and Σ^l_σ(c) to extract exactly those values from the table A′_x that correspond to correct combinations, that is, in this case, correct combinations for which the number of neighbours of a vertex in D is also correctly represented by the new states.

We will now prove that the computation of the entries in A_x gives the correct values. An entry A_x(c, κ) with c ∈ {ρ_0, σ_0}^k is correct: these states are unaffected by the state changes, and the index vector is not used. The values of these entries follow from combinations of partial solutions from both child nodes corresponding to the same states on the vertices.

Now consider an entry A_x(c, κ) with c ∈ {ρ_0, ρ_1, σ_0}^k. Each ρ_1-state comes from a ρ_{≤1}-state in A′_x(c, κ, ~i) and is a combination of partial solutions from A_l and A_r with the following combinations of states on this vertex: (ρ_0, ρ_0), (ρ_0, ρ_1), (ρ_1, ρ_0), (ρ_1, ρ_1). Because we have changed states back to State Set I, each (ρ_0, ρ_0) combination is counted in the ρ_0-state on this vertex, and thus subtracted from the combinations used to form state ρ_1: the other three combinations remain counted in the ρ_1-state. Since we consider only those solutions with index vector component i_{ρ_1} = Σ^1_ρ(c), the total number of ρ_1-states used to form this joined solution equals Σ^1_ρ(c) = #ρ_1(c).
Therefore, no (ρ_1, ρ_1) combination could have been used, and each partial solution counted in A_x(c, κ) has exactly one neighbour in D on each of the vertices with a ρ_1-state, as required.

We can now inductively repeat this argument for the other states. For c ∈ {ρ_0, ρ_1, ρ_2, σ_0}^k, we know that the entries with only ρ_0-states and ρ_1-states are correct. Thus, when a ρ_2-state is formed from a ρ_{≤2}-state during the state transformation of Lemma 3.11, all nine possibilities of getting the state ρ_{≤2} from the states ρ_0, ρ_1, and ρ_2 in the child bags are counted, and from this number all three combinations that should lead to a ρ_0-state or a ρ_1-state in the join are subtracted. What remain are the combinations (ρ_0, ρ_2), (ρ_2, ρ_0), (ρ_1, ρ_1), (ρ_1, ρ_2), (ρ_2, ρ_1), and (ρ_2, ρ_2). Because of the index vector of the specific entry we extracted from A′_x, the total sum of the number of neighbours in D of these vertices equals Σ^2_ρ(c), and hence only the combinations (ρ_0, ρ_2), (ρ_2, ρ_0), and (ρ_1, ρ_1) could have been used. Any other combination would raise the component i_{ρ_2} of ~i to a number larger than Σ^2_ρ(c).

If we repeat this argument for all states involved, we conclude that the above computation correctly computes A_x if ρ and σ are finite. If ρ or σ are cofinite, then the argument can also be used, with one small difference. Namely, the index vectors are one component shorter and keep no index for the states ρ_{≥p} and σ_{≥q}. That is, at the point in the algorithm where we introduce these index vectors and transform to State Set II using Lemma 3.12, we have no index corresponding to the sum of the number of neighbours in the vertex set D of the partial solution of the vertices with states ρ_{≥p} and σ_{≥q}. However, we do not need to select entries corresponding to having p or q neighbours in D for the states ρ_{≥p} and σ_{≥q}, since these correspond to all possibilities of getting at least p or q neighbours in D.
When we transform the states back to State Set I just before extracting the values for A_x from A′_x, entries that have the state ρ_{≥p} or σ_{≥q} after the transformation count all possible combinations of partial solutions except those counted in any of the other states. This is exactly what we need, since all combinations with fewer than p (or q) neighbours are present in the other states.

After traversing the whole decomposition tree T, one can find the number of [ρ, σ]-dominating sets of size κ in the table computed for the root node z of T in A_z(∅, κ).

We conclude with an analysis of the running time. The most time-consuming computations are again those involved in computing the table A_x for a join node x. Here, we need O(n (sk)^{s−2} s^{k+1} i_+(n)) time for the transformations of Lemma 3.12 that introduce the index vectors, since max{|X_x| : x ∈ T} = k + 1. However, this is still dominated by the time required to compute the table A′_x: this table contains at most s^{k+1} n (sk)^{s−2} entries A′_x(c, κ, ~i), each of which is computed by an n (sk)^{s−2}-term sum. This gives a total time of O(n^2 (sk)^{2(s−2)} s^{k+1} i_×(n)) per join node, since we use n-bit numbers. Because the nice tree decomposition has O(n) nodes, we conclude that the algorithm runs in O(n^3 (sk)^{2(s−2)} s^{k+1} i_×(n)) time in total. □

This proof generalises ideas from the fast subset convolution algorithm [6]. While convolutions use ranked Möbius transforms [6], we use transformations with multiple states and multiple ranks in our index vectors.

The polynomial factors in the proof of Theorem 3.13 can be improved in several ways. Some improvements we give are for [ρ, σ]-domination problems in general, and others apply only to specific problems.
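The cancellation mechanism behind this join can be exercised in miniature with the two-state analogue from Theorem 3.9 (a vertex must be "used" in exactly one child): zeta-transform both child tables to {0, ?}-states with an index counting 1-states, multiply pointwise, Möbius-transform back, and keep only entries whose index matches the colouring. The code below is an illustrative sketch (deliberately naive transforms, hypothetical names), checked against an explicit enumeration of the valid pairs:

```python
from itertools import product

def zeta(A, m):
    """B[(cp, i)] sums A[c] over colourings c in {0,1}^m that fit the
    pattern cp in {0,"?"}^m (a 1 is allowed only under a "?"), where i
    is the number of 1-states merged away.  Naive on purpose."""
    B = {}
    for cp in product((0, "?"), repeat=m):
        for c, v in A.items():
            if all(cp[j] == "?" or c[j] == 0 for j in range(m)):
                key = (cp, sum(c))
                B[key] = B.get(key, 0) + v
    return B

def join_fast(Al, Ar, m):
    """Join in which every vertex with final state 1 must be used in
    exactly one child: transform, multiply pointwise (indices add up),
    transform back, and keep only rank-consistent entries."""
    Bl, Br = zeta(Al, m), zeta(Ar, m)
    C = {}
    for (cp, il), vl in Bl.items():
        for (cq, ir), vr in Br.items():
            if cp == cq:
                C[(cp, il + ir)] = C.get((cp, il + ir), 0) + vl * vr
    res = {}
    for c in product((0, 1), repeat=m):
        i, total = sum(c), 0
        ones = [j for j in range(m) if c[j] == 1]
        # Moebius inversion per 1-coordinate; fixing the index at sum(c)
        # cancels the incorrect (1,1) combinations, as argued above.
        for choice in product((0, "?"), repeat=len(ones)):
            cp = list(c)
            for pos, j in enumerate(ones):
                cp[j] = choice[pos]
            total += (-1) ** choice.count(0) * C.get((tuple(cp), i), 0)
        res[c] = total
    return res

def join_naive(Al, Ar, m):
    """Reference join by explicit enumeration of the valid pairs."""
    res = {}
    for cl, vl in Al.items():
        for cr, vr in Ar.items():
            if all(a + b <= 1 for a, b in zip(cl, cr)):
                c = tuple(a + b for a, b in zip(cl, cr))
                res[c] = res.get(c, 0) + vl * vr
    return res

Al = {(0, 0): 1, (0, 1): 2, (1, 0): 3, (1, 1): 4}
Ar = {(0, 0): 5, (0, 1): 6, (1, 0): 7, (1, 1): 8}
```

In the full algorithm the pointwise product is a sum over sizes and index vectors, but the cancellation argument is exactly the one sketched here.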
Similar to s = p + q + 2, we define the value r associated with a [ρ, σ]-domination problem as follows:

    r = max{p − 1, q − 1}  if ρ and σ are cofinite
        max{p, q − 1}      if ρ is finite and σ is cofinite
        max{p − 1, q}      if ρ is cofinite and σ is finite
        max{p, q}          if ρ and σ are finite

Corollary 3.14 (General [ρ, σ]-Domination Problems)
Let ρ, σ ⊆ N be finite or cofinite, and let p, q, r, and s be the values associated with the corresponding [ρ, σ]-domination problem. There is an algorithm that, given a tree decomposition of a graph G of width k, computes the number of [ρ, σ]-dominating sets in G of each size κ, 0 ≤ κ ≤ n, in O(n^3 (rk)^{2r} s^{k+1} i_×(n)) time. Moreover, there is an algorithm that decides whether there exists a [ρ, σ]-dominating set of size κ, for each individual value of κ, 0 ≤ κ ≤ n, in O(n^3 (rk)^{2r} s^{k+1} i_×(log(n) + k log(r))) time.

Proof: We improve the polynomial factor (sk)^{2(s−2)} to (rk)^{2r} by making the following observation. We never combine a partial solution corresponding to a ρ-state in one child node with a partial solution corresponding to a σ-state on the same vertex in the other child node. Therefore, we can combine the components of the index vector related to the states ρ_j and σ_j, for each fixed j, into a single index. For example, consider the ρ_1-states and σ_1-states. For these states, this means the following: if we index the number of vertices used to create a ρ_1-state or σ_1-state in i_1, and we have i_1 vertices on which a partial solution is formed by considering the combinations (ρ_0, ρ_1), (ρ_1, ρ_0), (ρ_1, ρ_1), (σ_0, σ_1), (σ_1, σ_0), or (σ_1, σ_1), then none of the combinations (ρ_1, ρ_1) and (σ_1, σ_1) could have been used. Since the new components of the index vector range between 0 and rk, this proves the first running time in the statement of the corollary.

The second running time follows from reasoning similar to that in Corollary 3.6. In this case, we can stop counting the number of partial solutions of each size and instead keep track of the existence of a partial solution of each size. The state transformations then count the number of 1-entries in the initial tables instead of the number of solutions. After computing the table for a join node, we have to reset all entries e of A_x to min{1, e}.
For these computations, we can use O(log(n) + k log(r))-bit numbers. This is because of the following reasoning. For a fixed colouring c using State Set II, each of the at most r^{k+1} colourings using State Set I that can be counted in c occurs with at most one index vector in the tables A′_l and A′_r. Note that these are r^{k+1} colourings, not s^{k+1} colourings, since ρ-states are never counted in a colouring c where the vertex has a σ-state, and vice versa. Therefore, the result of the large summation over all index vectors ~g and ~h with ~i = ~g + ~h can be bounded from above by (r^{k+1})^2. Since we sum over n possible combinations of sizes, the maximum is n·r^{2(k+1)}, allowing us to use O(log(n) + k log(r))-bit numbers. □

As a result, we can, for example, compute the size of a minimum-cardinality perfect code in O(n^3 k^4 5^k i_×(log(n))) time. Note that the time bound follows because the problem is fixed and we use a computational model with O(k)-bit word size.

Corollary 3.15 ([ρ, σ]-Optimisation Problems with the de Fluiter Property)
Let ρ, σ ⊆ N be finite or cofinite, and let p, q, r, and s be the values associated with the corresponding [ρ, σ]-domination problem. If the standard representation using State Set I of the minimisation (or maximisation) variant of this [ρ, σ]-domination problem has the de Fluiter property for treewidth with function f, then there is an algorithm that, given a tree decomposition of a graph G of width k, computes the number of minimum (or maximum) [ρ, σ]-dominating sets in G in O(n (f(k))^2 (rk)^{2r} s^{k+1} i_×(n)) time. Moreover, there is an algorithm that computes the minimum (or maximum) size of such a [ρ, σ]-dominating set in O(n (f(k))^2 (rk)^{2r} s^{k+1} i_×(log(n) + k log(r))) time.

Proof:
The difference with the proof of Corollary 3.14 is that, similar to the proof of Corollary 3.4, we can keep track of the minimum or maximum size of a partial solution in each node of the tree decomposition and consider only other partial solutions whose size differs by at most f(k) from this minimum or maximum size. As a result, both factors n (the factor n due to the size of the tables, and the factor n due to the summation over the sizes of partial solutions) are replaced by a factor f(k). □

As an application of Corollary 3.15, it follows for example that Total Dominating Set can be solved in O(n k^2 4^k i_×(log(n))) time.

Corollary 3.16 ([ρ, σ]-Decision Problems)
Let ρ, σ ⊆ N be finite or cofinite, and let p, q, r, and s be the values associated with the corresponding [ρ, σ]-domination problem. There is an algorithm that, given a tree decomposition of a graph G of width k, counts the number of [ρ, σ]-dominating sets in G in O(n (rk)^{2r} s^{k+1} i_×(n)) time. Moreover, there is an algorithm that decides whether there exists a [ρ, σ]-dominating set in O(n (rk)^{2r} s^{k+1} i_×(log(n) + k log(r))) time.

Proof:
This result follows similarly to Corollary 3.15. In this case, we can omit the size parameter from our tables, and we can remove the sum over the sizes in the computation of the entries of A′_x completely. □

As an application of Corollary 3.16, it follows for example that we can compute the number of strong stable sets (distance-2 independent sets) in O(n k^4 5^k i_×(n)) time.

Figure 3: Join tables for the γ-clique problems: the left table corresponds to partitioning and packing problems, and the right table corresponds to covering problems.

The final class of problems that we consider for our tree-decomposition-based algorithms are the clique covering, packing, and partitioning problems. To give a general result, we defined the γ-clique covering, γ-clique packing, and γ-clique partitioning problems in Section 2.1; see Definition 2.2. For these γ-clique problems, we obtain O*(2^k) algorithms.

Although any natural problem seems to satisfy this restriction, we remind the reader that we restrict ourselves to polynomial-time decidable γ, that is, given an integer j, we can decide in time polynomial in j whether j ∈ γ or not. This allows us to precompute γ ∩ {1, 2, ..., k + 1} in time polynomial in k, after which we can decide in constant time whether a clique of size l is allowed to be used in an associated covering, packing, or partitioning.

We start by giving algorithms for the γ-clique packing and partitioning problems.

Theorem 3.17
Let γ ⊆ N \ {0} be polynomial-time decidable. There is an algorithm that, given a tree decomposition of a graph G of width k, computes the number of γ-clique packings or γ-clique partitionings of G using κ cliques, for each 0 ≤ κ ≤ n, in O(n^3 k^2 2^k i_×(nk + n log(n))) time.

Proof:
Before we start the dynamic programming on the tree decomposition T, we first compute the set γ ∩ {1, 2, ..., k + 1}.

We use states 0 and 1 for the colourings c, where 1 means that a vertex is already in a clique in the partial solution, and 0 means that the vertex is not in a clique in the partial solution. For each node x ∈ T, we compute a table A_x with entries A_x(c, κ) containing the number of γ-clique packings or partitionings of G_x consisting of exactly κ cliques that satisfy the requirements defined by the colouring c ∈ {0, 1}^{|X_x|}, for all 0 ≤ κ ≤ n.

The algorithm uses the well-known property of tree decompositions that for every clique C in the graph G, there exists a node x ∈ T such that C is contained in the bag X_x (a nice proof of this property can be found in [14]). As every vertex in G is forgotten in exactly one forget node in T, we can implicitly assign a unique forget node x_C to every clique C, namely the first forget node that forgets a vertex from C. In this forget node x_C, we will update the dynamic programming tables such that they take the choice of whether to pick C in a solution into account.

Leaf node: Let x be a leaf node in T. We compute A_x in the following way:

    A_x({0}, κ) = 1 if κ = 0, and 0 otherwise        A_x({1}, κ) = 0

Since we decide to take cliques into a partial solution only in the forget nodes, the only partial solution we count in A_x is the empty solution.

Introduce node: Let x be an introduce node in T with child node y introducing the vertex v. Deciding whether to take a clique into a solution in the corresponding forget nodes makes the introduce operation trivial, since the introduced vertex must have state 0:

    A_x(c × {1}, κ) = 0        A_x(c × {0}, κ) = A_y(c, κ)

Join node: In contrast to previous algorithms, we will first present the computations in the join nodes. We do so because we will use this operation as a subroutine in the forget nodes. Let x be a join node in T and let l and r be its child nodes.
For the γ-clique partitioning and packing problems, the join is very similar to the join in the algorithm for counting the number of perfect matchings (Theorem 3.9). This can be seen from the corresponding join table; see Figure 3. The only difference is that we now also have the size parameter κ. Hence, for y ∈ {l, r}, we first create the tables A′_y with entries A′_y(c, κ, i), where i indexes the number of 1-states in c. Then, we transform the set of states used for these tables A′_y from {0, 1} to {0, ?} using Lemma 3.8, and compute the table A′_x, now with the extra size parameter κ, using the following formula:

    A′_x(c, κ, i) = Σ_{κ_l + κ_r = κ} Σ_{i = i_l + i_r} A′_l(c, κ_l, i_l) · A′_r(c, κ_r, i_r)

Finally, the states in A′_x are transformed back to the set {0, 1}, after which the entries of A_x can be extracted that correspond to the correct number of 1-states in c. Because the approach described above is a simple extension of the join operation in the proof of Theorem 3.9, which was also used in the proof of Theorem 3.13, we omit further details.

Forget node: Let x be a forget node in T with child node y forgetting the vertex v. Here, we first update the table A_y such that it takes into account the choice of taking any clique in X_y that contains v in a solution or not.

Let M be a table with all the (non-empty) cliques C in X_y that contain the vertex v and such that |C| ∈ γ, i.e., M contains all the cliques that we need to consider before forgetting the vertex v.
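Such a table M can be enumerated straight from the bag; a sketch with a hypothetical adjacency-set representation of the graph:

```python
from itertools import combinations

def cliques_for_forget(bag, v, adj, gamma_allowed):
    """All cliques C in the bag with v in C and |C| in gamma_allowed, where
    adj[u] is the set of neighbours of u.  A bag of size k+1 has at most
    2^(k+1) subsets, matching the O(2^k) cliques-per-bag bound used below."""
    others = [u for u in bag if u != v]
    M = []
    for r in range(len(others) + 1):
        for extra in combinations(others, r):
            C = (v,) + extra
            if len(C) in gamma_allowed and all(
                b in adj[a] for a, b in combinations(C, 2)
            ):
                M.append(frozenset(C))
    return M

# Example: a triangle {1,2,3} with a pendant vertex 4 attached to 3;
# for gamma = {3}, only the triangle itself is a candidate when forgetting 3.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
M = cliques_for_forget(bag=[1, 2, 3, 4], v=3, adj=adj, gamma_allowed={3})
```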
We notice that the operation of updating A_y such that it takes into account all possible ways of choosing the cliques in M is identical to letting the new A_y be the result of the join operation on A_y and the following table A_M:

    A_M(c × {1}, κ) = 1 if the vertices with a 1-state in c form a clique with v that is in M, and κ = 1; 0 otherwise
    A_M(c × {0}, κ) = 1 if c is the colouring with only 0-states and κ = 0; 0 otherwise

It is not hard to see that this updates A_y as required, since A_M(c, κ) is non-zero only when a clique in M is used with size κ = 1, or if no clique is used and κ = 0.

If we consider a partitioning problem, then A_x(c, κ) = A_y(c × {1}, κ), since v must be contained in a clique. If we consider a packing problem, then A_x(c, κ) = A_y(c × {0}, κ) + A_y(c × {1}, κ), since v can but does not need to be in a clique. Clearly, this correctly computes A_x.

After computing A_z for the root node z of T, the number of γ-clique packings or partitionings of each size κ can be found in A_z(∅, κ).

For the running time, we first observe that there are at most O(n·2^k) cliques in G, since T has O(n) nodes whose bags each contain at most k + 1 vertices. Hence, there are at most O((n·2^k)^n) ways to pick at most n cliques, and we can work with O(nk + n log(n))-bit numbers. As a join and a forget operation require O(n^2 k^2 2^k) arithmetical operations, the running time is O(n^3 k^2 2^k i_×(nk + n log(n))). □

For the γ-clique covering problems, the situation is different. We cannot count the number of γ-clique covers of all possible sizes, as the size of such a cover can be arbitrarily large. Even if we restrict ourselves to counting covers that contain each clique at most once, we need numbers with an exponential number of bits. To see this, notice that the number of cliques in a graph of treewidth k is at most O*(2^k), since there are at most O(2^k) different cliques in each bag.
Hence, there are at most 2^{O*(2^k)} different clique covers, and these can be counted using only O*(2^k)-bit numbers. Therefore, we will restrict ourselves to counting covers of size at most n, because minimum covers will never be larger than n.

A second difference is that, in a forget node, we now need to consider covering the forgotten vertex multiple times. This requires a slightly different approach.

Theorem 3.18
Let γ ⊆ N \ {0} be polynomial-time decidable. There is an algorithm that, given a tree decomposition of a graph G of width k, computes the size and number of minimum γ-clique covers of G in O(n^3 log(k) k^2 2^k i_×(nk + n log(n))) time.

Proof:
The dynamic programming algorithm for counting the number of minimum γ-clique covers is similar to the algorithm of Theorem 3.17. It uses the same tables A_x for every x ∈ T, with entries A_x(c, κ) for all c ∈ {0, 1}^{|X_x|} and 0 ≤ κ ≤ n. Moreover, the computations of these tables in a leaf or introduce node of T are the same.

Join node: Let x be a join node in T and let l and r be its child nodes. The join operation is different from the join operation in the algorithm of Theorem 3.17, as can be seen from Figure 3. Here, the join operation is similar to our method of handling the 0_0-states and 0_1-states for the Dominating Set problem in the algorithm of Theorem 3.3. We simply transform the states in A_l and A_r to {0, ?} and compute A_x using these same states by summing over identical entries with different size parameters:

    A_x(c, κ) = Σ_{κ_l + κ_r = κ} A_l(c, κ_l) · A_r(c, κ_r)

Then, we obtain the required result by transforming A_x back to the set of states {0, 1}. We omit further details because this works analogously to Theorem 3.3. The only difference with before is that we use the value zero for any A_l(c, κ_l) or A_r(c, κ_r) with κ_l, κ_r < 0 or κ_l, κ_r > n, as these never contribute to minimum clique covers.

Forget node: Let x be a forget node in T with child node y forgetting the vertex v. In contrast to Theorem 3.17, we now have to consider covering v with multiple cliques. In a minimum cover, v can be covered at most k times, because there are no vertices in X_x left to cover after using k cliques from X_x.

Let A_M be as in Theorem 3.17. What we need is a table that contains more than just all cliques that can be used to cover v: it needs to count all combinations of cliques that we can pick to cover v at most k times, indexed by the number of cliques used. To create this new table, we let A_M^0 = A_M, and we let A_M^j be the result of the join operation applied to the table A_M^{j−1} with itself.
Then, the table A_M^j counts all ways of picking a series of 2^j sets C_1, C_2, ..., C_{2^j}, where each set is either the empty set or a clique from M. To see that this holds, compare the definition of the join operation for this problem to the result of executing these operations repeatedly. The algorithm computes A_M^{⌈log(k)⌉}. Because we want to know the number of clique covers that we can choose, and not the number of series of 2^{⌈log(k)⌉} sets C_1, C_2, ..., C_{2^{⌈log(k)⌉}}, we have to compensate for the fact that most covers are counted more than once. Clearly, each cover consisting of κ cliques corresponds to a series in which 2^{⌈log(k)⌉} − κ empty sets are picked: there are C(2^{⌈log(k)⌉}, κ) possibilities of picking the empty sets, and κ! permutations of picking each of the κ cliques in some order. Hence, we divide each entry A_M^{⌈log(k)⌉}(c, κ) by κ!·C(2^{⌈log(k)⌉}, κ). Now, A_M^{⌈log(k)⌉} contains the numbers we need for a join with A_y.

After performing the join operation on A_y and A_M^{⌈log(k)⌉}, obtaining a new table A_y, we select the entries of A_y that cover v: A_x(c, κ) = A_y(c × {1}, κ).

If we have computed A_z for the root node z of T, the size of the minimum γ-clique cover equals the smallest κ for which A_z(∅, κ) is non-zero, and this entry contains the number of such covers.

For the running time, we find that, in order to compute A_M^{⌈log(k)⌉}, we need O(log(k)) join operations. The running time then follows from the same analysis as in Theorem 3.17. □

Similar to previous results, we can improve the polynomial factors involved.
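The correction factor in this proof (dividing by κ!·C(2^{⌈log k⌉}, κ)) can be checked in miniature: with 2^{⌈log k⌉} = 4 ordered slots and three available cliques, counting series in which each slot is empty or holds a clique, and then dividing, recovers the number of κ-element choices. This sketch restricts to series without repeated cliques so as to isolate exactly this counting argument; the names are hypothetical:

```python
from itertools import product
from math import comb, factorial

items = ["C1", "C2", "C3"]   # three hypothetical cliques from M
slots = 4                    # 2**ceil(log2(k)) for k = 3

series_by_kappa = {}
for s in product([None] + items, repeat=slots):
    picked = [x for x in s if x is not None]
    if len(set(picked)) == len(picked):        # each clique used at most once
        kappa = len(picked)
        series_by_kappa[kappa] = series_by_kappa.get(kappa, 0) + 1

# Divide out the kappa! orders and the C(slots, kappa) placements of the
# empty sets, as in the proof above:
covers_by_kappa = {
    kappa: series_by_kappa[kappa] // (factorial(kappa) * comb(slots, kappa))
    for kappa in series_by_kappa
}
```

The resulting counts are exactly C(3, κ), the number of ways to choose κ of the three cliques.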
Corollary 3.19
Let γ ⊆ N \ {0} be polynomial-time decidable. There are algorithms that, given a tree decomposition of a graph G of width k:

1. decide whether there exists a γ-clique partitioning of G in O(n k^2 2^k) time.
2. count the number of γ-clique packings in G or the number of γ-clique partitionings in G in O(n k^2 2^k i_×(nk + n log(n))) time.
3. compute the size of a maximum γ-clique packing in G, maximum γ-clique partitioning in G, or minimum γ-clique partitioning in G of a problem with the de Fluiter property for treewidth in O(n k^4 2^k) time.
4. compute the size of a minimum γ-clique cover in G of a problem with the de Fluiter property for treewidth in O(n k^4 log(k) 2^k) time, or in O(n k^4 2^k) time if |γ| is a constant.
5. compute the number of maximum γ-clique packings in G, maximum γ-clique partitionings in G, or minimum γ-clique partitionings in G of a problem with the de Fluiter property for treewidth in O(n k^4 2^k i_×(nk + n log(n))) time.
6. compute the number of minimum γ-clique covers in G of a problem with the de Fluiter property for treewidth in O(n k^4 log(k) 2^k i_×(nk + n log(n))) time, or in O(n k^4 2^k i_×(nk + n log(n))) time if |γ| is a constant.

Proof (sketch): Similar to before. Either use the de Fluiter property to replace a factor n by k, or omit the size parameter to completely remove this factor n where possible. Moreover, we can use O(k)-bit numbers instead of O(nk + n log(n))-bit numbers if we are not counting the number of solutions. In this case, we omit the time required for the arithmetic operations because of the computational model that we use with O(k)-bit word size. For the γ-clique cover problems where |γ| is a constant, we note that we can use A_M^p for some constant p, because, in a forget node, we need only a constant number of repetitions of the join operation on A_M instead of log(k) repetitions. □

By this result,
Partition Into Triangles can be solved in O(n k² 2^k) time. For this problem, Lokshtanov et al. proved that the given exponential factor in the running time is optimal, unless the Strong Exponential-Time Hypothesis fails [55].

We note that in Corollary 3.19 the problem of deciding whether there exists a γ-clique cover is omitted. This is because this problem can easily be solved without dynamic programming on the tree decomposition by considering each vertex and testing whether it is contained in a clique whose size is a member of γ. This requires O(n k² 2^k) time in general, and polynomial time if |γ| is a constant.

Dynamic programming algorithms on branch decompositions work similarly to those on tree decompositions. The tree is traversed in a bottom-up manner while computing tables A_e with partial solutions on G_e for every edge e of T (see the definitions in Section 2.2.2). Again, the table A_e contains partial solutions of each possible characteristic, where two partial solutions P_1 and P_2 have the same characteristic if any extension of P_1 to a solution on G is also an extension of P_2 to a solution on G. After computing a table for every edge e ∈ E(T), we find a solution for the problem on G in the single entry of the table A_{y,z}, where z is the root of T and y is its only child node. Because the size of the tables is often (at least) exponential in the branchwidth k, such an algorithm typically runs in O(f(k) poly(n)) time, for some function f that grows at least exponentially. See Proposition 4.1 for an example algorithm.

In this section, we improve the exponential part of the running time for many dynamic programming algorithms on branch decompositions. A difference from our results on tree decompositions is that when the number of partial solutions stored in a table is O*(s^k), our algorithms will run in O*(s^{ωk/2}) time.
This difference in the running time is due to the fact that the structure of a branch decomposition differs from that of a tree decomposition. A tree decomposition can be transformed into a nice tree decomposition such that every join node x with children l, r has X_x = X_l = X_r. A branch decomposition does not have such a property: here, we need to consider combining partial solutions from both tables of the child edges while forgetting and introducing new vertices at the same time.

This section is organised as follows. We start by setting up the framework that we use for dynamic programming on branch decompositions by giving a simple algorithm in Section 4.1. Hereafter, we give our results on Dominating Set in Section 4.2, our results on counting perfect matchings in Section 4.3, and our results on the [ρ, σ]-domination problems in Section 4.4.
We will first give a simple dynamic programming algorithm for the
Dominating Set problem. This algorithm follows from standard techniques for branchwidth-based algorithms and will be improved later.
Proposition 4.1
There is an algorithm that, given a branch decomposition of a graph G of width k, counts the number of dominating sets in G of each size κ, 0 ≤ κ ≤ n, in O(m n² 6^k i×(n)) time. Proof:
Let T be a branch decomposition of G rooted at a vertex z. For each edge e ∈ E(T), we will compute a table A_e with entries A_e(c, κ) for all c ∈ {1, 0_1, 0_0}^{X_e} and all 0 ≤ κ ≤ n. Here, c is a colouring with states 1, 0_1, and 0_0 that have the same meaning as in the tree-decomposition-based algorithms: see Table 3. In the table A_e, an entry A_e(c, κ) equals the number of partial solutions of Dominating Set of size κ in G_e that satisfy the requirements defined by the colouring c on the vertices in X_e. That is, the number of vertex sets D ⊆ V_e of size κ that dominate all vertices in V_e except for those with state 0_0 in colouring c of X_e, and that contain all vertices in X_e with state 1 in c.

The described tables A_e are computed by traversing the decomposition tree T in a bottom-up manner. A branch decomposition has only two kinds of edges for which we need to compute such a table: leaf edges, and internal edges, which have two child edges.

Leaf edges: Let e be an edge of T incident to a leaf of T that is not the root. Then, G_e = G[X_e] is a two-vertex graph with X_e = {u, v}. Note that {u, v} ∈ E. We compute A_e in the following way:

A_e(c, κ) = 1 if κ = 2 and c = (1, 1)
A_e(c, κ) = 1 if κ = 1 and either c = (1, 0_1) or c = (0_1, 1)
A_e(c, κ) = 1 if κ = 0 and c = (0_0, 0_0)
A_e(c, κ) = 0 otherwise

The entries in this table are zero unless the colouring c represents one of the four possible partial solutions of Dominating Set on G_e and the size of this solution is κ. In these non-zero entries, the single partial solution represented by c is counted.

Internal edges: Let e be an internal edge of T with child edges l and r. Recall the definition of the sets I, L, R, F induced by X_e, X_l, and X_r (Definition 2.7). Given a colouring c, let c(I) denote the colouring of the vertices of I induced by c. We define c(L), c(R), and c(F) in the same way.
Given a colouring c_e of X_e, a colouring c_l of X_l, and a colouring c_r of X_r, we say that these colourings match if they correspond to a correct combination of two partial solutions with the colourings c_l and c_r on X_l and X_r whose result is a partial solution that corresponds to the colouring c_e on X_e. For a vertex in each of the four partitions I, L, R, and F of X_e ∪ X_l ∪ X_r, this means something different:

• For any v ∈ I: either c_e(v) = c_l(v) = c_r(v) ∈ {1, 0_0}, or c_e(v) = 0_1 while c_l(v), c_r(v) ∈ {0_0, 0_1} and not c_l(v) = c_r(v) = 0_0. (5 possibilities)
• For any v ∈ F: either c_l(v) = c_r(v) = 1, or c_l(v), c_r(v) ∈ {0_0, 0_1} while not c_l(v) = c_r(v) = 0_0. (4 possibilities)
• For any v ∈ L: c_e(v) = c_l(v) ∈ {1, 0_1, 0_0}. (3 possibilities)
• For any v ∈ R: c_e(v) = c_r(v) ∈ {1, 0_1, 0_0}. (3 possibilities)

That is, for vertices in L or R, the properties defined by the colourings are copied from A_l and A_r to A_e. For vertices in I, the properties defined by the colouring c_e are a combination of the properties defined by c_l and c_r in the same way as for tree decompositions (as in Proposition 3.1). For vertices in F, the properties defined by the colourings are such that they form correct combinations in which the vertices may be forgotten, i.e., such a vertex is in the vertex sets of both partial solutions, or it is in neither vertex set while it is dominated.

Let κ_1 = #_1(c_r(I ∪ F)) be the number of vertices that are assigned state 1 on I ∪ F in any matching triple c_e, c_l, c_r.
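These per-vertex matching rules can be checked mechanically. The following sketch (state names are ad hoc) recovers the counts 5, 4, 3, and 3 that enter the running-time analysis:

```python
from itertools import product

ONE, O1, O0 = "1", "0_1", "0_0"   # in D / dominated / not yet dominated
STATES = (ONE, O1, O0)

def count(cls):
    """Matching state combinations for one vertex of class cls,
    following the rules of Proposition 4.1."""
    if cls == "I":
        return sum(1 for ce, cl, cr in product(STATES, repeat=3)
                   if (ce == cl == cr and ce in (ONE, O0))
                   or (ce == O1 and cl in (O0, O1) and cr in (O0, O1)
                       and not (cl == cr == O0)))
    if cls == "F":   # the vertex is forgotten, so there is no c_e
        return sum(1 for cl, cr in product(STATES, repeat=2)
                   if (cl == cr == ONE)
                   or (cl in (O0, O1) and cr in (O0, O1)
                       and not (cl == cr == O0)))
    # L and R: the state is copied from the corresponding child table
    return sum(1 for ce, cc in product(STATES, repeat=2) if ce == cc)

print([count(c) for c in "IFLR"])  # [5, 4, 3, 3]
```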
We can count the number of partial solutions on G_e satisfying the requirements defined by each colouring c_e on X_e using the following formula:

A_e(c_e, κ) = Σ_{c_e, c_l, c_r match} Σ_{κ_l + κ_r = κ + κ_1} A_l(c_l, κ_l) · A_r(c_r, κ_r)

Notice that this formula correctly counts all possible partial solutions on G_e per corresponding colouring c_e on X_e by counting all valid combinations of partial solutions on G_l corresponding to a colouring c_l on X_l and partial solutions on G_r corresponding to a colouring c_r on X_r. Vertices with state 1 on I ∪ F are counted in both κ_l and κ_r, which is why κ_1 is added to κ on the right-hand side.

Let {y, z} be the edge incident to the root z of T. From the definition of A_{y,z}, A_{y,z}(∅, κ) contains the number of dominating sets of size κ in G_{y,z} = G.

For the running time, we observe that we can compute A_e in O(n) time for all leaf edges e of T. For the internal edges, we have to compute the O(n 3^k) values of A_e, each of which requires O(n) terms of the above sum per set of matching states. Since each vertex in I has 5 possible matching states, each vertex in F has 4 possible matching states, and each vertex in L or R has 3 possible matching states, we compute each A_e in O(n² 5^{|I|} 4^{|F|} 3^{|L|+|R|} i×(n)) time. Under the constraint that |I| + |L| + |R|, |I| + |L| + |F|, |I| + |R| + |F| ≤ k, the running time is maximal if |I| = 0 and |L| = |R| = |F| = k/2. As T has O(m) edges and we work with n-bit numbers, this leads to a running time of O(m n² 3^k 2^k i×(n)) = O(m n² 6^k i×(n)). □

The above algorithm gives the framework that we use in all of our dynamic programming algorithms on branch decompositions. In later algorithms, we will specify only how to compute the tables A_e for both kinds of edges.

De Fluiter Property for Branchwidth.
We conclude this subsection with a discussion of a de Fluiter property for branchwidth. We will see below that such a property for branchwidth is identical to the de Fluiter property for treewidth.

One could define a de Fluiter property for branchwidth by replacing the words treewidth and tree decomposition in Definition 3.2 by branchwidth and branch decomposition. However, the result would be a property equivalent to the de Fluiter property for treewidth. This is not hard to see: consider any edge e of a branch decomposition with middle set X_e. A representation of the different characteristics on X_e of partial solutions on G_e used on branch decompositions can also be used as a representation of the different characteristics on X_x of partial solutions on G_x on tree decompositions, if X_e = X_x and G_e = G_x. Clearly, an extension of a partial solution on G_e with some characteristic is equivalent to an extension of the same partial solution on G_x = G_e, and hence the representations can be used on both decompositions. The equivalence of both de Fluiter properties follows directly.

As a result, we will use the de Fluiter property for treewidth in this section. In Section 5, we will also define a de Fluiter property for cliquewidth; this property will be different from the other two.

We start by improving the above algorithm of Proposition 4.1. This improvement will be presented in two steps. First, we use state changes similar to what we did in Section 3.2. Thereafter, we further improve the result by using fast matrix multiplication in the same way as proposed by Dorn in [32]. As a result, we obtain an O*(3^{ωk/2})-time algorithm for Dominating Set on graphs given with any branch decomposition of width k.

Similar to tree decompositions, it is more efficient to transform the problem to one using states 1, 0_0, and 0_? if we want to combine partial solutions from different dynamic programming tables.
However, there is a big difference between dynamic programming on tree decompositions and on branch decompositions. On tree decompositions, we can deal with forget vertices separately, while this is not possible on branch decompositions. This makes the situation more complicated. On branch decompositions, vertices in F must be dealt with simultaneously with the computation of A_e from the two tables A_l and A_r for the child edges l and r of e. We overcome this problem by using different sets of states simultaneously. The set of states used depends on whether a vertex is in L, R, I, or F. Moreover, we do this asymmetrically, as different states can be used on the same vertices in the different tables A_e, A_l, A_r. This use of asymmetrical vertex states will later allow us to easily combine the use of state changes with fast matrix multiplication and obtain significant improvements in the running time.

We state this use of different states on different vertices formally. We note that this construction has already been used in the proof of Theorem 3.13.

Lemma 4.2
Let e be an edge of a branch decomposition T with corresponding middle set X_e, and let A_e be a table with entries A_e(c, κ) representing the number of partial solutions of Dominating Set in G_e of each size κ, for some range of κ, corresponding to all colourings c of the middle set X_e with states such that for every individual vertex in X_e one of the following fixed sets of states is used:

{1, 0_1, 0_0}    {1, 0_?, 0_0}    {1, 0_1, 0_?}    (see Table 3)

The information represented in the table A_e does not depend on the choice of the set of states from the options given above. Moreover, there exist transformations between tables using representations with different sets of states on each vertex using O(|X_e| |A_e|) arithmetic operations. Proof:
Use the same |X_e|-step transformation as in the proof of Lemma 3.5, with the difference that we can choose a different formula to change the states at each coordinate of the colouring c of X_e. At coordinate i of the colouring c, we use the formula that corresponds to the set of states that we want to use on the corresponding vertex in X_e. □

We are now ready to give the first improvement of Proposition 4.1.
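The coordinate-wise transformations of Lemma 4.2 can be sketched concretely for one pair of state sets, {1, 0_1, 0_0} ↔ {1, 0_1, 0_?} with 0_? = 0_0 + 0_1; the table layout and function names below are illustrative:

```python
from itertools import product
from random import randint, seed

STATES = ("1", "0_1", "0_0")  # the 0_0 slot doubles as 0_? after a change

def change(table, coord, forward):
    """One coordinate of the |X_e|-step transformation of Lemma 4.2:
    forward stores 0_? = 0_0 + 0_1 in the 0_0 slot; backward inverts it
    via 0_0 = 0_? - 0_1. Entries with other states at coord are untouched."""
    new = dict(table)
    for c in table:
        if c[coord] == "0_0":
            other = c[:coord] + ("0_1",) + c[coord + 1:]
            new[c] = table[c] + table[other] if forward else table[c] - table[other]
    return new

def to_states(table, k, forward):
    for coord in range(k):
        table = change(table, coord, forward)
    return table

seed(1)
k = 3
A = {c: randint(0, 9) for c in product(STATES, repeat=k)}
# both representations carry the same information: the round trip is lossless
assert to_states(to_states(A, k, True), k, False) == A
```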
Proposition 4.3
There is an algorithm that, given a branch decomposition of a graph G of width k, counts the number of dominating sets in G of each size κ, 0 ≤ κ ≤ n, in O(m n² 3^{3k/2} i×(n)) time. Proof:
The algorithm is similar to the algorithm of Proposition 4.1, only we employ a different method to compute A_e for an internal edge e of T.

Internal edges: Let e be an internal edge of T with child edges l and r. We start by applying Lemma 4.2 to A_l and A_r and change the sets of states used for each individual vertex in the following way. We let A_l use the set of states {1, 0_?, 0_0} on vertices in I and the set of states {1, 0_1, 0_0} on vertices in L and F. We let A_r use the set of states {1, 0_?, 0_0} on vertices in I, the set of states {1, 0_1, 0_0} on vertices in R, and the set of states {1, 0_1, 0_?} on vertices in F. Finally, we let A_e use the set of states {1, 0_?, 0_0} on vertices in I and the set of states {1, 0_1, 0_0} on vertices in L and R. Notice that different colourings use the same sets of states on the same vertices with the exception of the set of states used for vertices in F; here, A_l and A_r use different sets of states.

Now, three colourings c_e, c_l and c_r match if:

• For any v ∈ I: c_e(v) = c_l(v) = c_r(v) ∈ {1, 0_?, 0_0}. (3 possibilities)
• For any v ∈ F: either c_l(v) = c_r(v) = 1, or c_l(v) = 0_0 and c_r(v) = 0_1, or c_l(v) = 0_1 and c_r(v) = 0_?. (3 possibilities)
• For any v ∈ L: c_e(v) = c_l(v) ∈ {1, 0_1, 0_0}. (3 possibilities)
• For any v ∈ R: c_e(v) = c_r(v) ∈ {1, 0_1, 0_0}. (3 possibilities)

For the vertices in I, these matching combinations are the same as used on tree decompositions in Theorem 3.3, namely the combinations with states from the set {1, 0_?, 0_0} where all states are the same. For the vertices in L and R, we do exactly the same as in the proof of Proposition 4.1.

For the vertices in F, a more complicated method has to be used. Here, we can use only combinations that make sure that these vertices will be dominated: combinations with vertices that are in the vertex set of the partial solution, or combinations in which the vertices are not in this vertex set, but in which they will be dominated.
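The combinations on F can be checked mechanically; a short sketch (state names ad hoc) expands the 0_?-state and compares against the four combinations of Proposition 4.1:

```python
# A_l uses {1, 0_1, 0_0} on F; A_r uses {1, 0_1, 0_?} with 0_? = "0_0 or 0_1"
EXPAND = {"1": {"1"}, "0_1": {"0_1"}, "0_0": {"0_0"}, "0_?": {"0_0", "0_1"}}
asym = [("1", "1"), ("0_0", "0_1"), ("0_1", "0_?")]   # the three combinations

expanded = [(l, r) for l, rq in asym for r in EXPAND[rq]]
# they cover the four combinations of Proposition 4.1, each exactly once
assert sorted(expanded) == sorted(
    [("1", "1"), ("0_0", "0_1"), ("0_1", "0_0"), ("0_1", "0_1")])
print("no combination lost, none counted twice")
```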
Moreover, by using different states for A_l and A_r, every combination of partial solutions is counted exactly once. To see this, consider each of the three combinations on F used in Proposition 4.1 with the set of states {1, 0_1, 0_0}. The combination with c_l(v) = 0_0 and c_r(v) = 0_1 is counted using the same combination, while the other two combinations (c_l(v) = 0_1 and c_r(v) = 0_0, or c_l(v) = 0_1 and c_r(v) = 0_1) are counted when combining 0_1 with 0_?.

In this way, we can compute the entries in the table A_e using the following formula:

A_e(c_e, κ) = Σ_{c_e, c_l, c_r match} Σ_{κ_l + κ_r = κ + κ_1} A_l(c_l, κ_l) · A_r(c_r, κ_r)

Here, κ_1 = #_1(c_r(I ∪ F)) again is the number of vertices that are assigned state 1 on I ∪ F in any matching triple c_e, c_l, c_r. After having obtained A_e in this way, we can transform the set of states used back to {1, 0_1, 0_0} using Lemma 4.2.

For the running time, we observe that the combination of the different sets of states that we are using allows us to evaluate the above formula in O(n² 3^{|I|+|L|+|R|+|F|} i×(n)) time. As each state transformation requires O(n k 3^k i_+(n)) time, the improved algorithm has a running time of O(m n² 3^{|I|+|L|+|R|+|F|} i×(n)). Under the constraint that |I| + |L| + |R|, |I| + |L| + |F|, |I| + |R| + |F| ≤ k, the running time is maximal if |I| = 0 and |L| = |R| = |F| = k/2. This gives a total running time of O(m n² 3^{3k/2} i×(n)). □

We will now give our faster algorithm for counting the number of dominating sets of each size κ, 0 ≤ κ ≤ n, on branch decompositions. This algorithm uses fast matrix multiplication to speed up the algorithm of Proposition 4.3. This use of fast matrix multiplication in dynamic programming algorithms on branch decompositions was first proposed by Dorn in [32].

Theorem 4.4
There is an algorithm that, given a branch decomposition of a graph G of width k, counts the number of dominating sets in G of each size κ, 0 ≤ κ ≤ n, in O(m n² 3^{ωk/2} i×(n)) time. Proof:
Consider the algorithm of Proposition 4.3. We will make one modification to this algorithm. Namely, when computing the table A_e for an internal edge e ∈ E(T), we will show how to evaluate the formula for A_e(c, κ) for a number of colourings c simultaneously using fast matrix multiplication. We give the details below. Here, we assume that the states in the tables A_l and A_r are transformed such that the given formula for A_e(c, κ) in Proposition 4.3 applies.

We do the following. First, we fix the two numbers κ and κ_l, and we fix a colouring of I. Note that this is well-defined because all three tables use the same set of states for colours on I. Second, we construct a 3^{|L|} × 3^{|F|} matrix M_l where each row corresponds to a colouring of L and each column corresponds to a colouring of F, where both colourings use the states used by the corresponding vertices in A_l. We let the entries of M_l be the values A_l(c_l, κ_l) for the c_l corresponding to the colourings of L and F of the given row and column of M_l, and corresponding to the fixed colouring on I and the fixed number κ_l. We also construct a similar 3^{|F|} × 3^{|R|} matrix M_r with entries from A_r such that its rows correspond to different colourings of F and its columns correspond to different colourings of R, where both colourings use the states used by the corresponding vertices in A_r. The entries of M_r are the values A_r(c_r, κ + κ_1 − κ_l), where c_r corresponds to the colouring of R and F of the given row and column of M_r, and corresponding to the fixed colouring on I and the fixed numbers κ and κ_l. Here, the value of κ_1 = #_1(c_r(I ∪ F)) depends on the colouring c_r in the same way as in Proposition 4.3.
Third, we permute the rows of M_r such that column i of M_l and row i of M_r correspond to matching colourings on F.

Now, we can evaluate the formula for A_e for all entries corresponding to the fixed colouring on I and the fixed values of κ and κ_l simultaneously by computing M_e = M_l · M_r. Clearly, M_e is a 3^{|L|} × 3^{|R|} matrix where each row corresponds to a colouring of L and each column corresponds to a colouring of R. If one works out the matrix product M_l · M_r, one can see that each entry of M_e contains the sum of the terms of the formula for A_e(c_e, κ) such that the colouring c_e corresponds to the given row and column of M_e and the given fixed colouring on I, and such that κ_l + κ_r = κ + κ_1 corresponding to the fixed κ and κ_l. That is, each entry in M_e equals the sum over all allowed matching combinations of the colouring on F for the fixed values of κ and κ_l, where the κ_r involved are adjusted such that the number of 1-states used on F is taken into account.

In this way, we can compute the table A_e by repeating the above matrix-multiplication-based process for every colouring on I and every value of κ and κ_l in the range from 0 to n. As a result, we can compute the table A_e by a series of n² 3^{|I|} matrix multiplications.

The time required to compute A_e in this way depends on |I|, |L|, |R| and |F|. Under the constraint that |I| + |L| + |F|, |I| + |R| + |F|, |I| + |L| + |R| ≤ k and using the matrix-multiplication algorithms for square and non-square matrices as described in Section 2.3, the worst case arises when |I| = 0 and |L| = |R| = |F| = k/2. In this case, we compute each table A_e in O(n² (3^{k/2})^ω i×(n)) time. This gives a total running time of O(m n² 3^{ωk/2} i×(n)). □

Using the fact that
Dominating Set has the de Fluiter property for treewidth, and using the same tricks as in Corollaries 3.4 and 3.6, we also obtain the following results.
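In miniature, the join over F in Theorem 4.4 is literally a matrix product. A toy sketch with |L| = |R| = |F| = 1, three states per vertex, the row permutation on F assumed done, and the fixed colouring on I and fixed κ, κ_l suppressed (all names are illustrative):

```python
from itertools import product
from random import randint, seed

seed(7)
S = range(3)  # three states per vertex; |L| = |R| = |F| = 1 in this toy

# A_l indexed by (colour on L, colour on F); A_r by (colour on F, colour on R)
Al = {(a, f): randint(0, 9) for a, f in product(S, S)}
Ar = {(f, b): randint(0, 9) for f, b in product(S, S)}

# direct evaluation: sum over pairs of colourings that match on F
direct = {(a, b): sum(Al[a, f] * Ar[g, b] for f in S for g in S if f == g)
          for a, b in product(S, S)}

# the same computation as one matrix product M_e = M_l * M_r
Me = [[sum(Al[a, f] * Ar[f, b] for f in S) for b in S] for a in S]

assert all(direct[a, b] == Me[a][b] for a, b in product(S, S))
print("join over F == matrix product")
```

Fast matrix multiplication then beats evaluating the 3^{|L|+|F|+|R|} direct sum.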
Corollary 4.5
There is an algorithm that, given a branch decomposition of a graph G of width k, counts the number of minimum dominating sets in G in O(m k² 3^{ωk/2} i×(n)) time.

Corollary 4.6
There is an algorithm that, given a branch decomposition of a graph G of width k, computes the size of a minimum dominating set in G in O(m k² 3^{ωk/2} i×(k)) time.

4.3 Counting the Number of Perfect Matchings

The next problem we consider is the problem of counting the number of perfect matchings in a graph. To give a fast algorithm for this problem, we use both the ideas introduced in Theorem 3.9 to count the number of perfect matchings on graphs given with a tree decomposition, and the idea of using fast matrix multiplication of Dorn [32] found in Theorem 4.4. We note that [32, 33] did not consider counting perfect matchings. The result will be an O*(2^{ωk/2})-time algorithm.

From the algorithm to count the number of dominating sets of each given size in a graph of bounded branchwidth, it is clear that vertices in I and F need special attention when developing a dynamic programming algorithm over branch decompositions. This is no different when we consider counting the number of perfect matchings. For the vertices in I, we will use state changes and an index similar to Theorem 3.9, but for the vertices in F we will require only that all these vertices are matched. In contrast to the approach on tree decompositions, we will not take into account the fact that we can pick edges in the matching at the point in the algorithm where we forget the first endpoint of the edge. We represent this choice directly in the tables A_e of the leaf edges e: this is possible because every edge of G is uniquely assigned to a leaf of T.

Our algorithm will again be based on state changes, where we will again use different sets of states on vertices with different roles in the computation.

Lemma 4.7
Let e be an edge of a branch decomposition T with corresponding middle set X_e, and let A_e be a table with entries A_e(c) representing the number of matchings in H_e matching all vertices in V_e \ X_e and corresponding to all colourings c of the middle set X_e with states such that for every individual vertex in X_e one of the following fixed sets of states is used:

{1, 0}    {1, ?}    {0, ?}    (meaning of the states as in Lemma 3.8)

The information represented in the table A_e does not depend on the choice of the set of states from the options given above. Moreover, there exist transformations between tables using representations with different sets of states on each vertex using O(|X_e| |A_e|) arithmetic operations. Proof:
The proof is identical to that of Lemma 4.2 while using the formulas from the proof of Lemma 3.8. □
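For reference, the quantity computed in this subsection can be obtained by brute force on small graphs; a sketch (names are illustrative):

```python
from itertools import combinations

def count_perfect_matchings(n, edges):
    """Brute-force reference: the number of perfect matchings of a graph
    on vertices 0..n-1 given as an edge list."""
    if n % 2:
        return 0
    total = 0
    for subset in combinations(edges, n // 2):
        covered = [v for e in subset for v in e]
        if len(set(covered)) == n:  # every vertex matched exactly once
            total += 1
    return total

c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]   # 4-cycle: 2 perfect matchings
k4 = c4 + [(0, 2), (1, 3)]              # K4: 3 perfect matchings
print(count_perfect_matchings(4, c4), count_perfect_matchings(4, k4))  # 2 3
```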
Now, we are ready to prove the result for counting perfect matchings.
Theorem 4.8
There is an algorithm that, given a branch decomposition of a graph G of width k, counts the number of perfect matchings in G in O(m k² 2^{ωk/2} i×(k log(n))) time. Proof:
Let T be a branch decomposition of G of branchwidth k rooted at a vertex z. For each edge e ∈ E(T), we will compute a table A_e with entries A_e(c) for all c ∈ {1, 0}^{X_e}, where the states have the same meaning as in Theorem 3.9. In this table, an entry A_e(c) equals the number of matchings in the graph H_e matching all vertices in V_e \ X_e and satisfying the requirements defined by the colouring c on the vertices in X_e. These entries do not count matchings in G_e but in its subgraph H_e, which has the same vertices as G_e but contains only the edges of G_e that are in the leaves below e in T.

Leaf edges: Let e be an edge of T incident to a leaf of T that is not the root. Now, H_e = G_e = G[X_e] is a two-vertex graph with X_e = {u, v} and with an edge between u and v. We compute A_e in the following way:

A_e(c) = 1 if c = (1, 1) or c = (0, 0), and A_e(c) = 0 otherwise.

The two non-zero entries correspond to the two matchings in H_e: the matching consisting of the single edge {u, v} and the empty matching. This is clearly correct.

Internal edges: Let e be an internal edge of T with child edges l and r. Similar to the proof of Theorem 3.9, we start by indexing the tables A_l and A_r by the number of 1-states used for later use. However, we now count only the number of 1-states used on vertices in I in the index. We compute indexed tables A'_l and A'_r with entries A'_l(c_l, i_l) and A'_r(c_r, i_r) using the following formula with y ∈ {l, r}:

A'_y(c_y, i_y) = A_y(c_y) if #_1(c_y(I)) = i_y, and A'_y(c_y, i_y) = 0 otherwise.

Here, #_1(c_y(I)) is the number of 1-entries in the colouring c_y on the vertices in I.

Next, we apply state changes by using Lemma 4.7. In this case, we change the states used for the colourings in A'_r and A'_l such that they use the set of states {0, 1} on L, R, and F, and the set of states {0, ?} on I. Notice that the number of 1-states used to create the ?-states is now stored in the index i_l of A'_l(c, i_l) and i_r of A'_r(c, i_r).

We say that three colourings c_e of X_e, c_l of X_l and c_r of X_r using these sets of states on the different partitions of X_e ∪ X_l ∪ X_r match if:

• For any v ∈ I: c_e(v) = c_l(v) = c_r(v) ∈ {0, ?}. (2 possibilities)
• For any v ∈ F: either c_l(v) = 0 and c_r(v) = 1, or c_l(v) = 1 and c_r(v) = 0. (2 possibilities)
• For any v ∈ L: c_e(v) = c_l(v) ∈ {0, 1}. (2 possibilities)
• For any v ∈ R: c_e(v) = c_r(v) ∈ {0, 1}. (2 possibilities)

Now, we can compute the indexed table A'_e for the edge e of T using the following formula:

A'_e(c_e, i_e) = Σ_{c_e, c_l, c_r match} Σ_{i_l + i_r = i_e} A'_l(c_l, i_l) · A'_r(c_r, i_r)

Notice that we can compute A'_e efficiently by using a series of matrix multiplications in the same way as in the proof of Theorem 4.4. However, the index i should be treated slightly differently from the parameter κ in the proof of Theorem 4.4.
After fixing a colouring on I and the two values of i_e and i_l, we still create the two matrices M_l and M_r. In M_l, each row corresponds to a colouring of L and each column corresponds to a colouring of F, and in M_r, each row again corresponds to a colouring of F and each column corresponds to a colouring of R. The difference is that we fill M_l with the corresponding entries A'_l(c_l, i_l) and M_r with the corresponding entries A'_r(c_r, i_r), where i_r = i_e − i_l. That is, we do not adjust the value of i_r for the selected A'_r(c_r, i_r) depending on the states used on F. This is not necessary here, since the index counts the total number of 1-states hidden in the ?-states and no double counting can take place. This is in contrast to the parameter κ in the proof of Theorem 4.4; this parameter counted the number of vertices in a solution, which we had to correct to avoid double counting of the vertices in F.

After computing A'_e in this way, we again change the states such that the set of states {0, 1} is used on all vertices in the colourings used in A'_e. We then extract the values of A'_e in which no two 1-states hidden in a ?-state are combined to a new 1-state on a vertex in I. We do so using the indices in the same way as in Theorem 3.9, but with the counting restricted to I:

A_e(c) = A'_e(c, #_1(c(I)))

After computing the A_e for all e ∈ E(T), we can find the number of perfect matchings in G = G_{y,z} in the single entry in A_{y,z}, where z is the root of T and y is its only child.

Because the treewidth and branchwidth of a graph differ by at most a factor 3/2 (see Proposition 2.6), we can conclude that the computations can be done using O(k log(n))-bit numbers using the same reasoning as in the proof of Theorem 3.9. For the running time, we observe that we can compute each A_e using a series of k² 2^{|I|} matrix multiplications. The worst case arises when |I| = 0 and |L| = |R| = |F| = k/2. Then the matrix multiplications require O(k² 2^{ωk/2}) time.
Since T has O(m) edges, this gives a running time of O(m k² 2^{ωk/2} i×(k log(n))). □

4.4 [ρ, σ]-Domination Problems

We have shown how to solve two fundamental graph problems in O*(s^{ωk/2}) time on branch decompositions of width k, where s is the natural number of states involved in a dynamic programming algorithm on branch decompositions for these problems. Similar to the results on tree decompositions, we generalise this and show that one can solve all [ρ, σ]-domination problems with finite or cofinite ρ and σ in O*(s^{ωk/2}) time.

For the [ρ, σ]-domination problems, we use states ρ_j and σ_j, where ρ_j and σ_j represent that a vertex is not in or in the vertex set D of the partial solution of the [ρ, σ]-domination problem, respectively, and has j neighbours in D. This is similar to Section 3.4. Note that the number of states used equals s = p + q + 2.

On branch decompositions, we have to use a different approach than on tree decompositions, since we have to deal with vertices in L, R, I, and F simultaneously. It is, however, possible to reuse part of the algorithm of Theorem 3.13. Observe that joining two children in a tree decomposition is similar to joining two children in a branch decomposition if L = R = F = ∅. Since we have demonstrated in the algorithms earlier in this section that one can use distinct states and perform different computations on I, L, R, and F, we can essentially use the approach of Theorem 3.13 for the vertices in I.
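The index-based join on the vertices in I, as used in Theorem 4.8 and reused below, can be sketched in isolation for the two-state (matching) case: hide 1-states in ?-states while counting them in an index, multiply pointwise, invert, and keep only the entries whose counter matches the visible 1-states. The transforms are written as direct double loops instead of the coordinate-wise passes used in the actual algorithms; all names are illustrative:

```python
from itertools import chain, combinations, product
from random import randint, seed

def join_naive(Al, Ar, n):
    """Reference join on n vertices of I for the matching problem: a vertex
    may carry a 1-state on at most one side (1 + 1 is forbidden)."""
    Ae = {c: 0 for c in product((0, 1), repeat=n)}
    for cl, vl in Al.items():
        for cr, vr in Ar.items():
            if any(a == b == 1 for a, b in zip(cl, cr)):
                continue
            Ae[tuple(a | b for a, b in zip(cl, cr))] += vl * vr
    return Ae

def to_indexed(A, n):
    """B[d][i]: sum of A[c] over colourings c consistent with d in {0, ?}^n
    that use exactly i ones (the index stores the hidden 1-states)."""
    B = {d: [0] * (n + 1) for d in product((0, "?"), repeat=n)}
    for d in B:
        for c, v in A.items():
            if all(dv == "?" or cv == 0 for dv, cv in zip(d, c)):
                B[d][sum(c)] += v
    return B

def join_fast(Al, Ar, n):
    Bl, Br = to_indexed(Al, n), to_indexed(Ar, n)
    C = {d: [0] * (2 * n + 1) for d in Bl}  # pointwise product, indices convolve
    for d in C:
        for il, ir in product(range(n + 1), repeat=2):
            C[d][il + ir] += Bl[d][il] * Br[d][ir]
    Ae = {}
    for c in product((0, 1), repeat=n):
        ones = [v for v in range(n) if c[v] == 1]
        # invert the ?-transform, keeping only entries whose counter equals
        # the number of visible 1-states: a hidden 1 + 1 pushes the counter
        # above that number and is thereby discarded
        Ae[c] = sum((-1) ** (len(ones) - len(T))
                    * C[tuple("?" if v in T else 0 for v in range(n))][len(ones)]
                    for T in chain.from_iterable(
                        combinations(ones, r) for r in range(len(ones) + 1)))
    return Ae

seed(3)
n = 3
Al = {c: randint(0, 9) for c in product((0, 1), repeat=n)}
Ar = {c: randint(0, 9) for c in product((0, 1), repeat=n)}
assert join_fast(Al, Ar, n) == join_naive(Al, Ar, n)
```

The point of the detour is that the pointwise product step vectorises: in the actual algorithms it is done by matrix multiplication over the colourings of L, F, and R.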
Let ρ, σ ⊆ N be finite or cofinite. There is an algorithm that, given a branch decomposition of a graph G of width k, counts the number of [ρ, σ]-dominating sets of G of each size κ, 0 ≤ κ ≤ n, of a fixed [ρ, σ]-domination problem involving s states in O(m n² (sk)^{2(s−2)} s^{ωk/2} i×(n)) time. Proof:
Let T be the branch decomposition of G of width k rooted at the vertex z. Recall State Sets I and II defined in Definition 3.10. Similar to the proof of Theorem 3.13, we will use different sets of states to prove this theorem. In this proof, we mostly use State Set I while we let the subscripts of the states count only neighbours in D outside the current middle set. That is, we use states ρ_j and σ_j for our tables A_e, A_f, and A_g such that the subscripts j represent the number of neighbours in the vertex set D of each partial solution of the [ρ, σ]-domination problem outside of the vertex sets X_e, X_f and X_g, respectively. Using these states for colourings c, we compute the table A_e for each edge e ∈ E(T) such that the entry A_e(c, κ) contains the number of partial solutions of the [ρ, σ]-domination problem on G_e consisting of κ vertices that satisfy the requirements defined by c.

Leaf edges: Let e be an edge of T incident to a leaf of T that is not the root. Now, G_e = G[X_e] is a two-vertex graph. We compute A_e in the following way:

A_e(c, κ) = 1 if c = (ρ_0, ρ_0) and κ = 0
A_e(c, κ) = 1 if c = (ρ_0, σ_0) or c = (σ_0, ρ_0), and κ = 1
A_e(c, κ) = 1 if c = (σ_0, σ_0) and κ = 2
A_e(c, κ) = 0 otherwise

Since the subscripts of the states count only vertices in the vertex set of a partial solution of the [ρ, σ]-domination problem on G_e that are outside the middle set X_e, we only count partial solutions in which the subscripts are zero. Moreover, the size parameter κ must equal the number of σ-states, since these represent vertices in the vertex set of the partial solutions.

Internal edges: Let e be an internal edge of T with child edges l and r. The process of computing the table A_e by combining the information in the two tables A_l and A_r is quite technical. This is mainly due to the fact that we need to do different things on the different vertex sets I, L, R, and F. We will give a three-step proof.
Step 1: As a form of preprocessing, we will update the entries in A_l and A_r such that the subscripts count not only the vertices in vertex sets of the partial solutions outside of X_l and X_r, but also some specific vertices in the vertex sets of the partial solutions in the middle sets. Later, we will combine the information from A_l and A_r to create the table A_e according to the following general rule: combining ρ_i and ρ_j gives ρ_{i+j}, and combining σ_i and σ_j gives σ_{i+j}. In this context, the preprocessing makes sure that the subscripts of the states in the result in A_e correctly count the number of vertices in the vertex sets of the partial solutions of the [ρ, σ]-domination problem.

Recall that for an edge e of the branch decomposition T, the vertex set V_e is defined to be the vertex set of the graph G_e, that is, the union of the middle set of e and all middle sets below e in T. We update the tables A_l and A_r such that the subscripts of the states ρ_j and σ_j count the number of neighbours in the vertex sets of the partial solutions with the following properties:

• States used in A_l on vertices in L or I count neighbours in (V_l \ X_l) ∪ F.
• States used in A_l on vertices in F count neighbours in (V_l \ X_l) ∪ L ∪ I ∪ F.
• States used in A_r on vertices in I count neighbours in (V_r \ X_r) (nothing changes here).
• States used in A_r on vertices in R count neighbours in (V_r \ X_r) ∪ F.
• States used in A_r on vertices in F count neighbours in (V_r \ X_r) ∪ R.

If we now combine partial solutions with state ρ_i in A_l and state ρ_j in A_r for a vertex in I, then the state ρ_{i+j} corresponding to the combined solution in A_e correctly counts the number of neighbours in the partial solution in V_e \ X_e. Also, states for vertices in L and R in A_e count their neighbours in the partial solution in V_e \ X_e.
And, if we combine solutions with a state ρ_i in A_l and a state ρ_j in A_r for a vertex in F, then this vertex will have exactly i + j neighbours in the combined partial solution. Although one must be careful which vertices to count and which not to count, the actual updating of the tables A_l and A_r is simple because one can see which of the counted vertices are in the vertex set of a partial solution (σ-state) and which are not (ρ-state). Let A*_y be the table before the updating process, with y ∈ {l, r}. We compute the updated table A_y in the following way:

A_y(c, κ) = 0 if φ(c) is not a correct colouring of X_y
A_y(c, κ) = A*_y(φ(c), κ) otherwise

Here, φ is the inverse of the function that updates the subscripts of the states, e.g., if y = l and we consider a vertex in I that has exactly one neighbour with a σ-state on a vertex in F in c, then φ changes ρ_1 into ρ_0. The result of this updating is not a correct colouring of X_y if the inverse does not exist, i.e., if the strict application of subtracting the right number of neighbours results in a negative number. For example, this happens if c contains a ρ_0- or σ_0-state on a vertex that has neighbours that should be counted in the subscripts.

Step 2: Next, we will change the states used for the tables A_l and A_r, and we will add index vectors to these tables that allow us to use the ideas of Theorem 3.13 on the vertices in I. We will not change the states for vertices in L in the table A_l, nor for vertices in R in the table A_r. But, we will change the states for the vertices in I in both A_l and A_r and on the vertices in F in A_r. On F, simple state changes suffice, while, for vertices on I, we need to change the states and introduce index vectors at the same time. We will start by changing the states for the vertices in F. On the vertices in F, we will not change the states in A_l, but introduce a new set of states to use for A_r. We define the states ρ̄_j and σ̄_j.
A table entry with state ρ̄_j on a vertex v requires that the vertex has an allowed number of neighbours in the vertex set of a partial solution when combined with a partial solution from A_l with state ρ_j. That is, a partial solution that corresponds to the state ρ_i on v is counted in the entry with state ρ̄_j on v if i + j ∈ ρ. The definition of σ̄_j is similar. Let A*_r be the result of the table for the right child r of e obtained by Step 1. We can obtain the table A_r with the states on F transformed as described by a coordinate-wise application of the following formulas on the vertices in F. The details are identical to the state changes in the proofs of Lemmas 4.2 and 4.7.

A_r(c_1 × {ρ̄_j} × c_2) = Σ_{i+j ∈ ρ} A*_r(c_1 × {ρ_i} × c_2)
A_r(c_1 × {σ̄_j} × c_2) = Σ_{i+j ∈ σ} A*_r(c_1 × {σ_i} × c_2)

Notice that if we combine an entry with state ρ_j in A_l with an entry with state ρ̄_j from A_r, then we can count all valid combinations in which this vertex is not in the vertex set of a partial solution of the [ρ,σ]-domination problem. The same is true for a combination with state σ_j in A_l and state σ̄_j in A_r for vertices in the vertex set of the partial solutions.

As a final part of Step 2, we now change the states in A_l and A_r on the vertices in I and introduce the index vectors ~i = (i_{ρ_1}, i_{ρ_2}, ..., i_{ρ_p}, i_{σ_1}, i_{σ_2}, ..., i_{σ_q}), where i_{ρ_j} and i_{σ_j} index the sum of the number of neighbours in the vertex set of a partial solution of the [ρ,σ]-domination problem over the vertices with state ρ_{≤j} and σ_{≤j}, respectively. That is, we change the states used in A_l and A_r on vertices in I to State Set II of Definition 3.10 and introduce index vectors in exactly the same way as in the proof of Lemma 3.12, but only on the coordinates of the vertices in I, similar to what we did in the proofs of Lemmas 4.2 and 4.7.
Because the states ρ_{≤j} and σ_{≤j} are used only on I, we note that the component i_{ρ_j} of the index vector ~i counts the total number of neighbours in the vertex sets of the partial solutions of the [ρ,σ]-domination problem of vertices with state ρ_{≤j} on I. As a result, we obtain tables A′_l and A′_r with entries A′_l(c_l, κ_l, ~g) and A′_r(c_r, κ_r, ~h) with index vectors ~g and ~h, where these entries have the same meaning as in Theorem 3.13. We note that the components i_{ρ_p} and i_{σ_q} of the index vector are omitted if ρ or σ is cofinite, respectively. We have now performed all relevant preprocessing and are ready for the final step.

Step 3: Now, we construct the table A_e by computing the number of valid combinations from A′_l and A′_r using fast matrix multiplication. We first define when three colourings c_e, c_l, and c_r match. They match if:

- For any v ∈ I: c_e(v) = c_l(v) = c_r(v) with State Set II. (s possibilities)
- For any v ∈ F: either c_l(v) = ρ_j and c_r(v) = ρ̄_j, or c_l(v) = σ_j and c_r(v) = σ̄_j, with State Set I used for c_l and the new states used for c_r. (s possibilities)
- For any v ∈ L: c_e(v) = c_l(v) with State Set I. (s possibilities)
- For any v ∈ R: c_e(v) = c_r(v) with State Set I. (s possibilities)

State Set I and State Set II are as defined in Definition 3.10. That is, colourings match if they forget valid combinations on F, and have identical states on I, L, and R. Using this definition, the following formula computes the table A′_e. The function of this table is identical to that of the same table in the proof of Theorem 3.13: the table gives all valid combinations of entries corresponding to the colouring c that lead to a partial solution of size κ with the given values of the index vector ~i. The index vectors allow us to extract the values we need afterwards.
A′_e(c_e, κ, ~i) = Σ_{c_e, c_l, c_r match} Σ_{κ_l + κ_r = κ + #σ} Σ_{i_{ρ_1} = g_{ρ_1} + h_{ρ_1}} · · · Σ_{i_{σ_q} = g_{σ_q} + h_{σ_q}} A′_l(c_l, κ_l, ~g) · A′_r(c_r, κ_r, ~h)

Here, #σ = #σ(c_r(I ∪ F)) is the number of vertices that are assigned a σ-state on I ∪ F in any matching triple c_e, c_l, c_r. We will now argue what kind of entries the table A′_e contains by giving a series of observations.

Observation 4.1
For a combination of a partial solution on G_l counted in A′_l and a partial solution on G_r counted in A′_r to be counted in the summation for A′_e(c, κ, ~i), it is required that both partial solutions contain the same vertices on X_l ∩ X_r (= I ∪ F). Proof:
This holds because, in a set of matching colourings, one colouring has a σ-state on a vertex if and only if the other colourings also have a σ-state on this vertex. □ Observation 4.2
For a combination of a partial solution on G_l counted in A′_l and a partial solution on G_r counted in A′_r to be counted in the summation for A′_e(c, κ, ~i), it is required that the total number of vertices that are part of the combined partial solution is κ. Proof:
This holds because we demand that κ equals the sum of the sizes of the partial solutions on G_l and G_r used for the combination minus the number of vertices in these partial solutions that are counted on both sides, namely, the vertices with a σ-state on I or F. □

Observation 4.3 For a combination of a partial solution on G_l counted in A′_l and a partial solution on G_r counted in A′_r to be counted in the summation for A′_e(c, κ, ~i), it is required that the subscripts j of the states ρ_j and σ_j used in c on vertices in L and R correctly count the number of neighbours of this vertex in V_e \ X_e in the combined partial solution. Proof:
This holds because of the preprocessing we performed in Step 1. □
Observation 4.4
For a combination of a partial solution on G_l counted in A′_l and a partial solution on G_r counted in A′_r to be counted in the summation for A′_e(c, κ, ~i), it is required that the forgotten vertices in a combined partial solution satisfy the requirements imposed by the specific [ρ,σ]-domination problem. That is, if such a vertex is not in the vertex set D of the combined partial solution, then it has a number of neighbours in D that is a member of ρ, and if such a vertex is in the vertex set D of the combined partial solution, then it has a number of neighbours in D that is a member of σ. Moreover, all such combinations are considered. Proof:
This holds because we combine only entries with the states ρ_j and ρ̄_j or with the states σ_j and σ̄_j for vertices in F. These are exactly the required combinations, by the definition of the states ρ̄_j and σ̄_j. □ Observation 4.5
For a combination of a partial solution on G_l counted in A′_l and a partial solution on G_r counted in A′_r to be counted in the summation for A′_e(c, κ, ~i), it is required that the total sum of the number of neighbours outside X_e of the vertices with state ρ_{≤j} or σ_{≤j} in a combined partial solution equals i_{ρ_j} or i_{σ_j}, respectively. Proof:
This holds because of the following. First, the subscripts of the states are updated in Step 1 such that every relevant vertex is counted exactly once. Then, these numbers are stored in the index vectors in Step 2. Finally, the entries of A′_e corresponding to a given index vector combine only partial solutions whose index vectors sum to the given index vector ~i. □ Observation 4.6
Let D_l and D_r be the vertex sets of the partial solutions counted in A_l and A_r that are used to create a combined partial solution with vertex set D, respectively. After the preprocessing of Step 1, the vertices with state ρ_{≤j} or σ_{≤j} have at most j neighbours that we count in the vertex sets D_l and D_r, respectively. And, if a vertex in the partial solution from A_l has i such counted neighbours in D_l, and the same vertex in the partial solution from A_r has j such counted neighbours in D_r, then the combined partial solution has a total of i + j neighbours in D outside of X_e. Proof:
The last statement holds because, by the preprocessing of Step 1, we count each relevant neighbour of a vertex either in the states used in A_l or in the states used in A_r. The first part of the statement follows from the definition of the states ρ_{≤j} and σ_{≤j}: here, only partial solutions that previously had a state ρ_i or σ_i with i ≤ j are counted. □

We will now use Observations 4.1-4.6 to show that we can compute the required values for A_e in the following way. This works very similarly to Theorem 3.13. First, we change the states in the table A′_e back to State Set I (as defined in Definition 3.10). We can do so similarly to Lemma 3.11 and Lemmas 4.2 and 4.7. Then, we extract the entries required for the table A_e using the following formula:

A_e(c, κ) = A′_e(c, κ, (Σ^1_ρ(c), Σ^2_ρ(c), ..., Σ^p_ρ(c), Σ^1_σ(c), Σ^2_σ(c), ..., Σ^q_σ(c)))

Here, Σ^l_ρ(c) and Σ^l_σ(c) are defined as in the proof of Theorem 3.13: the weighted sums of the number of ρ_j- and σ_j-states with 0 ≤ j ≤ l, respectively. If ρ or σ is cofinite, we use the same formula but omit the components Σ^p_ρ(c) or Σ^q_σ(c) from the index vector of the extracted entries, respectively.

That the values of these entries equal the values we want to compute follows from the following reasoning. First of all, any combination considered leads to a new partial solution since it uses the same vertices (Observation 4.1) and forgets vertices that satisfy the constraints of the fixed [ρ,σ]-domination problem (Observation 4.4). Secondly, the combinations lead to combined partial solutions of the correct size (Observation 4.2). Thirdly, the subscripts of the states used in A_e correctly count the number of neighbours of these vertices in the vertex set of the partial solution in V_e \ X_e. For vertices in L and R, this directly follows from Observation 4.3 and the fact that for any three matching colourings the states used on each vertex in L and R are the same.
For vertices in I, this follows from exactly the same arguments as in the last part of the proof of Theorem 3.13, using Observations 4.5 and 4.6. This is the argument where we first argue that any entry whose colouring uses only the states ρ_0 and σ_0 is correct, and thereafter inductively proceed to ρ_j and σ_j for j > 0, using the correctness of the entries for ρ_{j−1} and σ_{j−1} and the fact that we use the entries corresponding to the chosen values of the index vectors. All in all, we see that this procedure correctly computes the required table A_e.

After computing A_e in the above way for all e ∈ E(T), we can find the number of [ρ,σ]-dominating sets of each size in the table A_{{y,z}}, where z is the root of T and y is its only child, because G = G_{{y,z}} and X_{{y,z}} = ∅.

For the running time, we note that we have to compute the tables A_e for the O(m) edges e ∈ E(T). For each table A_e, the running time is dominated by evaluating the formula for the intermediate table A′_e with entries A′_e(c, κ, ~i). We can evaluate each summand of the formula for A′_e for all combinations of matching states by s^{|I|} matrix multiplications as in Theorem 4.4. This requires O(n²(sk)^{2(s−2)} s^{|I|}) multiplications of an s^{|L|} × s^{|F|} matrix and an s^{|F|} × s^{|R|} matrix. The running time is maximal if |I| = 0 and |L| = |R| = |F| = k/2. In this case, the total running time equals O(mn²(sk)^{2(s−2)} s^{(ω/2)k} i_×(n)) since we can do the computations using n-bit numbers. □

Similar to our results on the [ρ,σ]-domination problems on tree decompositions, we can improve the polynomial factors of the above algorithm in several ways. The techniques involved are identical to those of Corollaries 3.14, 3.15, and 3.16.
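Stripped of the states, index vectors, and the size parameter, the core of Step 3 above is a matrix product: index the left table by (L-colouring, F-colouring), the right table by (F-colouring, R-colouring), and sum over the matching F-colourings. A toy sketch of this idea follows, with a plain cubic product standing in for a fast matrix-multiplication routine and tables given as dense integer matrices (both simplifications are ours):

```python
def combine_by_matrix_product(M_l, M_r):
    """Combine child tables by summing over matching F-colourings.

    M_l[a][f]: number of left partial solutions with L-colouring index a
    and F-colouring index f (after the bar-state transform of Step 2,
    matching F-entries can simply be multiplied and summed).
    Returns A_e with A_e[a][b] indexed by (L-colouring, R-colouring).
    """
    rows, inner, cols = len(M_l), len(M_r), len(M_r[0])
    A_e = [[0] * cols for _ in range(rows)]
    for a in range(rows):
        for f in range(inner):
            if M_l[a][f]:
                for b in range(cols):
                    A_e[a][b] += M_l[a][f] * M_r[f][b]
    return A_e
```

Replacing the triple loop by a fast matrix-multiplication algorithm on the s^{|L|} × s^{|F|} and s^{|F|} × s^{|R|} matrices is what produces the ω in the exponent of the running time.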
Similar to Section 3.4, we define the value r associated with a [ρ,σ]-domination problem as follows:

r = max{p − 1, q − 1} if ρ and σ are cofinite
r = max{p, q − 1} if ρ is finite and σ is cofinite
r = max{p − 1, q} if ρ is cofinite and σ is finite
r = max{p, q} if ρ and σ are finite

Corollary 4.10 (General [ρ,σ]-Domination Problems)
Let ρ, σ ⊆ N be finite or cofinite, and let p, q, r, and s be the values associated with the corresponding [ρ,σ]-domination problem. There is an algorithm that, given a branch decomposition of a graph G of width k, computes the number of [ρ,σ]-dominating sets in G of each size κ, 0 ≤ κ ≤ n, in O(mn²(rk)^{2r} s^{(ω/2)k} i_×(n)) time.

Moreover, there is an algorithm that decides whether there exists a [ρ,σ]-dominating set of size κ, for each individual value of κ, 0 ≤ κ ≤ n, in O(mn²(rk)^{2r} s^{(ω/2)k} i_×(log(n) + k log(r))) time. Proof:
Apply the modifications to the algorithm of Theorem 3.13 that we used in the proof of Corollary 3.14 for [ρ,σ]-domination problems on tree decompositions to the algorithm of Theorem 4.9 for the same problems on branch decompositions. □
Corollary 4.11 ([ ρ, σ ]-Optimisation Problems with the de Fluiter Property)
Let ρ, σ ⊆ N be finite or cofinite, and let p, q, r, and s be the values associated with the corresponding [ρ,σ]-domination problem. If the standard representation using State Set I of the minimisation (or maximisation) variant of this [ρ,σ]-domination problem has the de Fluiter property for treewidth with function f, then there is an algorithm that, given a branch decomposition of a graph G of width k, computes the number of minimum (or maximum) [ρ,σ]-dominating sets in G in O(m[f(k)]²(rk)^{2r} s^{(ω/2)k} i_×(n)) time. Moreover, there is an algorithm that computes the minimum (or maximum) size of such a [ρ,σ]-dominating set in O(m[f(k)]²(rk)^{2r} s^{(ω/2)k} i_×(log(n) + k log(r))) time. Proof:
Improve the result of Corollary 4.10 in the same way as Corollary 3.15 improves Corollary 3.14 on tree decompositions. □
Corollary 4.12 ([ ρ, σ ]-Decision Problems)
Let ρ, σ ⊆ N be finite or cofinite, and let p, q, r, and s be the values associated with the corresponding [ρ,σ]-domination problem. There is an algorithm that, given a branch decomposition of a graph G of width k, counts the number of [ρ,σ]-dominating sets in G of a fixed [ρ,σ]-domination problem in O(m(rk)^{2r} s^{(ω/2)k} i_×(n)) time. Moreover, there is an algorithm that decides whether there exists a [ρ,σ]-dominating set in O(m(rk)^{2r} s^{(ω/2)k} i_×(log(n) + k log(r))) time. Proof:
Improve the result of Corollary 4.11 in the same way as Corollary 3.16 improves upon Corollary 3.15 on tree decompositions. □
On graphs of bounded cliquewidth, we mainly consider the
Dominating Set problem. We show how to improve the complex O∗(8^k)-time algorithm, which computes a boolean decomposition of a graph of cliquewidth at most k to solve the Dominating Set problem [18], to an O∗(4^k)-time algorithm. Similar results for Independent Dominating Set and
Total Dominating Set follow from the same approach.
Theorem 5.1
There is an algorithm that, given a k-expression for a graph G, computes the number of dominating sets in G of each size κ, 0 ≤ κ ≤ n, in O(n³(k + i_×(n)) 4^k) time. Proof:
An operation in a k-expression applies a procedure on zero, one, or two labelled graphs with labels {1, 2, ..., k} that transforms these labelled graphs into a new labelled graph with the same set of labels. If H is such a labelled graph with vertex set V, then we use V(i) to denote the vertices of H with label i.

For each labelled graph H obtained by applying an operation in a k-expression, we will compute a table A with entries A(c, κ) that store the number of partial solutions of Dominating Set of size κ that satisfy the constraints defined by the colouring c. In contrast to the algorithms on tree and branch decompositions, we do not use colourings that assign a state to each individual vertex, but colourings that assign states to the sets V(1), V(2), ..., V(k).

The states that we use are similar to the ones used for Dominating Set on tree decompositions and branch decompositions. The states that we use are tuples representing two attributes: inclusion and domination. The first attribute determines whether at least one vertex in V(i) is included in a partial solution. We use states 1, 0, and ? to indicate whether this is true, false, or either of both, respectively. The second attribute determines whether all vertices of V(i) are dominated in a partial solution. Here, we also use states 1, 0, and ? to indicate whether this is true, false, or either of both, respectively. Thus, we get tuples of the form (s, t), where the first component is related to inclusion and the second to domination, e.g., (1, ?) for vertex set V(i) represents that the vertex set contains a vertex in the dominating set while we are indifferent about whether all vertices in V(i) are dominated.

We will now show how to compute the table A for a k-expression obtained by applying any of the four operations to smaller k-expressions that are given with similar tables for these smaller k-expressions.
This table A contains an entry for every colouring c of the series of vertex sets {V(1), V(2), ..., V(k)} using the four states (1,1), (1,0), (0,1), and (0,0). At the end, we can find the number of dominating sets of each size in the table A for the k-expression that evaluates to G.

Create a new graph: In this operation, we create a new graph H with one vertex v with any label j ∈ {1, 2, ..., k}. We assume, without loss of generality by permuting the labels, that j = k. We compute A by using the following formula, where c is a colouring of the first k − 1 vertex sets V(i) and c_k is the state of V(k):

A(c × {c_k}, κ) = 1 if c_k = (1,1), κ = 1, and c = {(0,1)}^{k−1}
A(c × {c_k}, κ) = 1 if c_k = (0,0), κ = 0, and c = {(0,1)}^{k−1}
A(c × {c_k}, κ) = 0 otherwise

Since H has only one vertex and this vertex has label k, the vertex sets for the other labels cannot have a dominating vertex, therefore the first attribute of their state must be 0. Also, all vertices in these (empty) sets are dominated, hence the second attribute of their state must be 1. The above formula counts the only two possibilities: either taking the vertex in the partial solution or not.

Relabel: In this operation, all vertices with some label i ∈ {1, 2, ..., k} are relabelled such that they obtain the label j ∈ {1, 2, ..., k}, j ≠ i. We assume, without loss of generality by permuting the labels, that i = k − 1 and j = k. Let A′ be the table belonging to the k-expression before the relabelling and let A be the table we need to compute. We compute A using the following formulas:

A(c × {(0,1)} × {(i,d)}, κ) = Σ_{i_1,i_2 ∈ {0,1}, max{i_1,i_2} = i} Σ_{d_1,d_2 ∈ {0,1}, min{d_1,d_2} = d} A′(c × {(i_1,d_1)} × {(i_2,d_2)}, κ)
A(c × {(i*,d*)} × {(i,d)}, κ) = 0 if (i*,d*) ≠ (0,1)

These formulas correctly compute A because of the following observations. For V(i), the first attribute must be 0 and the second attribute must be 1 in any valid partial solution because V(i) = ∅ after the operation; this is similar to this requirement in the 'create a new graph' operation. If V(j) must have a vertex in the dominating set, then this vertex must be in V(i) or V(j) originally.
And, if all vertices in V(j) must be dominated, then all vertices in V(i) and V(j) must be dominated. Note that the minimum and maximum under the summations correspond to 'and' and 'or' operations, respectively.

Add edges: In this operation, all vertices with some label i ∈ {1, 2, ..., k} are connected to all vertices with another label j ∈ {1, 2, ..., k}, j ≠ i. We again assume, without loss of generality by permuting the labels, that i = k − 1 and j = k. Let A′ be the table belonging to the k-expression before adding the edges and let A be the table we need to compute. We compute A using the following formula:

A(c × {(i_1,d_1)} × {(i_2,d_2)}, κ) = Σ_{d′_1 ∈ {0,1}, max{d′_1,i_2} = d_1} Σ_{d′_2 ∈ {0,1}, max{d′_2,i_1} = d_2} A′(c × (i_1,d′_1) × (i_2,d′_2), κ)

This formula is correct as the vertex sets V(i) and V(j) contain a dominating vertex if and only if they contained such a vertex before adding the edges. For the property of domination, correctness follows because the vertex sets V(i) and V(j) are dominated if and only if they were either dominated before adding the edges, or if they become dominated by a vertex from the other vertex set because of the adding of the edges.

Join graphs: This operation joins two labelled graphs H_1 and H_2 with tables A_1 and A_2 into a labelled graph H with table A. To do this efficiently, we first apply state changes similar to those used in Sections 3 and 4. We use states 0 and ? for the first attribute (inclusion) and states 1 and ? for the second attribute (domination). Changing A_1 and A_2 to tables A*_1 and A*_2 that use this set of states can be done in a similar manner as in Lemmas 3.5 and 4.2. We first copy A_y into A*_y, for y ∈ {1, 2}, and then iteratively use the following formulas in a coordinate-wise manner:

A*_y(c_1 × (0,1) × c_2, κ) = A*_y(c_1 × (0,1) × c_2, κ)
A*_y(c_1 × (?,1) × c_2, κ) = A*_y(c_1 × (1,1) × c_2, κ) + A*_y(c_1 × (0,1) × c_2, κ)
A*_y(c_1 × (0,?) × c_2, κ) = A*_y(c_1 × (0,1) × c_2, κ) + A*_y(c_1 × (0,0) × c_2, κ)
A*_y(c_1 × (?,?) × c_2, κ) = A*_y(c_1 × (1,1) × c_2, κ) + A*_y(c_1 × (1,0) × c_2, κ) + A*_y(c_1 × (0,1) × c_2, κ) + A*_y(c_1 × (0,0) × c_2, κ)

We have already seen many state changes similar to these in Sections 3 and 4. Therefore, it is not surprising that we can now compute the table A* in the following way, where the table A* is the equivalent of the table A we want to compute, only using the different set of states:

A*(c, κ) = Σ_{κ_1 + κ_2 = κ} A*_1(c, κ_1) · A*_2(c, κ_2)

Next, we apply state changes to obtain A from A*. These state changes are the inverse of those given above. Again, first copy A* into A and then iteratively transform the states in a coordinate-wise manner using the following formulas:

A(c_1 × (0,1) × c_2, κ) = A(c_1 × (0,1) × c_2, κ)
A(c_1 × (1,1) × c_2, κ) = A(c_1 × (?,1) × c_2, κ) − A(c_1 × (0,1) × c_2, κ)
A(c_1 × (0,0) × c_2, κ) = A(c_1 × (0,?) × c_2, κ) − A(c_1 × (0,1) × c_2, κ)
A(c_1 × (1,0) × c_2, κ) = A(c_1 × (?,?) × c_2, κ) − A(c_1 × (0,?) × c_2, κ) − A(c_1 × (?,1) × c_2, κ) + A(c_1 × (0,1) × c_2, κ)

Correctness of the computed table A follows by exactly the same reasoning as used in Theorem 3.3 and in Proposition 4.3. We note that the last of the above formulas is a nice example of an application of the principle of inclusion/exclusion: to find the number of sets corresponding to the (1,0)-state, we take the number of sets corresponding to the (?,?)-state; then, we subtract what we counted too much, but because we subtract some sets twice, we need to add some number of sets again to obtain the required value.

The number of dominating sets in G of size κ can be computed from the table A related to the final operation of the k-expression for G. In this table, we consider only the entries in which the second attribute of each state is 1, i.e., the entries corresponding to partial solutions in which all vertices in G are dominated. Now, the number of dominating sets in G of size κ equals the sum over all entries A(c, κ) with c ∈ {(0,1), (1,1)}^k.

For the running time, we observe that each of the O(n) join operations takes O(n² 4^k i_×(n)) time because we are multiplying n-bit numbers. Each of the O(nk) other operations takes O(n² 4^k) time since we need O(n 4^k) series of a constant number of additions using n-bit numbers, and i_+(n) = O(n). The running time of O(n³(k + i_×(n)) 4^k) follows. □

Similar to the algorithms for
Dominating Set on tree decompositions and branch decompositions in Sections 3.2 and 4.2, we can improve the polynomial factors in the running time if we are interested only in the size of a minimum dominating set, or the number of such sets. To this end, we will introduce a notion of a de Fluiter property for cliquewidth. This notion is defined similarly to the de Fluiter property for treewidth; see Definition 3.2.
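For intuition, the pair of state changes around the join operation of Theorem 5.1 can be written out for a single label class (k = 1), ignoring the size parameter κ. The dictionary encoding below is illustrative only; states are written as ("0", "1") for (0,1), and so on.

```python
def to_join_states(A):
    """Forward state change: counts on (0,1),(1,1),(0,0),(1,0) become
    counts on (0,1),(?,1),(0,?),(?,?) by summation, so that the entries
    of the two child tables can be multiplied pointwise."""
    return {
        ("0", "1"): A[("0", "1")],
        ("?", "1"): A[("1", "1")] + A[("0", "1")],
        ("0", "?"): A[("0", "1")] + A[("0", "0")],
        ("?", "?"): A[("1", "1")] + A[("1", "0")]
                    + A[("0", "1")] + A[("0", "0")],
    }

def from_join_states(S):
    """Inverse state change by inclusion/exclusion, mirroring the last
    four formulas in the proof of Theorem 5.1."""
    return {
        ("0", "1"): S[("0", "1")],
        ("1", "1"): S[("?", "1")] - S[("0", "1")],
        ("0", "0"): S[("0", "?")] - S[("0", "1")],
        ("1", "0"): S[("?", "?")] - S[("0", "?")]
                    - S[("?", "1")] + S[("0", "1")],
    }
```

A join would transform both child tables with to_join_states, multiply the corresponding entries pointwise (convolving over κ in the full algorithm), and apply from_join_states to the product; the two functions are mutual inverses.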
Definition 5.2 (de Fluiter property for cliquewidth)
Consider a method to represent the different partial solutions used in an algorithm that performs dynamic programming on clique decompositions (k-expressions) for an optimisation problem Π. Such a representation has the de Fluiter property for cliquewidth if the difference between the objective values of any two partial solutions of Π that are stored for a partially evaluated k-expression and that can both still lead to an optimal solution is at most f(k), for some function f. Here, the function f depends only on the cliquewidth k.

The definition of the de Fluiter property for cliquewidth is very similar to the same notion for treewidth. However, the structure of a k-expression differs from tree decompositions and branch decompositions in such a way that the de Fluiter property for cliquewidth does not appear to be equivalent to the other two. This is in contrast to the same notion for branchwidth, which is equivalent to this notion for treewidth; see Section 4.1. The main difference is that k-expressions deal with sets of equivalent vertices instead of the vertices themselves. The representation used in the algorithm for the Dominating Set problem above also has the de Fluiter property for cliquewidth.
Lemma 5.3
The representation of partial solutions for the
Dominating Set problem used in Theorem 5.1 has the de Fluiter property for cliquewidth with f(k) = 2k. Proof:
Consider any partially constructed graph H from a partial bottom-up evaluation of the k-expression for a graph G, and let S be the vertex set of the smallest remaining partial solution stored in the table for the subgraph H. We prove the lemma by showing that by adding at most 2k vertices to S, we can dominate all future neighbours of the vertices in H and all vertices in H that will receive future neighbours. We can restrict ourselves to adding vertices to S that dominate these vertices, and ignore vertices in H that do not receive future neighbours, because Definition 5.2 considers only partial solutions on H that can still lead to an optimal solution on G. Namely, a vertex set V(i) that contains undominated vertices that will not receive future neighbours when continuing the evaluation of the k-expression will not lead to an optimal solution on G. This is because the selection of the vertices that will be in a dominating set happens only in the 'create a new graph' operations.

We now show that by adding at most k vertices to S, we can dominate all future neighbours of the vertices in H, and that by adding another set of at most k vertices to S, we can dominate all vertices in H that will receive future neighbours. To dominate all future neighbours of the vertices in H, we can pick one vertex from each set V(i). Next, consider dominating the vertices in each of the vertex sets V(i) that are not yet dominated and that will receive future neighbours. Since the 'add edges' operations of a k-expression can only add edges between future neighbours and all vertices with the label i, and since the 'relabel' operation can only merge the sets V(i) and not split them, we can add a single vertex to S that is a future neighbour of a vertex in V(i) to dominate all vertices in V(i). □

Using this property, we can easily improve the result of Theorem 5.1 for the case where we want to count only the number of minimum dominating sets. This goes in a way similar to Corollaries 3.4, 3.6, 4.5, and 4.6.
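The way the de Fluiter property just established gets exploited is to discard table entries that lie too far above the current minimum before a join. A small illustrative helper (our own rendering, representing the size table B as a dictionary, with math.inf marking discarded colourings):

```python
import math

def trim_to_de_fluiter_range(B, f_k):
    """Discard partial solutions more than f_k above the minimum.

    B maps each colouring to the size of the smallest partial solution
    realising it.  By the de Fluiter property (f(k) = 2k in Lemma 5.3),
    entries beyond best + f_k can never lead to an optimal solution.
    """
    best = min(B.values())
    return {c: (v if v <= best + f_k else math.inf) for c, v in B.items()}
```

After trimming, the remaining finite sizes fit in a window of f_k + 1 consecutive values, which is what reduces the table used in the join operation to size O(k 4^k) in Corollary 5.4 below.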
Corollary 5.4
There is an algorithm that, given a k-expression for a graph G, computes the number of minimum dominating sets in G in O(nk² 4^k i_×(n)) time. Proof:
For each colouring c, we maintain the size B(c) of any minimum partial dominating set inducing c, and the number A(c) of such sets. This can also be seen as a table D(c) of tuples. Define a new function ⊕ such that

(A(c), B(c)) ⊕ (A(c′), B(c′)) = (A(c) + A(c′), B(c)) if B(c) = B(c′)
(A(c), B(c)) ⊕ (A(c′), B(c′)) = (A(c*), B(c*)) otherwise

where c* is the colouring among c and c′ with the smaller value of B. We will use this function to ensure that we count only dominating sets of minimum size.

We now modify the algorithm of Theorem 5.1 to use the tables D. For the first three operations, simply omit the size parameter κ from the formulas and replace any + by ⊕. For instance, the computation for the third operation, which adds new edges connecting all vertices with label i to all vertices with label j, becomes:

D(c × {(i_1,d_1)} × {(i_2,d_2)}) = ⊕_{d′_1 ∈ {0,1}, max{d′_1,i_2} = d_1} ⊕_{d′_2 ∈ {0,1}, max{d′_2,i_1} = d_2} D(c × (i_1,d′_1) × (i_2,d′_2))

For the fourth operation, where we take the union of two labelled graphs, we need to be more careful. Here, we use that the given representation of partial solutions has the de Fluiter property for cliquewidth. We first discard solutions that contain vertices that are undominated and will not receive new neighbours in the future, that is, we set the corresponding table entries to D(c) = (0, ∞). We also discard all partial solutions that are more than 2k larger than the minimum remaining solution. Let D_1(c) = (A_1(c), B_1(c)) and D_2(c) = (A_2(c), B_2(c)) be the two resulting tables for the two labelled graphs H_1 and H_2 we need to join.
To perform the join operation, we construct tables A_1(c, κ) and A_2(c, κ) as follows, with y ∈ {1, 2}:

A_y(c, κ) = A_y(c) if B_y(c) = κ, and A_y(c, κ) = 0 otherwise

Because of the discarding, κ has a range of size 2k, and thus this table has size O(k 4^k). Now, we can apply the same algorithm for the join operations as described in Theorem 5.1. Afterwards, we retrieve the value of D(c) by setting A(c) = A(c, κ′) and B(c) = κ′, where κ′ is the smallest value of κ for which A(c, κ) is non-zero.

For the running time, we observe that each of the O(nk) operations that create a new graph, relabel vertex sets, or add edges to the graph computes O(4^k) tuples that cost O(i_+(n)) time each since we use a constant number of additions and comparisons of a log(n)-bit number and an n-bit number. Each of the O(n) join operations costs O(k² 4^k i_×(n)) time because of the reduced table size. In total, this gives a running time of O(nk² 4^k i_×(n)). □

Finally, we show that one can use O(k)-bit numbers when considering the decision version of this minimisation problem instead of the counting variant.

Corollary 5.5
There is an algorithm that, given a k-expression for a graph G, computes the size of a minimum dominating set in G in O(nk² 4^k) time. Proof: Maintain only the size B(c) of any partial solution satisfying the requirements of the colouring c in the computations involved in any of the first three operations. Store this table by maintaining the size ξ of the smallest solution in B that has no undominated vertices that will not get future neighbours, and let B contain O(log(k))-bit numbers giving the difference in size between the size of the partial solutions and the number ξ; this is similar to, for example, Corollary 3.6.

For the fourth operation, follow the same algorithm as in Corollary 5.4, using A(c, κ) = 1 if B(c) = κ and A(c, κ) = 0 otherwise. Since the total sum of all entries in this table is 4^k, the computations for the join operation can now be implemented using O(k)-bit numbers. See also Corollaries 3.6 and 4.6. In the computational model with O(k)-bit word size that we use, the term in the running time for the arithmetic operations disappears since i_×(k) = O(1). □

We conclude by noticing that O∗(4^k)-time algorithms for Independent Dominating Set and
Total Dominating Set follow from the same approach. For Total Dominating Set, we only need to change the 'create new graph' operation such that a vertex does not dominate itself. For Independent Dominating Set, we have to incorporate a check in the 'add edges' operation such that no two vertices in the solution set become neighbours.
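The size-indexed join preparation used in Corollaries 5.4 and 5.5 can be sketched as follows. This is an illustrative Python sketch with names of our own choosing (not taken from the paper): each input table maps a colouring c to a count A(c) and a minimum solution size B(c); the expansion spreads A(c) over an extra size coordinate κ before the join, and afterwards the pair (A(c), B(c)) is recovered from the smallest κ with a non-zero entry.

```python
def expand_by_size(A, B, kappa_range):
    """Build A_y(c, kappa): place the count A_y(c) in the slot kappa = B_y(c),
    and 0 everywhere else, as in the construction before the join."""
    table = {}
    for c, count in A.items():
        for kappa in kappa_range:
            table[(c, kappa)] = count if B[c] == kappa else 0
    return table

def contract_by_size(A_joined, colourings, kappa_range):
    """Recover A(c) and B(c) after the join: take the smallest kappa for
    which the entry A(c, kappa) is non-zero."""
    A, B = {}, {}
    for c in colourings:
        for kappa in kappa_range:
            if A_joined[(c, kappa)] != 0:
                A[c], B[c] = A_joined[(c, kappa)], kappa
                break
    return A, B
```

On real instances the colourings would range over the 4^k states per label and κ over an O(k)-size window; the round trip expand-then-contract returns the original pair of tables.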
In the previous sections, we have defined de Fluiter properties for all three types of graph decompositions. This property is closely related to the concept of finite integer index as defined in [16]. Finite integer index is a property used in reduction algorithms for optimisation problems on graphs of small treewidth [16] and is also used in meta results in the theory of kernelisation [11]. We conclude by explaining the relation between the de Fluiter properties and finite integer index.

We start with a series of definitions. Let a terminal graph be a graph G together with an ordered set of distinct vertices X = {x_1, x_2, . . . , x_l} with each x_i ∈ V. The vertices x_i ∈ X are called the terminals of G. For two terminal graphs G_1 and G_2 with the same number of terminals, the addition operation G_1 + G_2 is defined to be the operation that takes the disjoint union of both graphs, then identifies each pair of terminals with the same number 1, 2, . . . , l, and finally removes any double edges created.

For a graph optimisation problem Π, Bodlaender and van Antwerpen-de Fluiter define an equivalence relation ∼_{Π,l} on terminal graphs with l terminals [16]: G_1 ∼_{Π,l} G_2 if and only if there exists an i ∈ Z such that for all terminal graphs H with l terminals:

π(G_1 + H) = π(G_2 + H) + i

Here, the function π(G) assigns the objective value of an optimal solution of the optimisation problem Π to the input graph G.

Definition 6.1 (finite integer index)
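The addition operation on terminal graphs described above can be sketched in a few lines. This is a minimal illustrative sketch (representation and names are ours): each graph is given as an edge set over its own vertex labels plus an ordered terminal list; the i-th terminals are identified, the remaining vertices of the second graph get fresh labels, and double edges disappear because edges are stored as frozensets.

```python
def add_terminal_graphs(edges1, terms1, edges2, terms2):
    """Compute G1 + G2: disjoint union, identify terminals with the same
    index, and drop any double edges this creates."""
    assert len(terms1) == len(terms2)
    # map each terminal of G2 onto the terminal of G1 with the same index
    rename = {t2: t1 for t1, t2 in zip(terms1, terms2)}
    # non-terminal vertices of G2 get fresh labels (disjoint union)
    verts2 = {v for e in edges2 for v in e}
    for v in verts2:
        if v not in rename:
            rename[v] = ("g2", v)
    # storing edges as frozensets merges double edges automatically
    glued = {frozenset(e) for e in edges1}
    glued |= {frozenset(rename[v] for v in e) for e in edges2}
    return glued, terms1
```

For instance, adding a one-edge graph with terminals (1, 2) to a path with terminals at both endpoints of one of its edges merges that edge with the existing one instead of duplicating it.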
An optimisation problem Π is of finite integer index if ∼_{Π,l} has a finite number of equivalence classes for each fixed l.

When one proves that a problem has finite integer index, one often gives a representation of partial solutions that has the de Fluiter property for treewidth; see for example [29]. That one can prove that a problem has finite integer index in this way can be seen from the following proposition.
Proposition 6.2
If a problem Π has a representation of its partial solutions of different characteristics that can be used in an algorithm that performs dynamic programming on tree decompositions and that has the de Fluiter property for treewidth, then Π is of finite integer index.

Proof:
Let l be fixed, and consider an l-terminal graph G. Construct a tree decomposition T of G such that the bag associated with the root of T equals the set of terminals X of G. Note that this is always possible since we have not specified a bound on the treewidth of T. For an l-terminal graph H, one can construct a tree decomposition of G + H by making a similar tree decomposition of H and identifying the roots, which both have the same vertex set X.

Let G_1, G_2 be two l-terminal graphs, to each of which we add another l-terminal graph H through addition, i.e., G_i + H, and let T_1, T_2 be tree decompositions of these graphs obtained in the above way. For both graphs, consider the dynamic programming table constructed for the node x_X associated with the vertex set X by a dynamic programming algorithm for Π that has the de Fluiter property for treewidth. For these tables, we assume that the induced subgraph associated with x_X in decomposition T_i equals G_i, that is, the bags of the nodes below x_X contain all vertices in G_i, and vertices in H occur only in bags associated with nodes that are not descendants of x_X in T_i.

Clearly, π(G_1 + H) = π(G_2 + H) if both dynamic programming tables are the same and G_1[X] = G_2[X], that is, if the tables are equal and both graphs have the same edges between their terminals. Let us now consider the more general case where we first normalise the dynamic programming tables such that the smallest-valued entry equals zero, and all other entries contain the difference in value to this smallest entry. In this case, it is not hard to see that if both normalised dynamic programming tables are equal and G_1[X] = G_2[X], then there must exist an i ∈ Z such that π(G_1 + H) = π(G_2 + H) + i.

The dynamic programming algorithm for the problem Π can compute only finite-size tables. Moreover, as the representation used by the algorithm has the de Fluiter property for treewidth, the normalised tables can only have values in the range 0, 1, . . . , f(k). 
Therefore, there are only a finite number of different normalised tables and a finite number of possible induced subgraphs on l vertices (terminals). We conclude that the relation ∼_{Π,l} has a finite number of equivalence classes. □

From this proposition, it also follows that problems that are not of finite integer index (e.g.,
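The normalisation step in the proof above amounts to shifting a table so that its smallest entry becomes zero; two partial solutions then fall in the same equivalence class whenever their normalised tables agree. A minimal sketch of this idea (names ours):

```python
def normalise(table):
    """Shift a DP table so its smallest entry is zero; return the
    normalised table and the amount m that was subtracted."""
    m = min(table.values())
    return {c: v - m for c, v in table.items()}, m

# Two tables that differ entry-wise by a constant normalise to the same
# table; the difference of the subtracted amounts is exactly the integer i
# from the definition of the equivalence relation.
n1, m1 = normalise({"c1": 4, "c2": 6, "c3": 5})
n2, m2 = normalise({"c1": 9, "c2": 11, "c3": 10})
assert n1 == n2       # same equivalence class
assert m2 - m1 == 5   # the shift i
```

By the de Fluiter property, every normalised entry lies in {0, 1, . . . , f(k)}, so only finitely many normalised tables exist.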
Independent Dominating Set) do not have a representation of partial solutions that has the de Fluiter property for treewidth. We note that the converse of Proposition 6.2 is not necessarily true.

While we focused on the relation between finite integer index and the de Fluiter property for treewidth (or branchwidth), we do not know the relation between these concepts and the de Fluiter property for cliquewidth. This property seems to be very different from the other two. Working out the details of the relations between all these properties is beyond the scope of this paper, as we only used the de Fluiter properties to improve the polynomial factors in the running times of the presented algorithms.
We have presented faster algorithms for a broad range of problems on three different types of graph decompositions. These algorithms were obtained by using generalisations of the fast subset convolution algorithm, sometimes combined with fast multiplication of matrices. On tree decompositions and clique decompositions, the exponential factor in the running times equals the space requirement of such algorithms. On branch decompositions, the running times of our algorithms come very close to this space bound. Additionally, a further improvement of the exponential factor in the running time for some problems on tree decompositions would contradict the Strong Exponential-Time Hypothesis.

We would like to mention that, very recently, O*(c^k)-time algorithms for various problems on tree decompositions have been obtained, for some constant c ≥ 2, for which previously only O*(k^{O(k)})-time algorithms existed [28]. This includes problems like Hamilton Cycle, Feedback Vertex Set, Steiner Tree, and Connected Dominating Set. Our techniques play an important role in that paper to make sure that the constants c in these algorithms are small and equal the space requirement. Here, c is often also small enough such that no faster algorithms exist under the Strong Exponential-Time Hypothesis [28]. It would be interesting to find a general result stating which properties a problem (or join operation on nice tree decompositions) must have to admit O*(c^k)-time algorithms, where c is the space requirement of the dynamic programming algorithm.

To conclude, we note that, for some problems like counting perfect matchings, the running times of our algorithms on branch decompositions come close to the running times of the currently-fastest exact exponential-time algorithms for these problems. For this, we use that the branchwidth of any graph is at most 2n/3, e.g., see [47]. In this way, we directly obtain an O(2^{ωn/3}) = O(1.732^n)-time algorithm for counting the number of perfect matchings. This running time is identical to that of the fast-matrix-multiplication-based algorithm for this problem by Björklund et al. [5]. We note that this result has recently been improved to O(1.619^n) by Koivisto [52]. Our algorithm improves this result on graphs for which we can compute a branch decomposition of width at most 0.585n in polynomial time; this is a very large family of graphs, since this bound is not much smaller than the given upper bound of 2n/3.

Acknowledgements
The first author is grateful to Jan Arne Telle for introducing him to several problems solved in this paper at Dagstuhl seminar 08431, and for the opportunity to visit Bergen to work with Martin Vatshelle. The first author also thanks Jesper Nederlof for several useful discussions.
References

[1] Jochen Alber, Hans L. Bodlaender, Henning Fernau, Ton Kloks, and Rolf Niedermeier. Fixed parameter algorithms for dominating set and related problems on planar graphs. Algorithmica, 33(4):461–493, 2002.
[2] Jochen Alber and Rolf Niedermeier. Improved tree decomposition based algorithms for domination-like problems. In Sergio Rajsbaum, editor, volume 2286 of Lecture Notes in Computer Science, pages 613–628. Springer, 2002.
[3] Stefan Arnborg, Derek G. Corneil, and Andrzej Proskurowski. Complexity of finding embeddings in a k-tree. SIAM Journal on Algebraic and Discrete Methods, 8(2):277–284, 1987.
[4] Nadja Betzler, Michael R. Fellows, Christian Komusiewicz, and Rolf Niedermeier. Parameterized algorithms and hardness results for some graph motif problems. In Paolo Ferragina and Gad M. Landau, editors, volume 5029 of Lecture Notes in Computer Science, pages 31–43. Springer, 2008.
[5] Andreas Björklund and Thore Husfeldt. Exact algorithms for exact satisfiability and number of perfect matchings. Algorithmica, 52(2):226–249, 2008.
[6] Andreas Björklund, Thore Husfeldt, Petteri Kaski, and Mikko Koivisto. Fourier meets Möbius: fast subset convolution. In David S. Johnson and Uriel Feige, editors, pages 67–74. ACM, 2007.
[7] Andreas Björklund, Thore Husfeldt, and Mikko Koivisto. Set partitioning via inclusion-exclusion. SIAM Journal on Computing, 39(2):546–563, 2009.
[8] Hans L. Bodlaender. Polynomial algorithms for graph isomorphism and chromatic index on partial k-trees. Journal of Algorithms, 11(4):631–643, 1990.
[9] Hans L. Bodlaender. A linear-time algorithm for finding tree-decompositions of small treewidth. SIAM Journal on Computing, 25(6):1305–1317, 1996.
[10] Hans L. Bodlaender. A partial k-arboretum of graphs with bounded treewidth. Theoretical Computer Science, 209(1-2):1–45, 1998.
[11] Hans L. Bodlaender, Fedor V. Fomin, Daniel Lokshtanov, Eelko Penninkx, Saket Saurabh, and Dimitrios M. Thilikos. (Meta) kernelization. In pages 629–638. IEEE Computer Society, 2009.
[12] Hans L. Bodlaender and Arie M. C. A. Koster. Combinatorial optimization on graphs of bounded treewidth. The Computer Journal, 51(3):255–269, 2008.
[13] Hans L. Bodlaender and Arie M. C. A. Koster. Treewidth computations I. Upper bounds. Information and Computation, 208(3):259–275, 2010.
[14] Hans L. Bodlaender and Rolf H. Möhring. The pathwidth and treewidth of cographs. SIAM Journal on Discrete Mathematics, 6(2):181–188, 1993.
[15] Hans L. Bodlaender and Dimitrios M. Thilikos. Constructive linear time algorithms for branchwidth. In Pierpaolo Degano, Roberto Gorrieri, and Alberto Marchetti-Spaccamela, editors, volume 1256 of Lecture Notes in Computer Science, pages 627–637. Springer, 1997.
[16] Hans L. Bodlaender and Babette van Antwerpen-de Fluiter. Reduction algorithms for graphs of small treewidth. Information and Computation, 167(2):86–119, 2001.
[17] Hans L. Bodlaender and Johan M. M. van Rooij. Exact algorithms for intervalising colored graphs. To appear.
[18] Binh-Minh Bui-Xuan, Jan Arne Telle, and Martin Vatshelle. Boolean-width of graphs. In Jianer Chen and Fedor V. Fomin, editors, volume 5917 of Lecture Notes in Computer Science, pages 61–74. Springer, 2009.
[19] Binh-Minh Bui-Xuan, Jan Arne Telle, and Martin Vatshelle. H-join decomposable graphs and algorithms with runtime single exponential in rankwidth. Discrete Applied Mathematics, 158(7):809–819, 2010.
[20] Mathieu Chapelle. Parameterized complexity of generalized domination problems on bounded tree-width graphs. The Computing Research Repository, abs/1004.2642, 2010.
[21] William Cook and Paul D. Seymour. An algorithm for the ring-routing problem. Bellcore technical memorandum, Bellcore, 1993.
[22] William Cook and Paul D. Seymour. Tour merging via branch-decomposition. INFORMS Journal on Computing, 15(3):233–248, 2003.
[23] Don Coppersmith and Shmuel Winograd. Matrix multiplication via arithmetic progressions. Journal of Symbolic Computation, 9(3):251–280, 1990.
[24] Derek G. Corneil, Michel Habib, Jean-Marc Lanlignel, Bruce A. Reed, and Udi Rotics. Polynomial time recognition of clique-width ≤ 3 graphs. In LATIN, pages 126–134, 2000.
[25] Bruno Courcelle, Joost Engelfriet, and Grzegorz Rozenberg. Handle-rewriting hypergraph grammars. Journal of Computer and System Sciences, 46(2):218–270, 1993.
[26] Bruno Courcelle, Johann A. Makowsky, and Udi Rotics. Linear time solvable optimization problems on graphs of bounded clique-width. Theory of Computing Systems, 33(2):125–150, 2000.
[27] Bruno Courcelle and Stephan Olariu. Upper bounds to the clique width of graphs. Discrete Applied Mathematics, 101(1-3):77–114, 2000.
[28] Marek Cygan, Jesper Nederlof, Marcin Pilipczuk, Michał Pilipczuk, Johan M. M. van Rooij, and Jakub Onufry Wojtaszczyk. Solving connectivity problems parameterized by treewidth in single exponential time. IEEE Computer Society, 2011. To appear.
[29] Babette de Fluiter. Algorithms for Graphs of Small Treewidth. PhD thesis, Utrecht University, 1997.
[30] Erik D. Demaine, Fedor V. Fomin, Mohammad T. Hajiaghayi, and Dimitrios M. Thilikos. Subexponential parameterized algorithms on bounded-genus graphs and H-minor-free graphs. Journal of the ACM, 52(6):866–893, 2005.
[31] Erik D. Demaine and Mohammad T. Hajiaghayi. The bidimensionality theory and its algorithmic applications. The Computer Journal, 51(3):292–302, 2008.
[32] Frederic Dorn. Dynamic programming and fast matrix multiplication. In Yossi Azar and Thomas Erlebach, editors, volume 4168 of Lecture Notes in Computer Science, pages 280–291. Springer, 2006.
[33] Frederic Dorn. Designing Subexponential Algorithms: Problems, Techniques & Structures. PhD thesis, Department of Informatics, University of Bergen, Bergen, Norway, 2007.
[34] Frederic Dorn. Dynamic programming and planarity: Improved tree-decomposition based algorithms. Discrete Applied Mathematics, 158(7):800–808, 2010.
[35] Frederic Dorn, Fedor V. Fomin, and Dimitrios M. Thilikos. Catalan structures and dynamic programming in H-minor-free graphs. In Shang-Hua Teng, editor, pages 631–640. SIAM, 2008.
[36] Frederic Dorn, Eelko Penninkx, Hans L. Bodlaender, and Fedor V. Fomin. Efficient exact algorithms on planar graphs: Exploiting sphere cut branch decompositions. In Gerth Stølting Brodal and Stefano Leonardi, editors, volume 3669 of Lecture Notes in Computer Science, pages 95–106, 2005.
[37] David Eppstein. Diameter and treewidth in minor-closed graph families. Algorithmica, 27(3):275–291, 2000.
[38] Michael R. Fellows, Frances A. Rosamond, Udi Rotics, and Stefan Szeider. Clique-width is NP-complete. SIAM Journal on Discrete Mathematics, 23(2):909–939, 2009.
[39] Fedor V. Fomin, Serge Gaspers, Saket Saurabh, and Alexey A. Stepanov. On two techniques of combining branching and treewidth. Algorithmica, 54(2):181–207, 2009.
[40] Fedor V. Fomin and Dimitrios M. Thilikos. A simple and fast approach for solving problems on planar graphs. In Volker Diekert and Michel Habib, editors, volume 2996 of Lecture Notes in Computer Science, pages 56–67. Springer, 2004.
[41] Fedor V. Fomin and Dimitrios M. Thilikos. Dominating sets in planar graphs: Branchwidth and exponential speed-up. SIAM Journal on Computing, 36(2):281–309, 2006.
[42] Michael L. Fredman and Dan E. Willard. Surpassing the information theoretic bound with fusion trees. Journal of Computer and System Sciences, 47(3):424–436, 1993.
[43] Martin Fürer. Faster integer multiplication. SIAM Journal on Computing, 39(3):979–1005, 2009.
[44] Robert Ganian and Petr Hliněný. On parse trees and Myhill-Nerode-type tools for handling graphs of bounded rank-width. Discrete Applied Mathematics, 158(7):851–867, 2010.
[45] Torben Hagerup. Sorting and searching on the word RAM. In M. Morvan, C. Meinel, and D. Krob, editors, volume 1373 of Lecture Notes in Computer Science, pages 366–398. Springer, 1998.
[46] Illya V. Hicks. Branchwidth heuristics. Congressus Numerantium, 159:31–50, 2002.
[47] Illya V. Hicks. Graphs, branchwidth, and tangles! Oh my! Networks, 45(2):55–60, 2005.
[48] Illya V. Hicks, Arie M. C. A. Koster, and Elif Kolotoğlu. Branch and tree decomposition techniques for discrete optimization. Tutorials in Operations Research 2005, pages 1–19, 2005.
[49] Alon Itai and Michael Rodeh. Finding a minimum circuit in a graph. SIAM Journal on Computing, 7(4):413–423, 1978.
[50] Ton Kloks. Treewidth, Computations and Approximations, volume 842 of Lecture Notes in Computer Science. Springer, 1994.
[51] Daniel Kobler and Udi Rotics. Edge dominating set and colorings on graphs with fixed clique-width. Discrete Applied Mathematics, 126(2-3):197–221, 2003.
[52] Mikko Koivisto. Partitioning into sets of bounded cardinality. In Jianer Chen and Fedor V. Fomin, editors, volume 5917 of Lecture Notes in Computer Science, pages 258–263. Springer, 2009.
[53] Ephraim Korach and Nir Solel. Linear time algorithm for minimum weight Steiner tree in graphs with bounded treewidth. Technical Report 632, Technion, Israel Institute of Technology, Computer Science Department, Haifa, Israel, 1990.
[54] Arie M. C. A. Koster, Stan P. M. van Hoesel, and Antoon W. J. Kolen. Solving partial constraint satisfaction problems with tree decomposition. Networks, 40(3):170–180, 2002.
[55] Daniel Lokshtanov, Dániel Marx, and Saket Saurabh. Known algorithms on graphs of bounded treewidth are probably optimal. The Computing Research Repository, abs/1007.5450, 2010.
[56] Daniel Mölle, Stefan Richter, and Peter Rossmanith. Enumerate and expand: Improved algorithms for connected vertex cover and tree cover. Theory of Computing Systems, 43(2):234–253, 2008.
[57] Jesper Nederlof. Fast polynomial-space algorithms using Möbius inversion: Improving on Steiner tree and related problems. In Susanne Albers, Alberto Marchetti-Spaccamela, Yossi Matias, Sotiris E. Nikoletseas, and Wolfgang Thomas, editors, volume 5555 of Lecture Notes in Computer Science, pages 713–725. Springer, 2009.
[58] Sang-il Oum and Paul D. Seymour. Approximating clique-width and branch-width. Journal of Combinatorial Theory, Series B, 96(4):514–528, 2006.
[59] Oriana Ponta, Falk Hüffner, and Rolf Niedermeier. Speeding up dynamic programming for some NP-hard graph recoloring problems. In Manindra Agrawal, Ding-Zhu Du, Zhenhua Duan, and Angsheng Li, editors, volume 4978 of Lecture Notes in Computer Science, pages 490–501. Springer, 2008.
[60] Neil Robertson and Paul D. Seymour. Graph minors. II. Algorithmic aspects of tree-width. Journal of Algorithms, 7(3):309–322, 1986.
[61] Neil Robertson and Paul D. Seymour. Graph minors. X. Obstructions to tree-decomposition. Journal of Combinatorial Theory, Series B, 52(2):153–190, 1991.
[62] Alexander D. Scott and Gregory B. Sorkin. Linear-programming design and analysis of fast algorithms for max 2-CSP. Discrete Optimization, 4(3-4):260–287, 2007.
[63] Raimund Seidel. On the all-pairs-shortest-path problem in unweighted undirected graphs. Journal of Computer and System Sciences, 51(3):400–403, 1995.
[64] Paul D. Seymour and Robin Thomas. Call routing and the ratcatcher. Combinatorica, 14(2):217–241, 1994.
[65] Volker Strassen. Gaussian elimination is not optimal. Numerische Mathematik, 14:354–356, 1969.
[66] Jan Arne Telle. Complexity of domination-type problems in graphs. Nordic Journal of Computing, 1(1):157–171, 1994.
[67] Jan Arne Telle and Andrzej Proskurowski. Algorithms for vertex partitioning problems on partial k-trees. SIAM Journal on Discrete Mathematics, 10(4):529–550, 1997.
[68] Dimitrios M. Thilikos, Maria J. Serna, and Hans L. Bodlaender. Cutwidth I: A linear time fixed parameter algorithm. Journal of Algorithms, 56(1):1–24, 2005.
[69] Leslie G. Valiant. The complexity of computing the permanent. Theoretical Computer Science, 8:189–201, 1979.
[70] Johan M. M. van Rooij, Jesper Nederlof, and Thomas C. van Dijk. Inclusion/exclusion meets measure and conquer. In Amos Fiat and Peter Sanders, editors, volume 5757 of Lecture Notes in Computer Science, pages 554–565. Springer, 2009.