FPT-space Graph Kernelizations
Frank Kammer
THM, University of Applied Sciences Mittelhessen, Giessen, [email protected]
Andrej Sajenko
THM, University of Applied Sciences Mittelhessen, Giessen, [email protected]
Abstract
Let n be the size of a parametrized problem and k the parameter. We present polynomial-time kernelizations for Cluster Editing/Deletion, Path Contraction and Feedback Vertex Set that run with O(poly(k) log n) bits and compute a kernel of size polynomial in k. By first executing the new kernelizations and subsequently the best known polynomial-time kernelizations for the problem under consideration, we obtain the best known kernels in polynomial time with O(poly(k) log n) bits. Our kernelization for Feedback Vertex Set computes in a first step an approximated solution, which can be used to build a simple algorithm for undirected s-t-connectivity (USTCON) that runs in polynomial time and with O(poly(k) log n) bits.

2012 ACM Subject Classification Mathematics of computing → Graph algorithms

Keywords and phrases undirected s-t-reachability, USTCON, cluster editing, cluster deletion, path contraction, feedback vertex set, space-efficient algorithm

Funding
Andrej Sajenko : Funded by the DFG – 379157101.
A fixed-parameter tractable problem is a problem whose instances of size n are given with an extra parameter k and can be solved in FPT time, i.e., in time f(k) · n^{O(1)} where f is some computable function independent of n. One major field in parameterized complexity is kernelization, i.e., an algorithm that applies several rules to simplify an instance of size n to an instance whose size depends only on k.

Traditionally, researchers in the field of fixed-parameter tractability focus only on time. However, it is natural to ask: without increasing the running time too much, can the space of such algorithms also be bounded by FPT space, i.e., by O(f(k)) computer words or O(f(k) log n) bits, where f is some computable function?

We call the memory where the input of an algorithm is stored the input memory and the memory that an algorithm uses to compute its result the working memory. Our model of computation is the read-only word-RAM, where we assume a word size of w = Ω(log N) bits (with N being the size of the input) and that arithmetic operations (+, −, ·, /, modulo) and bit-shift operations on bit sequences of w bits take constant time. We have constant-time random access to the input memory. Another model is the streaming model, where the input is presented as a linear stream of unordered elements and no other access to the input memory is permitted. It is easy to see that the streaming model is more restricted than the read-only word-RAM, allowing us to take over all upper-bound results shown in the streaming model while possibly allowing us to break the lower bounds. Fafianie and Kratsch [7] presented upper and lower bounds for the question above in the streaming model. For an overview of our results, see Table 1.

Table 1 The results shown by Fafianie and Kratsch are marked with †.

problem               O(1)-pass streaming     word RAM
Vertex Cover          O(poly(k) log n)†       O(poly(k) log n)†
Cluster Editing       Ω(n)†                   O(poly(k) log n)
Path Contraction      ?                       O(poly(k) log n)
Feedback Vertex Set   Ω(n)†                   O(poly(k) log n)

Algorithms that treat space as a valuable resource are called space-efficient algorithms. Several space-efficient algorithms have been published [1, 4, 5, 9, 14, 15, 17]. We can roughly say that space-efficient algorithms are usually slower in practice than their non-space-efficient versions, because they need asymptotically more running time or the hidden constant in their running time is too large. However, if the difference in space between a space-efficient and a non-space-efficient algorithm is large enough, then fewer cache faults occur or less memory has to be copied. The time saved here can compensate for a slightly worse running time. For an example, see the implementation and tests [16] of a so-called space-efficient subgraph stack [10]. The usually huge difference between k and n makes fixed-parameter algorithms with FPT space interesting.

In the following we denote by n the number of vertices of the given graph and by k the given parameter. A usual kernelization uses O(poly(n) log n) bits of working memory and produces a kernel of O(f(k) log n) bits, where f(k) is the kernel size. Our approach is to either modify standard kernelization rules to bound their working memory to O(poly(k) log n) bits or to find new ones working within the same bound. These rules do not necessarily produce the smallest known kernel, but the kernel size is bounded by poly(k), and the current best polynomial-time kernelization on that kernel then also performs within our targeted space bound of O(poly(k) log n) bits. Moreover, our kernelization for Cluster Editing/Deletion and
Path Contraction is almost as fast as the best known kernelization (we have only an extra factor of log k), i.e., our kernelization is space-efficient.

In the next section we give definitions required for this paper and show some auxiliary algorithms used afterwards. These algorithms, together with our kernelization result for the so-called Feedback Vertex Set problem, easily allow us to solve undirected s-t-connectivity in an n-vertex graph that has an unknown feedback vertex set of size k in polynomial time using O(poly(k) log n) bits. Directed and undirected s-t-connectivity (STCON and USTCON, resp.) are among the most essential problems; e.g., SL is the complexity class of problems that are log-space reducible to USTCON. In a celebrated result, Reingold [18] showed a log-space algorithm for USTCON and thereby showed that SL is equal to the complexity class L, i.e., the class of decision problems that can be solved using logarithmic space. His algorithm is based on so-called zig-zag products, which are relatively complicated and difficult to analyze. An alternative algorithm is presented by Rozenman and Vadhan [19] by using so-called derandomized squaring, whose construction is also relatively unnatural. Our solution is simple, but can be used to obtain a log-space algorithm only in the case where the given graph has an unknown feedback vertex set of size O(1).

We also want to remark that connectivity is often solved by a DFS. To the authors' knowledge, a polynomial-time sublinear-space DFS exists only on planar graphs and on graphs of treewidth O(n^{1/2−ε}) for some ε > 0, using Õ(√n) bits.

In Section 3, we describe our kernelizations, where we begin in Section 3.1 with the kernelizations for Cluster Editing/Deletion. The standard kernelizations iterate over all vertex triples and use large matrices to store information for each vertex pair whenever the triple induces a path in the graph (i.e., Θ(n² log n) bits). Instead, we make sure that an edge is "considered" in at most k + 1 induced paths.
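To make the counting idea of Section 3.1 concrete, the following toy sketch counts, for every vertex triple that induces a path on three vertices, how often each vertex pair is involved, capping every counter at k + 1 as in our kernelization. It is an illustration only: the graph representation (a vertex set plus a set of frozenset edges) and the plain dictionaries are assumptions of this sketch, whereas the algorithm in Section 3.1 streams the triples edge-wise and keeps the capped counters in balanced heaps.

```python
from itertools import combinations

def count_forbidden(V, E, k):
    """One stream over all vertex triples with counters capped at k + 1.
    Returns (delete_counters, add_counters), keyed by frozenset vertex pairs:
    a delete counter of a pair counts induced paths containing it as an edge,
    an add counter counts induced paths in which it is the missing edge."""
    del_cnt, add_cnt = {}, {}
    for u, v, w in combinations(sorted(V), 3):
        pairs = [frozenset((u, v)), frozenset((u, w)), frozenset((v, w))]
        present = [p in E for p in pairs]
        if sum(present) != 2:
            continue                      # not an induced path on 3 vertices
        missing = pairs[present.index(False)]
        here = [(del_cnt, p) for p, b in zip(pairs, present) if b]
        here.append((add_cnt, missing))
        # increment the three counters unless one of them already reached k + 1
        if any(c.get(p, 0) > k for c, p in here):
            continue
        for c, p in here:
            c[p] = c.get(p, 0) + 1
    return del_cnt, add_cnt
```

A counter that reaches k + 1 forces the corresponding edge modification, and the vertices with a non-zero counter induce the kernel.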
In Section 3.2, we focus on a kernel for Path Contraction. To find such a kernel, one usually searches for bridges and contracts the endpoints of the bridges. Instead of this reduction rule, we only shrink induced paths as long as their length is greater than k + 3 and show that by doing so we get a polynomial-sized kernel.

Our main result is our kernelization for Feedback Vertex Set, shown in Section 3.3. It is our most complicated algorithm. As part of the algorithm, we construct an approximated feedback vertex set that we can also use for our s-t-connectivity result in Section 2. Many kernelizations for Feedback Vertex Set search repeatedly for a flower, i.e., a set of k + 1 cycles that are disjoint except for one common vertex v. Since it is not clear how we can determine a flower with O(poly(k) log n) bits, we first show how to compute a feedback vertex set S of size polynomial in k and with O(poly(k) log n) bits, improving an algorithm from Becker and Geiger [2]. Their algorithms either return a (2 log n)-approximation or need to search for cycles, where an implementation with little space seems to be complicated. By removing S, the graph decomposes into several trees T. Afterwards, we remove trees that are not relevant and shrink the remaining trees to a size polynomial in k. To shrink the trees, we search for a special kind of flower.

In the following, an instance of a fixed-parameter tractable problem is described by a tuple (G, k) where G is a graph and k is a parameter. We call an instance a yes-instance exactly if there is a solution of size at most k; otherwise, we call it a no-instance. We define the size of a graph as the sum of its numbers of vertices and edges.

▶ Definition 2.1 (kernel, full kernel). A kernel for a given instance (G, k) of a fixed-parameter tractable problem is an instance (G′, k′) with the properties that the size of G′ and k′ is bounded by f(k) where f is a computable function (independent of the size of G) and (G′, k′) contains a solution for the problem exactly if (G, k) contains a solution. In case the kernel contains the vertices of every solution of size at most k, it is called full.

To compute a kernel, a kernelization algorithm (or, for short, a kernelization) is used. For an easier description, we allow our kernelization to output "no solution". From that answer, one can easily construct a trivial no-instance. Usually, a kernelization algorithm modifies G step-wise by local modifications described in so-called reduction rules. A reduction rule is safe if it maintains solvability of the given instance. Our kernelizations compute a kernel (G′, k′) of an instance (G, k) by deleting some vertices and/or edges of G and so obtain G′. Furthermore, our kernelizations always allow us to decide which subset S of the deleted vertices or edges has to be part of a solution for (G, k) in addition to the vertices or edges in the solution for (G′, k′).

On the read-only word-RAM and with our space bound we can neither modify the input graph G nor create a mutable copy of it. Instead, we compute the information needed on demand and use it to construct a graph G′ representing the kernel. To store G′ we use a mutable graph structure where a runtime penalty factor logarithmic in the size of the graph has to be accounted for. Such a graph structure can be realized by using a balanced heap for the vertices u of G′.
The heap then stores a tuple (u, p_u) where p_u is a pointer to another balanced heap that stores u's neighborhood.

We now want to show how our approximation of a feedback vertex set can be used on an n-vertex graph with an unknown feedback vertex set of size k = O(1) to solve USTCON in polynomial time and with O(log n) bits. As a subroutine, we use several depth-first searches (DFS). A DFS visits all vertices of a connected component of a graph once as follows: starting with some vertex r, it marks r (all other vertices are unmarked) and puts the tuple (r, 0) on a stack, which we call the DFS stack. Then it repeats the following until the stack is empty: take a tuple (u, i) from the stack. If i < deg(u), take the ith neighbor v from u's neighborhood and put (u, i + 1) on the stack. Moreover, if v is not marked, mark it, output it, and add (v, 0) on the stack. We then call the edge {u, v} a tree edge. All other edges of the graph are called back edges.

A DFS has many applications; one is to compute a so-called DFS tree. A DFS tree is a spanning tree consisting of all tree edges. We next show that we can run a DFS in a tree with few bits.

▶ Lemma 2.2.
Given an n-vertex tree T and a node r of T as root, there is a linear-time O(log n)-bits algorithm that traverses all vertices of T in depth-first-search manner. Proof.
Start the algorithm at r and output it. In an outer loop, iterate over the children u of r and set p := r (define p as the predecessor of u). In an inner loop, repeat as long as u ≠ r: Output u. Then determine the index i of vertex p in u's neighborhood and take u′ as the vertex at index (i + 1) mod deg(u) of u's neighborhood—here we assume that the indices run from 0 to deg(u) − 1. Set p := u, u := u′ and repeat the inner loop, i.e., visit u′.

Observe that, after the algorithm visits all descendants of a vertex u and returns from a child p to u, we want to return to the parent v of u. By construction, the index of edge {u, p} is one less than the index of edge {u, v} at u, since we used the edge from u with index one more than edge {u, v} to visit a first child of u. This allows the algorithm to move to the parent after it has visited all of u's descendants. The inner loop traverses a whole maximal subtree rooted at a child of r, and the outer loop makes sure that such a traversal is started for every child of r. Every edge is traversed twice (once in each direction), leading to a total running time of O(n). The algorithm runs with O(log n) bits since it has to store only a constant number of vertices and indices. ◀

We next want to describe our algorithm for solving USTCON with a given feedback vertex set F. Let F*, F′, F′′ ⊆ F be initially empty sets. The vertices of F connect the trees of G[V \ F]. Our approach is to use F* as the set of already reached vertices of F and F′ as the set from which we explore trees to reach new vertices of F, which we store in F′′. A sketch of the sets is shown in Figure 1.

Initially, use Lemma 2.2 to determine the set F′ ⊆ F \ F* of vertices that are adjacent to a vertex in the tree of G[V \ F] that contains s. If s ∈ F, take F′ := {s}. Then, for each tree T of G[V \ F] that is adjacent to a vertex of F′, run Lemma 2.2 to determine the set F′′ ⊆ F \ F* of vertices that are adjacent to a vertex of such a tree T. If t is part of F′ or part of some tree T above, then there exists a path between s and t in G. Otherwise, if F′′ ≠ ∅, repeat with F* := F* ∪ F′, F′ := F′′ and F′′ := ∅.

▶ Lemma 2.3.
Given an n-vertex m-edge graph G together with a feedback vertex set F of size k in G, we can decide USTCON in O(nm log k) time and with O(k log n) bits. Proof.
It remains to analyze the time and space of the algorithm above. By storing the vertices of F*, F′ and F′′ in balanced heaps, our algorithm uses O(k log n) bits, but accessing a vertex in a heap costs O(log k) time per access. In the worst case we have to explore each tree T of G[V \ F] once for each edge connecting T with a vertex of F. Since we cannot store T, which is part of G[V \ F], we use G, but skip over edges pointing to a vertex of F. Thus, the runtime to explore all trees once is O(n log k) and the total running time is O(nm log k). ◀
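A minimal Python sketch of the traversal of Lemma 2.2 and of the layered exploration of Lemma 2.3. The adjacency-dict representation is an assumption of this sketch; moreover, the sets seen, reached and frontier are plain in-memory stand-ins for the balanced heaps of Lemma 2.3, so the sketch is not itself space-efficient. The rotation walk in euler_walk and scan_tree, however, keeps only the two vertices p, u and one index, mirroring the O(log n)-bit traversal.

```python
def euler_walk(adj, r):
    """Yield every visit of the DFS traversal of the tree `adj` rooted at r.
    Each edge is traversed twice; only p, u and one index are stored."""
    yield r
    for c in adj[r]:                 # outer loop: one subtree per child of r
        p, u = r, c                  # p is the predecessor of u
        while u != r:                # inner loop: walk one whole subtree
            yield u
            i = adj[u].index(p)      # position of p in u's neighborhood
            p, u = u, adj[u][(i + 1) % len(adj[u])]  # next edge after p

def dfs_preorder(adj, r):
    """First visits of the Euler walk = DFS preorder (dedup for demo only)."""
    seen, order = set(), []
    for v in euler_walk(adj, r):
        if v not in seen:
            seen.add(v)
            order.append(v)
    return order

def ustcon(adj, F, s, t):
    """Decide s-t connectivity given a feedback vertex set F (Lemma 2.3).
    Assumes F really is a feedback vertex set, so G[V \\ F] is a forest."""
    F = set(F)
    def nb(u):                       # neighborhood restricted to V \ F
        return [w for w in adj[u] if w not in F]

    def scan_tree(r):
        """Rotation-walk the tree of G[V \\ F] containing r; report whether
        t lies in it and which F-vertices border it (trees are re-scanned
        instead of stored, as in the lemma)."""
        hit = (r == t)
        border = {w for w in adj[r] if w in F}
        for c in nb(r):
            p, u = r, c
            while u != r:
                hit = hit or u == t
                border |= {w for w in adj[u] if w in F}
                ns = nb(u)
                i = ns.index(p)
                p, u = u, ns[(i + 1) % len(ns)]
        return hit, border

    if s in F:
        frontier = {s}               # F' in the text
    else:
        hit, frontier = scan_tree(s)
        if hit:
            return True
    reached = set()                  # F* in the text
    while frontier:
        if t in frontier:
            return True
        new = set()                  # F'' in the text
        for f in frontier:
            for w in adj[f]:
                if w in F:
                    new.add(w)       # edge between two F-vertices
                else:
                    hit, border = scan_tree(w)
                    if hit:
                        return True
                    new |= border
        reached |= frontier
        frontier = new - reached
    return False
```

Since frontier is always a subset of F not yet in reached, the outer loop runs at most |F| times, matching the repeated exploration in the proof above.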
Figure 1 A sketch of our algorithm for solving USTCON with a given feedback vertex set F (non-gray). The figure shows the sets F* (red), F′ (blue) and F′′ (green) used by the algorithm. In contrast to the example shown, the gray trees can be large and can differ from each other.

By Theorem 3.3, we can compute an approximated feedback vertex set of an n-vertex m-edge graph with an unknown feedback vertex set of size k in O(mn log k) time and O(k² log n) bits. ▶ Corollary 2.4.
Given an n-vertex m-edge graph G with an unknown feedback vertex set of size k in G, we can decide USTCON in O(mn log k) time and with O(k² log n) bits.

Toward the end of this section, we show that our DFS traversal from Lemma 2.2 can be used to check if a given graph G is a tree or to return a non-tree edge.

▶ Lemma 2.5.
Given an n-vertex graph G and a vertex r, there is an O(log n)-bits algorithm that answers in O(n²) time whether the connected component of r in G is a tree, or returns a back edge, i.e., a non-tree edge with respect to the DFS tree rooted at r. Proof.
We want to use our algorithm of Lemma 2.2. To do so, let us store the first edge e used by the algorithm. As stated in the lemma, the algorithm only works on trees. However, it is not hard to see that we can run the algorithm as long as it does not discover/use a back edge {u, v} to move from u to v, where v is not the predecessor of u on the DFS stack, i.e., {v, u} is not a tree edge. Using the invariant that the algorithm has discovered no non-tree back edge, we can check this by running a second DFS from r to u. If the check passes, we can go to v and the invariant holds again. If the algorithm returns to r by using e again, return that G is a tree.

The algorithm described uses O(log n) bits and explores the vertices in the same manner as the algorithm described in Lemma 2.2. However, before taking an edge it has to run the algorithm of Lemma 2.2 to check for a back edge, which can be done in O(n) time. Because the algorithm stops immediately after it finds a back edge, the algorithm considers only O(n) edges and so the total running time is O(n²). ◀

We start to demonstrate our ideas with an easy kernelization for
Cluster Editing. We now focus on Cluster Editing, which is formally defined as follows: given a tuple (G, k) where G = (V, E) is an n-vertex m-edge graph and k is an integer (i.e., the parameter), is it possible to add or delete at most k edges so that the resulting graph becomes a union of disjoint cliques? Several kernelizations for the problem are known. Gramm et al. [8] presented a kernel with at most 2k² + k vertices that can be computed in O(n³) time by using several (n × n) matrices with entries of Θ(log n) bits each. Bäcker et al. [3] improved the constants of the kernel size, but also used (n × n) matrices to check for unaffordable edge modifications. Both kernelizations above have a reduction rule that removes connected components that are cliques. The space consumption of that rule is not analyzed in either paper. Furthermore, Fafianie and Kratsch [7] showed that a t-pass streaming kernelization requires Ω(n/t) bits.

We next show that k + 1 streams over the subgraphs induced by all vertex triples allow us to compute a full kernel by using only O(poly(k) log n) bits.

We start by describing the details of the first stream and explain the correctness subsequently. Whenever the subgraph induced by a vertex triple {u, v, w} is a forbidden subgraph, i.e., consists of exactly two edges, say {u, v} and {v, w}, and no edge {u, w}, then allocate initially-zero counters C⁻_{u,v}, C⁺_{u,v}, C⁻_{u,w}, C⁺_{u,w}, C⁻_{v,w}, C⁺_{v,w}—in case a counter already exists, no allocation of that counter is done—and subsequently increment C⁻_{u,v}, C⁺_{u,w}, C⁻_{v,w} by 1 unless one of these counters is already k + 1. In this case, none of the three counters is changed or allocated. Intuitively, a counter C⁻_{u,v} (C⁺_{u,v}) stores the number of vertex triples {u, v, w} for which either {u, v} has to be deleted from (added to) G or another edge in the triple has to be changed. Thus, it suffices to store C⁻_{u,v} = k + 1 (C⁺_{u,v} = k + 1) to know that {u, v} will be deleted (added). At the end of the first stream over all vertex triples, if we have more than 3k(k + 1) vertex pairs with a non-zero counter, we have a no-instance, since adding or deleting one edge {u, v} can resolve the forbidden subgraph of at most k + 1 counted vertex triples—recall that this is the maximum number of forbidden subgraphs we counted above—and each such triple touches three vertex pairs. If in the last stream over all vertex triples no counter reached k + 1, then we skipped over no forbidden subgraphs. In other words, our counters indicate exactly those vertices that are not part of a connected component that is a clique, and we can return the graph induced by all vertices with a non-zero counter as a kernel.

If, in the last stream over all vertex triples, a counter of a vertex pair {u, v} reached k + 1, we modify G by adding or deleting an edge between u and v depending on whether C⁺_{u,v} = k + 1 or C⁻_{u,v} = k + 1, respectively. If both counters are k + 1, we again have a no-instance. After the change, we stream again over all vertex triples. We then find a kernel or have another change. If we are not done after the (k + 1)st stream, i.e., after having k required changes, we again have a no-instance.

To store the counters, we use the corresponding vertex pairs as keys in a balanced heap. Since we have O(k²) counters, we can store them by using O(k² log n) bits and access them in O(log k) time. To turn the kernel into a full kernel, add the at most k + 1 forbidden subgraphs found in each of the O(k) streams.

▶ Theorem 3.1.
Given an n-vertex m-edge instance (G, k) of Cluster Editing, there is an O(nm log k)-time O(k² log n)-bits kernelization that either outputs a full kernel consisting of O(k²) vertices or returns that (G, k) is a no-instance. Proof.
Note that k + 1 streams over all vertex triples allow us to compute a full kernel of size O(k²) for Cluster Editing.

We next describe and analyze an implementation of our algorithm above on the read-only word RAM. First of all, we do not need to iterate over all vertex triples {u, v, w}. Instead, we can iterate over all edges and, for each edge {u, v}, over all vertices w. Thus, using a mutable graph, we can compute a stream in O(nm log k) time.

Instead of using a completely new stream whenever we add or delete an edge {u, v} of the mutable graph, it suffices to update the counters for all triples {u, v, w} over all vertices w. This can be done in O(n log k) time. Since we add or delete O(k) edges, the 2nd to (k + 1)st streams run in O(nk log k) = O(nm log k) time. Clearly, our algorithm uses O(k² log n) bits—note that we can assume without loss of generality that k < m, or we have a yes-instance. ◀

We finally want to remark that
Cluster Deletion, where we are allowed to delete edges only, but not to add edges, can be solved in a similar way. A full kernel can be computed almost as described above. The only change is that we return a no-instance for (G, k) whenever a counter C⁺_{u,v} = k + 1 for a vertex pair {u, v}. Counters C⁻_{u,v} are still allowed to be incremented up to k + 1.

Let G be an n-node m-edge graph and let C be a subset of the edges of G. We write G/C for the graph obtained from G by contracting each edge in C. Contracting an edge is done by merging its endpoints and removing any loops or parallel edges afterwards. In the path contraction problem, a graph G = (V, E) is given and the task is to find a set C ⊆ E such that G/C is a path. In the parameterized version, called Path Contraction, a graph G = (V, E) with a parameter k ∈ N is given, and the task is to find a set C ⊆ E with |C| ≤ k such that G/C is a path.

The currently best kernelization for
Path Contraction runs in linear time and produces a kernel consisting of 3k + 4 vertices. It is due to Seng and Sun [20] and builds on Heggernes et al.'s algorithm [11]. One reduction rule used in both algorithms is to determine the bridges whose removal would disconnect the graph into components of size at least k + 2. The reduction rule then merges the endpoints of those bridges.

Finding bridges is closely related to the connectivity problem. Instead of computing bridges with our kernelization algorithm, we introduce a new reduction rule below, which is implemented by a breadth-first search (BFS). We shortly sketch a usual BFS and the construction of a BFS tree. The BFS visits the vertices of an input graph round-wise. As a preparation for the first round, it puts some vertex v into a queue Q and starts a round. In a round it dequeues every vertex u of Q and marks u as visited. Moreover, it puts every unvisited vertex w ∈ N(u) into a queue Q′. We then say that w was first discovered from u and add the edge {u, w} to an initially empty BFS tree. If Q′ is empty at the end of the round, the BFS finishes. Otherwise, it proceeds with the next round with Q := Q′ and Q′ := ∅.

To present our reduction rules, whose correctness is explained subsequently, we define a path P = v₀, . . . , v_{ℓ−1} (ℓ ∈ N) to be clean exactly if its inner vertices v₁, . . . , v_{ℓ−2} are all of degree 2. Moreover, we call v₀ the start vertex of P and v_{ℓ−1} the end vertex of P.

Rule 1 If G contains more than n + k edges or a tree T as a subgraph such that T has more than k + 2 leaves, output "no-instance".

Rule 2
If there is a clean path P that consists of more than k + 2 vertices, contract all but k + 2 arbitrary edges of P.

The first part of Rule 1 corresponds to the following well-known reduction rule for path contraction: an n-node graph with more than n + k edges is a no-instance, since an n-vertex path has fewer than n edges and at most k edges can be contracted. Concerning the second part, observe: if we have a tree with more than k + 2 leaves, then a path can connect two leaves, but all other leaves must be contracted with their parents. Therefore, we can bound the number of leaves in a BFS tree (and thus the size of each queue Q of the BFS) by k + 2, or we have a no-instance.

We now consider Rule 2. Observe that either all edges of P must be chosen to be contracted or none at all. Moreover, no algorithm can contract all edges of a clean path with ℓ > k + 1 vertices. Thus, contracting an arbitrary edge of such a path P is a safe reduction rule, and we get a full kernel.

Assume in the following that an exhaustive application of our reduction rules does not return "no-instance". For the time being, let us ignore vertices of degree 2 in our BFS tree by removing them and connecting their two neighbors by an edge. By Rule 1, we have a tree with at most k + 2 leaves and thus O(k) edges. Undoing the removal of the degree-2 vertices, each edge can be replaced by a clean path with at most k + 2 vertices, i.e., the kernel has O(k²) vertices in total.

We next sketch our ideas to run the kernelization in O(n log k) time by using O(k² log n) bits by basically running a BFS once—see also Fig. 2. However, we change the information on the queue. Instead of storing just vertices v on the BFS queue, we store quadruples. Each quadruple (v, p, i, v*) consists of the vertex v and its predecessor p if v is not the root. Moreover, let P = (v₁, . . . , v_i) be the clean path induced by the successive ancestors of v in the BFS tree that have degree 2. Then i is the number of vertices of P, i.e., v = v_i, and v* = v_{k+1} is the (k + 1)st vertex on P (if it exists). This way we can easily check both reduction rules above.

Figure 2
For the sake of the example, our modified BFS is applied on the leftmost vertex. It round-wise counts vertices on a clean path, removes the dotted vertices (those vertices are on a clean path and have more than k + 1 predecessors on the path) and connects the neighbors of the removed vertices with an edge (drawn bold). The dashed edges are edges that are not used to discover new vertices.

▶ Theorem 3.2.
Given an n-vertex instance (G, k) of Path Contraction, there is an O(n log k)-time O(k² log n)-bits kernelization that either outputs a full kernel consisting of O(k²) vertices and a kernel of 3k + 4 vertices or returns that (G, k) is a no-instance. Proof.
The BFS visits the vertices as usual and updates its quadruples as follows. For each quadruple (v, p, i, v*), it iterates over v's neighborhood and puts, for every unvisited neighbor v′, the quadruple (v′, v, 1, null) if v′ is of degree other than two, and otherwise the quadruple (v′, v, i + 1, v**), where v** is v if i = k + 1, and otherwise v** = v*.

By Rule 1 we can bound the size of the BFS queue by k + 2 and thus the size of the kernel by O(k²). To ensure Rule 1, we can easily count the number of leaves in the kernel.

We now describe how a kernel (G′, k′) can be constructed in adherence to Rule 2. Instead of contracting arbitrary edges, we contract edges at the end of a clean path. The contraction is realized by not copying the inner vertices and edges at the end of a clean path while the BFS traverses the path, and by connecting the (k + 1)st vertex with the last vertex of the path in the kernel. To avoid adding vertices into the queue that are already visited, we maintain the vertices of the previous and the current queue inside two balanced heaps, respectively.

For simplicity, we first ignore the problem that two vertices in the BFS queue may be adjacent (e.g., the BFS starts to explore a clean path from both its endpoints simultaneously). For each quadruple (v, p, i, v*), we add the vertex v into G′ if i ≤ k + 1 and, if additionally p ≠ null, we also add the edge {v, p} into G′. If the degree of v in G is not two, then v is the end of a clean path and we add the edge {v*, v} if i > k + 1 (the bold edges in Fig. 2). Since edges between visited vertices also need to be part of the kernel (in Fig. 2 they are shown dashed), we additionally add, for every already visited u ∈ N(v), the edge {v, u} into G′. To check quickly if u is visited, we use our two balanced heaps maintaining the vertices of the queues.

We now consider the case where two vertices v and v′ on the BFS queue are connected to each other in G. An example with three connected vertices in the queue is shown in the right part of Figure 2. If such a vertex v is processed, the clean paths end at v and v′ and we run the changes as described above. In addition, we connect v and v′. Moreover, because a clean path can be explored from both its endpoints, our algorithm then can add up to 2(k + 2) vertices of a clean path into the kernel. In fact, we only guarantee the exhaustive application of the following reduction rule instead of Rule 2.

Rule 2'
If there is a clean path P that consists of more than 2(k + 2) vertices, contract all but 2(k + 2) arbitrary edges of P.

Thus, we can only conclude to have a no-instance if the number n′ of kernel vertices exceeds twice the bound derived above or if the kernel has more than n′ + k edges. Our kernel is still small enough to apply Seng and Sun's linear-time kernelization within our space bounds. We obtain a kernel of 3k + 4 vertices.

It remains to show the space and time bounds of our kernelization. The queue size is bounded by O(k) vertices and thus cannot exceed O(k log n) bits. The kernel is bounded by O(k²) vertices and O(k²) edges and thus uses O(k² log n) bits. In total, we use O(k² log n) bits. Concerning the time bound, a standard BFS runs in O(n + m) time. By Rule 1, m = O(n) or we stop. The algorithm has to check for each vertex if it is in a balanced heap of size at most k + 2, which takes O(log k) time per vertex. In total we have a running time of O(n log k). ◀

Given a graph G = (V, E), a feedback vertex set F ⊆ V is a set of vertices whose removal from G turns G into an acyclic graph (a forest). In the parameterized version, called Feedback Vertex Set, a tuple (
G, k) is given where k is a parameter and we search for a feedback vertex set F of size at most k.

Iwata showed a kernelization for Feedback Vertex Set that produces a kernel consisting of at most 2k² + k vertices and 4k² edges and runs in O(k⁴ m) time [12]. Among other reduction rules, he makes use of the rules described below. For the last rule we need to define a v-flower of order d as a set of d cycles pairwise intersecting exactly in v.

Basic Rule 1
Remove a vertex v with deg(v) ≤ 1.

Basic Rule 2
Remove a vertex v that has only two incident edges { v, u } and { v, w } (possibly u = w ), and add the edge { u, w } . Flower Rule
Remove a vertex v if a v-flower of order k + 1 exists and reduce k by 1.

We start by describing our general idea to find a kernel for Feedback Vertex Set with O(poly(k) log n) bits. In a first step, we want to find an approximated feedback vertex set X consisting of O(k²) vertices. By definition, G[V \ X] is a forest, i.e., it consists of connected components that are all trees. In a second step, we analyze the relation between these trees and the vertices of X. This allows us to remove several trees and to shrink the remaining ones. Moreover, whenever we obtain the knowledge that a vertex v must be part of every solution, we put v into a set F. Subsequently, we can simplify the instance because every vertex in F with its incident edges can be removed from the given graph. In other words, we consider only the graph G_F = G[V_F] with V_F = V \ F and analyze the relation between the components of the graph G_X = G[V_F \ X] and the vertices of X. We next present our idea in more detail. Our implementation is given after the high-level description. Initially, set F = ∅ and X = ∅.

Part A (approximated FVS X of maximum size k² + 3k − 2): The idea of our approximation algorithm can be described with the following steps, sketched in Fig. 3: 1) If G has m > kn + n − 1 edges, output that (G, k) is a no-instance. 2) Take the k vertices with the largest degree into X. 3) Obtain a graph G_Y from G_X by exhaustively applying Basic Rule 1. 4) Add all vertices of degree greater than 2 of G_Y into X. 5) Obtain a graph G_Z from G_X by exhaustively applying Basic Rule 1. Since all vertices of G_Z have degree two afterwards, every connected component is a cycle. 6) For each vertex v of G_Z, explore its cycle and select the smallest vertex of the cycle into the set X—intuitively speaking, we cut the component at that vertex. If more than k such cycles exist, output that (G, k) is a no-instance. Otherwise, proceed with the next vertex of G_Z.

(a) FVS of size k = 4 (b) Step 2 (c)
[Figure 3 panels: (a) an FVS of size k = 4; (b)–(f) the graphs before Steps 2–6.]
Figure 3
A sketch of our algorithm computing an approximated FVS. The figure shows the graphs before Step i ∈ {2, . . . , 6}. A minimal feedback vertex set is shown in green. The yellow vertices are added to X whereas red vertices are removed by Basic Rules 1 and 2.

We next show that the set X obtained by the algorithm is an approximated feedback vertex set for G. For the analysis we assume to know an optimal feedback vertex set F of ℓ ≤ k vertices in (G, k). Each of the ℓ vertices can be connected with each of the n vertices in the graph. Moreover, the removal of F from G removes at most kn edges and, since F is a feedback vertex set, it turns G into a forest with at most n − 1 edges. Consequently, if G has more than kn + n − 1 edges, we deal with a no-instance.

After Step 2 the set X consists of k vertices with the largest degree. Assume that the vertices x₀, x₁, . . . , x_{k−1} ∈ X and the vertices v₀, v₁, . . . , v_{ℓ−1} ∈ F (ℓ ≤ k) are ordered by their degree, starting with the largest. We know that deg(x_i) ≥ deg(v_i) holds for each i = 0, . . . , ℓ − 1. We must take into account that some of the vertices of X are connected to each other, meaning that the removal of x_i may reduce the degree of each vertex x_j (j > i) by one. Since the removal of F turns G into a forest, i.e., m − Σ_{i=0}^{ℓ−1} deg(v_i) ≤ n − 1, G_X has at most m − Σ_{i=0}^{k−1} deg(x_i) ≤ m − Σ_{i=0}^{ℓ−1} (deg(v_i) − i) = m − Σ_{i=0}^{ℓ−1} deg(v_i) + Σ_{i=0}^{ℓ−1} i ≤ n − 1 + k²/2 − k/2 edges.

In Step 3, Basic Rule 1 removes every subtree of G_X whose root is a cut vertex in G (the only connection to the rest of G). We so obtain a graph G_Y. Let ℓ′ be the number of vertices that are removed by Basic Rule 1 to produce G_Y. Then G_Y has n′ = n − k − ℓ′ vertices, all of degree at least 2, and at most m′ = n − 1 + k²/2 − k/2 − ℓ′ edges. It is not hard to see that G_Y has at most 2(m′ − n′) = k² + k − 2 vertices of degree greater than 2, which Step 4 adds to X. What remains in G_Z after Step 5 are separate cycles, each of which requires us to take one arbitrary vertex into X, meaning that at most k such cycles may exist or we have a no-instance. Hence, after applying Step 6 on a yes-instance, we get a feedback vertex set X of size k + (k² + k − 2) + k = k² + 3k − 2.

▶ Theorem 3.3.
Given an n-vertex m-edge instance (G, k) of Feedback Vertex Set, there is an O(mn log k)-time, O(k² log n)-bits algorithm that either returns a feedback vertex set X consisting of at most k² + 3k − 2 vertices or returns that (G, k) is a no-instance.

Proof.
We show how to realize the non-trivial steps of the algorithm above. Let V be the vertices of the given graph. In Step 2, we store the k vertices of the largest degree in a balanced heap so that we can answer queries for G_X in O(log k) time by checking for modifications stored in the balanced heap.

In contrast to our description of Step 3, we do not explicitly compute G_Y since we cannot store such a graph. Instead we compute the required information on demand. To run Step 4, we iterate over all vertices v₁, . . . , v_n of G. To determine the degree of v_i, we iterate over all neighbors u of v_i. For each neighbor u, we check with Lemma 2.5 if the connected component of G[V \ {v_i}] containing u is a tree. If so, we count the edge. The degree of v_i in G_Y is equal to the number of edges that we counted.

We also do not run Step 5 to remove subtrees from G_X. Instead, we need to ignore these subtrees in Step 6. In an outer loop, iterate over the vertices v₁, . . . , v_n of G, but skip over the vertices in X. We now want to move over a cycle in G_Z and select the smallest vertex into X. Let v = v_i be the currently visited vertex on the cycle, let z := v be the smallest vertex found in the cycle yet and p := v be the previously visited vertex on the cycle. In an inner loop repeat until broken: Take the two edges {v, w₁} and {v, w₂} for which Lemma 2.5 finds no back edge in the component of G[V \ X]. Go to a vertex w ∈ {w₁, w₂} \ {p} by updating p := v, v := w and z := min{z, v}. If v = v_i, break the inner loop, add the vertex z with the smallest ID to X, and continue the outer loop. If no vertex w is found, then the cycle was already broken in a previous iteration of the outer loop, and we also continue the outer loop. To avoid adding too many vertices to X during Step 6, count the added vertices and, if necessary, stop and answer “no-instance”.

We now focus on the running time and space analysis. Step 1 runs in O(1) time.
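The inner loop of Step 6 can be sketched as follows. In this stand-in of ours, the vertex set of G_Z is given explicitly as `live` instead of being recomputed via Lemma 2.5, and every vertex of `live` is assumed to have exactly two neighbors within `live`.

```python
def cut_cycles(adj, live, k):
    """Sketch of Step 6: walk each remaining cycle with pointers p (previous),
    v (current) and z (smallest seen), then add z to X. Returns the list of
    chosen vertices or None if more than k cycles exist (no-instance)."""
    X, visited = [], set()
    for start in sorted(live):
        if start in visited:
            continue
        p, v, z = start, start, start
        while True:
            visited.add(v)
            w1, w2 = (u for u in adj[v] if u in live)   # v's two cycle edges
            v, p = (w2 if w1 == p else w1), v           # step away from p
            z = min(z, v)
            if v == start:                              # cycle closed
                break
        X.append(z)
        if len(X) > k:
            return None
    return X

# a triangle {1,2,3} and a square {4,5,6,7}
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2},
       4: {5, 7}, 5: {4, 6}, 6: {5, 7}, 7: {6, 4}}
print(cut_cycles(adj, set(adj), 2))  # [1, 4]
```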
Step 2 runs in O(n log k) time: checking for membership, adding, removing and retrieving the largest vertex of X runs in O(log k) time, and this has to be done for up to n elements. Steps 3 and 4 run in O(mn log k) time: for each vertex the membership in X has to be checked in O(log k) time and, for each edge, Lemma 2.5 has to be called with the adaptation that edges to vertices of X have to be ignored. This sums up to O(mn log k) time. Observe that Steps 5 and 6 have the same running time, which is also the total runtime.

Concerning space, note that we use a constant number of variables including the algorithm of Lemma 2.5. Thus, O(log n) bits suffice for them. The number of vertices in X is bounded by k² + 3k − 2. In total our algorithm uses O(k² log n) bits. ◀

Part B (construct the kernel):
Let us say that a vertex x ∈ X and a tree are in touch once if x is adjacent to exactly one vertex of the tree. More generally, x and a tree are in touch ℓ times (ℓ ∈ ℕ) if x is adjacent to ℓ vertices of the tree. For an easier description, we define a partition T₀, T₁ and T₂ of the trees in G_X as follows: T₀ is the set of trees in G_X that are in touch at most once with at most one x ∈ X and are not in touch with any other x ∈ X. T₁ is the set of trees in G_X \ T₀ such that each vertex x ∈ X is in touch at most once with each tree. T₂ is the set of the remaining trees in G_X. In other words, each tree in T₂ is in touch at least twice with some vertex x ∈ X.

[Figure 4 panels: (a) trees in T₀; (b) trees in T₁; (c) trees in T₂.]
Figure 4
The different ways a tree can be in touch with a set X = {x₁, x₂, x₃, . . . }. To bound the number of trees in our kernel, we use Reduction Rules 1–3 to remove superfluous trees and Reduction Rules 4–6 to shrink the trees themselves.
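The partition into T₀, T₁ and T₂ amounts to counting touches with a multiset, as in this sketch of ours (`Counter` plays the role of the multiset S used later in the proof of Theorem 3.5; the function name `tree_type` is ours):

```python
from collections import Counter

def tree_type(tree_vertices, adj, X):
    """Classify one tree of G_X as 'T0', 'T1' or 'T2' (sketch). S counts,
    for every x in X, how many tree vertices x is adjacent to ('in touch')."""
    S = Counter(x for v in tree_vertices for x in adj[v] if x in X)
    if sum(S.values()) <= 1:
        return 'T0'                    # in touch at most once overall
    if all(c == 1 for c in S.values()):
        return 'T1'                    # every x in touch at most once
    return 'T2'                        # some x in touch at least twice

X = {'x1', 'x2'}
adj = {'a': {'b', 'x1'}, 'b': {'a', 'x1'},   # tree {a, b}: x1 touched twice
       'c': {'x1', 'x2'},                    # tree {c}: x1 and x2 once each
       'd': {'x2'}}                          # tree {d}: in touch only once
print(tree_type({'a', 'b'}, adj, X),
      tree_type({'c'}, adj, X),
      tree_type({'d'}, adj, X))              # prints: T2 T1 T0
```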
Rule 1
Remove every tree of T₀.

Rule 2
For every pair x₁, x₂ ∈ X, choose up to k + 2 trees of T₁ that are in touch with x₁ and with x₂ (if there are fewer, then choose all). Remove all remaining trees of T₁.

Rule 3

If x ∈ X is in touch with at least k + 1 different trees of T₂, then move x from X into F, reduce k by 1 and restart.

Rule 4
If 2k or more children of a vertex v are the roots of maximal subtrees in G_X such that each subtree is in touch with the same vertex x ∈ X, then take v into X and the pair {v, x} into an initially empty set Y. If a vertex w occurs in more than k + 1 pairs of Y, then move w from X into F, reduce k by 1, and restart. Otherwise, if |Y| ≥ k² − 1, return that (G, k) is a no-instance.

Rule 5
If Rule 4 does not apply and a tree T of G_X is in touch at least 2k(k + 1) times with the same vertex x ∈ X, then move x from X into F, reduce k by 1 and restart.

Rule 6
Apply Basic Rules 1 and 2 on vertices that are not connected to a vertex of X.

▶ Lemma 3.4.
Rules 1 – 6 are safe.
Proof.
Rule 1 is clearly safe since no vertex of a tree being in touch at most once with at most one vertex of X can be part of a cycle. We next argue that Rule 2 is safe. Assume that vertices x₁, x₂ ∈ X are in touch with a tree of T₁. If another tree exists that is in touch with the same vertices, we have a cycle. If ℓ ≥ k + 2 such trees exist, then we have ℓ disjoint paths from x₁ to x₂ and every feedback vertex set must contain x₁ or x₂. If we now remove a tree from T₁, then all pairs of vertices that are in touch with the tree already have “enough” common trees. Let X′ ⊆ X be the vertices that are in touch with the removed tree. Then all except one vertex of X′ are removed in every solution and thus no cycle passes through the removed tree. Therefore, Rule 2 is safe. We next focus on Rule 3. By definition, x ∈ X forms a cycle with each tree being in touch with x at least twice. Thus, if the rule applies, we have a (k + 1)-flower and are safe.

For Rule 4, note that at least 2k internally vertex-disjoint paths exist between vertex x and v. Thus, any feedback vertex set must contain x or v. Consider two pairs {v, x}, {v′, x′} ∈ Y and assume that the latter pair was added after the first pair into Y. Then all 2k paths between v′ and x′ are in the same subtree of G[V \ (X \ {v′, x′})]. This means that at most one of the 2k paths between v and x has common vertices with the 2k paths between v′ and x′. If we ignore that path, then all remaining paths between v and x as well as between v′ and x′ are pairwise internally vertex-disjoint. Assume that we have pairs {v₁, w}, . . . , {v_{k+1}, w} and w is moved into F. If we remove one path between all pairs except the last pair, then the paths are internally vertex-disjoint to the paths between the last pair. Furthermore, we have to remove one other path between all pairs except the last two pairs to make the paths internally disjoint to the paths between the second-last pair, etc. Altogether, we find 2k − (k − 1) = k + 1 paths between all pairs that are internally vertex-disjoint. Therefore, we have a flower of order k + 1 and it is correct to add w into F. Otherwise, if |Y| ≥ k² − 1, we can greedily choose (k² − 1)/(k − 1) = k + 1 pairs such that the vertices in all pairs are pairwise disjoint. Again, we have k + 1 paths between all pairs such that these paths are internally vertex-disjoint. Thus, one vertex of each of the k + 1 pairs must be in a solution and we have a no-instance.

We next consider Rule 5. Since Rule 4 cannot be applied, at most 2k − 1 children of a vertex v can be the roots of maximal subtrees in T that are in touch with the same vertex of X. Let U be a set of the vertices of T that are adjacent to x. Ignoring the parts of T that are not on a path between some u, u′ ∈ U, we obtain a tree with maximum degree Δ = 2k − 1. It is known that, given a tree T = (V_T, E_T) with maximum degree Δ and U ⊆ V_T, we can find ⌊|U|/(Δ + 1)⌋ pairs of vertices in U such that the paths in T between each pair are vertex-disjoint; for a proof see, e.g., Erlebach et al. [6, Lemma 2.4]. This means that we have an x-flower of order |U|/(2k) ≥ k + 1 and Rule 5 is safe. Clearly, also Rule 6 is safe. ◀

We next want to show that we get a kernel of size O(k⁹) if no rule applies any more. Clearly, |T₀| = 0 by Rule 1. After Rule 2, |T₁| ≤ |X|²(k + 2) holds because we have fewer than |X|² pairs, each of which has chosen at most k + 2 trees. Moreover, Rule 3 guarantees us that |T₂| ≤ |X|k since every x is connected to at most k trees of T₂. In total, we have at most |X|²(k + 2) + |X|k trees left in G_X. By Rule 5, each tree T is in touch less than 2k(k + 1) times with each x ∈ X. Thus, less than 2k(k + 1)|X| vertices of T are connected to vertices of X. By Rule 6, T shrinks to at most 2k(k + 1)|X| leaves and at most 4k(k + 1)|X| vertices in total. Thus, we have (|X|²(k + 2) + |X|k)(4k(k + 1)|X|) vertices in all trees and, with |X| = k² + 3k − 2, we get a kernel with O(k⁹) vertices and (|X|²(k + 2) + |X|k)(2k(k + 1)|X|) = O(k⁹) edges. After computing such a kernel it remains to apply the currently best kernelization algorithm for Feedback Vertex Set on (G′, k′) to reduce the kernel size to 2k² + k vertices.

▶ Theorem 3.5.
Given an n-vertex m-edge instance (G, k) of Feedback Vertex Set, there is an O(mn log k)-time, O(poly(k) log n)-bits kernelization that either outputs a kernel consisting of 2k² + k vertices or returns that (G, k) is a no-instance.
Proof.
Even if we restart the computation several times, we need to compute an approximated feedback vertex set X only once. Thus, by Theorem 3.3, this can be done in O(mn log k) time, which is the bottleneck of our runtime.

We heavily make use of Lemma 2.2. We want to run it on G_X, but only have G, F and X. So we modify the algorithm of Lemma 2.2 to skip over all edges that go to a vertex of F ∪ X.

Next, we apply Rules 1–3 to the trees in G_X to bound their number in the kernel. Due to our space bound we cannot store the selected trees, because they may have too many vertices. Instead we select a unique vertex of a tree as its representative, namely the smallest that is connected to a vertex of X. To store the relevant trees of T₁ we create a map M₁ where the keys are tuples (x₁, x₂) (x₁, x₂ ∈ X) and the values are initially empty sets containing representatives of relevant T₁ trees. To store the relevant trees of T₂ we create a map M₂ where the keys are vertices of X and the values are initially empty sets containing representatives of relevant T₂ trees. We use the following auxiliary structure to determine the type of a tree in G_X. Let S be an initially empty multiset that is able to report the number of times an element x ∈ X was added to S.

We (re)start the algorithm with empty maps M₁ and M₂. Iterate over each vertex x_i ∈ X, i = 1, 2, . . ., in an outer loop and over each neighbor w of x_i in an inner loop. Initialize an empty multiset S (if one already exists, delete it). Traverse the tree of G_X with w as its root and, whenever a vertex of the tree is adjacent to a vertex x ∈ X, add x into S. Moreover, determine the smallest vertex z of the tree that is connected to a vertex of X. After the traversal over the tree that z represents is done, we decide its type. If |S| ≤ 1, then z represents a tree of T₀ and thus we continue the iteration of the outer loop with the vertex x_{i+1}. (We realize Rule 1 by ignoring such trees.) If none of the elements in S was added more than once, the tree belongs to T₁ and we add z into M₁(x_i, x) for each x ∈ S if |M₁(x_i, x)| < k + 2. (We so realize Rule 2.) If x_i ∈ S was added at least twice into S, then the visited tree belongs to T₂. Therefore, we add z into M₂(x_i). After the inner loop is done, we realize Rule 3 as follows: if |M₂(x_i)| ≥ k + 1, remove x_i from X, put x_i into F, set k := k − 1 and restart. Once no restart occurs, M₁ and M₂ contain the representatives of all the relevant trees. Note that applications of Rules 4–6 do not introduce new trees (unless we have a restart), i.e., there is no need to check for Rules 1–3 again. Note furthermore that only the trees stored in M₂ are relevant for Rules 4 and 5.

Rule 4 is realized by an outer loop over each x ∈ X and then an inner loop over each vertex v in G_X. First determine the representative r of the tree containing v and check if r ∈ M₂. If so, count the number of subtrees below v that are in touch with x. Both can be done with an application of Lemma 2.2. After the inner loop we can easily realize all steps of Rule 4 applied to vertex v. For Rule 5, iterate over all x ∈ X in an outer loop and over all z ∈ M₂(x). After running the algorithm of Lemma 2.2 from z, we can easily test the condition of Rule 5 and run the required changes.

If none of the rules above applies, we construct a kernel by skipping over all vertices that are deleted by Rule 6. Let G′ be an initially empty mutable graph. Iterate over all representatives z stored in M₁ and M₂. By definition z is connected to a vertex of X and thus we add it to G′. Traverse the tree represented by z with z as its root. Before visiting a new vertex w, check with Lemma 2.2 if the subtree below w is in touch with some x ∈ X. If not, skip over w. Add each non-skipped edge as well as their endpoints into G′.
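The classification loop above (representatives, the maps M₁ and M₂, and the multiset S) can be sketched as follows. This is our plain in-memory simplification, not the space-efficient version: we loop over the trees of G_X directly instead of over each x_i and its neighbors, and we only record what Rules 2 and 3 need.

```python
from collections import Counter

def classify_trees(trees, adj, X, k):
    """Sketch of the loop realizing Rules 1-3. Each tree is represented by
    its smallest vertex adjacent to X. M1 maps a pair (x, x') to up to k+2
    representatives of T1-trees touching both (Rule 2); M2 maps x to
    representatives of T2-trees touching x at least twice. The returned set
    holds the vertices that Rule 3 moves from X into F (with a restart)."""
    M1, M2 = {}, {}
    for tree in trees:
        S = Counter(x for v in tree for x in adj[v] if x in X)
        if sum(S.values()) <= 1:
            continue                          # Rule 1: ignore T0-trees
        z = min(v for v in tree if any(u in X for u in adj[v]))
        if all(c == 1 for c in S.values()):   # a T1-tree
            for x1 in S:
                for x2 in S:
                    if x1 < x2 and len(M1.setdefault((x1, x2), [])) < k + 2:
                        M1[(x1, x2)].append(z)    # Rule 2: keep k+2 reps
        else:                                 # a T2-tree
            for x in (x for x, c in S.items() if c >= 2):
                M2.setdefault(x, []).append(z)
    rule3 = {x for x, reps in M2.items() if len(reps) >= k + 1}
    return M1, M2, rule3
```

In the space-efficient version the trees are not materialized; they are re-traversed with Lemma 2.2 and only the maps are kept.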
Let {u, v} be the parent edge of v and let {v, w} be its other edge. If the vertex v is of degree two in G′ and not connected to X, then remove v from G′, connect u and w by an edge and continue with the edges of vertex w. After obtaining G′, execute the kernelization of Iwata [12] to get a kernel of size 2k² + k.

The space requirement of the described algorithm is dominated by the map M₁, which has O(|X|²k) entries, and by the graph G′; both fit into O(poly(k) log n) bits.

Before we analyze the running time, note that the space bound above allows us to run the outer loops in parallel. In other words, we can use an array A: X → ℕ and, whenever we traverse a tree with Lemma 2.2, we increment A[x] for each edge to some x ∈ X. With this modification, Rules 2 and 3 together as well as Rule 4 run the algorithm of Lemma 2.2 once for each vertex of G_X, and Rules 5 and 6 run it at most once. Thus, our runtime is O(n) times the runtime of Lemma 2.2. Since we can restart the algorithm at most k times, O(mn log k) time suffices in total. ◀

References

[1] Tetsuo Asano, Taisuke Izumi, Masashi Kiyomi, Matsuo Konagaya, Hirotaka Ono, Yota Otachi, Pascal Schweitzer, Jun Tarui, and Ryuhei Uehara. Depth-first search using O(n) bits. In Proc. 25th International Symposium on Algorithms and Computation (ISAAC 2014), volume 8889 of LNCS, pages 553–564. Springer, 2014. doi:10.1007/978-3-319-13075-0_44.
[2] Ann Becker and Dan Geiger. Approximation algorithms for the loop cutset problem. In Proc. 10th Annual Conference on Uncertainty in Artificial Intelligence (UAI 1994), pages 60–68. Morgan Kaufmann, 1994.
[3] Sebastian Böcker, Sebastian Briesemeister, Quang Bao Anh Bui, and Anke Truß. Going weighted: Parameterized algorithms for cluster editing. Theor. Comput. Sci., 410(52):5467–5480, 2009. doi:10.1016/j.tcs.2009.05.006.
[4] Sankardeep Chakraborty, Kunihiko Sadakane, and Srinivasa Rao Satti. Optimal in-place algorithms for basic graph problems. In Proc. 31st International Workshop on Combinatorial Algorithms (IWOCA 2020), volume 12126 of LNCS, pages 126–139. Springer, 2020. doi:10.1007/978-3-030-48966-3_10.
[5] Amr Elmasry, Torben Hagerup, and Frank Kammer. Space-efficient basic graph algorithms. In Proc. 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015), volume 30 of LIPIcs, pages 288–301. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2015. doi:10.4230/LIPIcs.STACS.2015.288.
[6] Thomas Erlebach, Frank Kammer, Kelin Luo, Andrej Sajenko, and Jakob T. Spooner. Two moves per time step make a difference. In Proc. 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019), volume 132 of LIPIcs, pages 141:1–141:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPIcs.ICALP.2019.141.
[7] Stefan Fafianie and Stefan Kratsch. Streaming kernelization. In Proc. 39th International Symposium on Mathematical Foundations of Computer Science (MFCS 2014), Part II, volume 8635 of LNCS, pages 275–286. Springer, 2014. doi:10.1007/978-3-662-44465-8_24.
[8] Jens Gramm, Jiong Guo, Falk Hüffner, and Rolf Niedermeier. Graph-modeled data clustering: Fixed-parameter algorithms for clique generation. In Proc. 5th Italian Conference on Algorithms and Complexity (CIAC 2003), volume 2653 of LNCS, pages 108–119. Springer, 2003. doi:10.1007/3-540-44849-7_17.
[9] Torben Hagerup. Highly succinct dynamic data structures. In Proc. 22nd International Symposium on Fundamentals of Computation Theory (FCT 2019), volume 11651 of LNCS, pages 29–45. Springer, 2019. doi:10.1007/978-3-030-25027-0_3.
[10] Torben Hagerup, Frank Kammer, and Moritz Laudahn. Space-efficient Euler partition and bipartite edge coloring. Theor. Comput. Sci., 754:16–34, 2019.
[11] Pinar Heggernes, Pim van 't Hof, Benjamin Lévêque, Daniel Lokshtanov, and Christophe Paul. Contracting graphs to paths and trees. Algorithmica, 68(1):109–132, 2014. doi:10.1007/s00453-012-9670-2.
[12] Yoichi Iwata. Linear-time kernelization for feedback vertex set. In Proc. 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017), volume 80 of LIPIcs, pages 68:1–68:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2017. doi:10.4230/LIPIcs.ICALP.2017.68.
[13] Taisuke Izumi and Yota Otachi. Sublinear-space lexicographic depth-first search for bounded treewidth graphs and planar graphs. In Proc. 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020), LIPIcs. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020. To appear.
[14] Frank Kammer, Johannes Meintrup, and Andrej Sajenko. Space-efficient vertex separators for treewidth. CoRR, abs/1907.00676, 2019. arXiv:1907.00676.
[15] Frank Kammer and Andrej Sajenko. Simple 2^f-color choice dictionaries. In Proc. 29th International Symposium on Algorithms and Computation (ISAAC 2018), volume 123 of LIPIcs, pages 66:1–66:12. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2018. doi:10.4230/LIPIcs.ISAAC.2018.66.
[16] Frank Kammer and Andrej Sajenko. Space efficient (graph) algorithms. https://github.com/thm-mni-ii/sea, 2018.
[17] Frank Kammer and Andrej Sajenko. Sorting and ranking of self-delimiting numbers with applications to tree isomorphism, 2020. arXiv:2002.07287.
[18] Omer Reingold. Undirected connectivity in log-space. J. ACM, 55(4):17:1–17:24, September 2008. doi:10.1145/1391289.1391291.
[19] Eyal Rozenman and Salil P. Vadhan. Derandomized squaring of graphs. In Proc. 8th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX 2005) and 9th International Workshop on Randomization and Computation (RANDOM 2005), volume 3624 of LNCS, pages 436–447. Springer, 2005. doi:10.1007/11538462_37.
[20] Bin Sheng and Yuefang Sun. An improved linear kernel for the cycle contraction problem. Inf. Process. Lett., 149:14–18, 2019. doi:10.1016/j.ipl.2019.05.003.