Featured Researches

Data Structures And Algorithms

Close relatives of Feedback Vertex Set without single-exponential algorithms parameterized by treewidth

The Cut & Count technique and the rank-based approach have led to single-exponential FPT algorithms parameterized by treewidth, that is, running in time 2^{O(tw)} · n^{O(1)}, for Feedback Vertex Set and connected versions of classical graph problems (such as Vertex Cover and Dominating Set). We show that Subset Feedback Vertex Set, Subset Odd Cycle Transversal, Restricted Edge-Subset Feedback Edge Set, Node Multiway Cut, and Multiway Cut are unlikely to have such running times. More precisely, we match algorithms running in time 2^{O(tw log tw)} · n^{O(1)} with tight lower bounds under the Exponential-Time Hypothesis (ETH), ruling out 2^{o(tw log tw)} · n^{O(1)}, where n is the number of vertices and tw is the treewidth of the input graph. Our algorithms extend to the weighted case, while our lower bounds also hold for the larger parameter pathwidth and do not require weights. We also show that, in contrast to Odd Cycle Transversal, there is no 2^{o(tw log tw)} · n^{O(1)}-time algorithm for Even Cycle Transversal under the ETH.
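
For concreteness, the gap at stake can be written out in standard notation (a display we add for readability; tw denotes treewidth):

```latex
% Single-exponential regime, achieved by Cut & Count / the rank-based
% approach for Feedback Vertex Set and related connectivity problems:
\[ 2^{O(\mathrm{tw})} \cdot n^{O(1)} \]
% Slightly superexponential regime, shown tight for the subset and
% multiway problems above: algorithms running in
\[ 2^{O(\mathrm{tw}\log \mathrm{tw})} \cdot n^{O(1)} \]
% are matched by ETH-based lower bounds ruling out
\[ 2^{o(\mathrm{tw}\log \mathrm{tw})} \cdot n^{O(1)}. \]
```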

Read more
Data Structures And Algorithms

Clustering under Perturbation Stability in Near-Linear Time

We consider the problem of center-based clustering in low-dimensional Euclidean spaces under the perturbation stability assumption. An instance is α-stable if the underlying optimal clustering continues to remain optimal even when all pairwise distances are arbitrarily perturbed by a factor of at most α. Our main contribution is in presenting efficient exact algorithms for α-stable clustering instances whose running times depend near-linearly on the size of the data set when α ≥ 2 + √3. For k-center and k-means problems, our algorithms also achieve polynomial dependence on the number of clusters, k, when α ≥ 2 + √3 + ε for any constant ε > 0 in any fixed dimension. For k-median, our algorithms have polynomial dependence on k for α > 5 in any fixed dimension, and for α ≥ 2 + √3 in two dimensions. Our algorithms are simple, and only require applying techniques such as local search or dynamic programming to a suitably modified metric space, combined with careful choice of data structures.
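
As a rough illustration of one ingredient mentioned above, here is a minimal single-swap local-search sketch for k-median (the function names, 1-D metric, and brute-force swap loop are ours; the paper's algorithms additionally run such routines on a modified metric with careful data structures to reach near-linear time):

```python
import itertools

def kmedian_cost(points, centers):
    """Sum over points of the distance to the nearest center (k-median objective)."""
    return sum(min(abs(p - c) for c in centers) for p in points)

def single_swap_local_search(points, k):
    """Classical 1-swap local search for k-median on a 1-D point set.

    Repeatedly replaces one center by one non-center while the objective
    improves. On arbitrary instances this is only an approximation; on
    sufficiently stable instances (run on a suitably modified metric, as in
    the paper) local search recovers the optimal clustering exactly.
    """
    centers = set(points[:k])                      # arbitrary initial centers
    improved = True
    while improved:
        improved = False
        for c, p in itertools.product(list(centers), points):
            if p in centers:
                continue
            candidate = (centers - {c}) | {p}      # swap c out, p in
            if kmedian_cost(points, candidate) < kmedian_cost(points, centers):
                centers, improved = candidate, True
                break
    return centers

print(single_swap_local_search([0.0, 0.1, 0.2, 5.0, 5.1, 9.0], k=3))
```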

Read more
Data Structures And Algorithms

Co-clustering Vertices and Hyperedges via Spectral Hypergraph Partitioning

We propose a novel method to co-cluster the vertices and hyperedges of hypergraphs with edge-dependent vertex weights (EDVWs). In this hypergraph model, the contribution of every vertex to each of its incident hyperedges is represented through an edge-dependent weight, conferring the model higher expressivity than the classical hypergraph. In our method, we leverage random walks with EDVWs to construct a hypergraph Laplacian and use its spectral properties to embed vertices and hyperedges in a common space. We then cluster these embeddings to obtain our proposed co-clustering method, of particular relevance in applications requiring the simultaneous clustering of data entities and features. Numerical experiments using real-world data demonstrate the effectiveness of our proposed approach in comparison with state-of-the-art alternatives.
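
A minimal numerical sketch of the pipeline described above, assuming a toy vertex-by-hyperedge EDVW matrix `Gamma` and hyperedge weights `omega` (both invented here for illustration); the exact Laplacian the paper derives from its random walk may differ in normalization:

```python
import numpy as np

# Toy hypergraph: 5 vertices, 3 hyperedges. Gamma[v, e] is the edge-dependent
# weight of vertex v within hyperedge e (0 means v is not incident to e).
Gamma = np.array([[2., 1., 0.],
                  [1., 0., 0.],
                  [1., 2., 1.],
                  [0., 1., 2.],
                  [0., 0., 1.]])
omega = np.array([1., 2., 1.])         # hyperedge weights

# Random walk with EDVWs: from vertex v, pick an incident hyperedge e with
# probability proportional to omega[e], then move to vertex u in e with
# probability proportional to Gamma[u, e].
pick_edge = (Gamma > 0) * omega
pick_edge /= pick_edge.sum(axis=1, keepdims=True)
within_edge = Gamma / Gamma.sum(axis=0, keepdims=True)
P = pick_edge @ within_edge.T          # vertex-to-vertex transition matrix

# A simple random-walk Laplacian and its spectral embedding (smallest
# nontrivial eigenvectors); the paper embeds hyperedges in the same space too.
L = np.eye(P.shape[0]) - (P + P.T) / 2
eigvals, eigvecs = np.linalg.eigh(L)
embedding = eigvecs[:, 1:3]            # 2-D vertex embedding, ready for k-means
print(np.round(embedding, 3))
```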

Read more
Data Structures And Algorithms

Coalgebra Encoding for Efficient Minimization

Recently, we developed an efficient generic partition refinement algorithm, which computes behavioural equivalence on a state-based system given as an encoded coalgebra, and implemented it in the tool CoPaR. Here we extend this to a fully fledged minimization algorithm and tool by integrating two new aspects: (1) the computation of the transition structure on the minimized state set, and (2) the computation of the reachable part of the given system. In our generic coalgebraic setting, these two aspects turn out to be surprisingly non-trivial, requiring us to extend the previous theory. In particular, we identify a sufficient condition on encodings of coalgebras, and we show how to augment the existing interface, which encapsulates computations specific to the coalgebraic type functor, to make the above extensions possible. Both extensions run in linear time.
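
For readers unfamiliar with partition refinement, here is a minimal Moore-style refinement loop for the simplest coalgebraic instance, deterministic automata (our sketch only; CoPaR's generic algorithm handles arbitrary encoded set functors, which this does not attempt):

```python
def refine(states, accepting, delta, alphabet):
    """Moore-style partition refinement computing behavioural equivalence
    (language equivalence) of a DFA, the simplest coalgebra instance.

    states:    iterable of states
    accepting: set of accepting states
    delta:     dict mapping (state, letter) -> state
    Returns a dict mapping each state to its block id in the final partition.
    """
    # Initial partition: accepting vs. non-accepting (the coalgebra's output).
    block = {q: int(q in accepting) for q in states}
    while True:
        # Refine: two states stay together iff all successors agree blockwise.
        signature = {q: (block[q],) + tuple(block[delta[q, a]] for a in alphabet)
                     for q in states}
        ids = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_block = {q: ids[signature[q]] for q in states}
        if new_block == block:          # fixed point: behavioural equivalence
            return block
        block = new_block

# Usage: a 3-state DFA over {a} where states 1 and 2 are behaviourally equal.
delta = {(0, 'a'): 1, (1, 'a'): 2, (2, 'a'): 1}
print(refine([0, 1, 2], {1, 2}, delta, ['a']))
```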

Read more
Data Structures And Algorithms

Competitive Analysis for Two Variants of Online Metric Matching Problem

In this paper, we study two variants of the online metric matching problem. The first is the online metric matching problem where all servers are placed at one of two positions in the metric space. We show that a simple greedy algorithm achieves a competitive ratio of 3 and give a matching lower bound. The second is the online facility assignment problem on a line, where servers have capacities, servers and requests are placed on a 1-dimensional line, and the distances between any two consecutive servers are the same. We show lower bounds of 1 + √6 (> 3.44948), (4 + √73)/3 (> 4.18133) and 13/3 (> 4.33333) on the competitive ratio when the number of servers is 3, 4 and 5, respectively.
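
A minimal sketch of the greedy rule analyzed in the first result: each request is matched to the nearest currently-free server (on a line, with unit capacities, for simplicity). The instance below, with servers at only two positions, is our own illustration:

```python
def greedy_online_matching(servers, requests):
    """Match each arriving request to the nearest currently-free server.

    servers:  list of server positions (each with capacity 1 here)
    requests: list of request positions, in arrival order
    Returns the total matching cost. For servers at only two distinct
    positions, the paper shows this greedy rule is exactly 3-competitive.
    """
    free = list(servers)
    cost = 0.0
    for r in requests:
        s = min(free, key=lambda pos: abs(pos - r))   # nearest free server
        free.remove(s)
        cost += abs(s - r)
    return cost

# Servers at two positions (0 and 10), requests arriving one by one.
print(greedy_online_matching([0, 0, 10], [1, 1, 1]))
```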

Read more
Data Structures And Algorithms

Complexity of Scheduling Few Types of Jobs on Related and Unrelated Machines

Scheduling jobs on machines to minimize the makespan, the sum of weighted completion times, or a norm of the load vector is among the oldest and most fundamental tasks in combinatorial optimization. Since all of these problems are NP-hard in general, much attention has been given to the regime where there is only a small number k of job types, but possibly a large number n of jobs; this is the few-job-types, high-multiplicity regime. Despite many positive results, the hardness boundary of this regime was not understood until now. We show that makespan minimization on uniformly related machines (Q|HM|C_max) is NP-hard already with 6 job types, and that the related Cutting Stock problem is NP-hard already with 8 item types. For the more general unrelated machines model (R|HM|C_max), we show that if either the largest job size p_max or the number of jobs n is polynomially bounded in the instance size |I|, there are algorithms with complexity |I|^{poly(k)}. Our main result is that this is unlikely to be improved, because Q||C_max is W[1]-hard parameterized by k already when n, p_max, and the numbers describing the speeds are polynomial in |I|; the same holds for R|HM|C_max (without speeds) when the job sizes matrix has rank 2. Our positive and negative results also extend to the objectives ℓ_2-norm minimization of the load vector and, partially, the sum of weighted completion times ∑ w_j C_j. Along the way, we answer affirmatively the question of whether makespan minimization on identical machines (P||C_max) is fixed-parameter tractable parameterized by k, extending our understanding of this fundamental problem. Together with our hardness results for Q||C_max, this implies that the complexity of P|HM|C_max is the only remaining open case.
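
To fix notation, a high-multiplicity instance lists each job type once with its multiplicity. The sketch below shows that encoding together with the classical LPT heuristic for P||C_max; it is a baseline illustration only, not the paper's parameterized algorithms, and it deliberately expands the multiplicities, which defeats the compact encoding (noted in the comment):

```python
import heapq

def lpt_makespan(job_types, machines):
    """Longest-Processing-Time heuristic for P||C_max on a high-multiplicity
    instance: job_types is a list of (size, multiplicity) pairs, so the
    input has length O(k log n) rather than O(n).

    Note: we expand multiplicities below for clarity, which is exponential
    in the encoding length; the whole point of the high-multiplicity regime
    is to avoid exactly this expansion.
    """
    jobs = sorted((s for s, m in job_types for _ in range(m)), reverse=True)
    loads = [0] * machines                 # min-heap of machine loads
    heapq.heapify(loads)
    for s in jobs:
        least = heapq.heappop(loads)       # machine with the smallest load
        heapq.heappush(loads, least + s)   # assign the next-largest job there
    return max(loads)

# k = 3 job types with multiplicities 2, 3, 6 on 3 identical machines.
print(lpt_makespan([(7, 2), (5, 3), (2, 6)], machines=3))
```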

Read more
Data Structures And Algorithms

Computational phase transitions in sparse planted problems?

In recent times, the cavity method, a statistical-physics-inspired heuristic, has been successful in conjecturing computational thresholds that have since been rigorously confirmed, such as for community detection in the sparse regime of the stochastic block model. Inspired by this, we investigate the predictions made by the cavity method for the algorithmic problems of detecting and recovering a planted signal in a general model of sparse random graphs. The model we study generalizes the well-understood case of the stochastic block model, the less well-understood case of random constraint satisfaction problems with planted assignments, as well as "semi-supervised" variants of these models. Our results include: (i) a conjecture about a precise criterion for when the problems of detection and recovery should be algorithmically tractable, arising from a heuristic analysis of when a particular fixed point of the belief propagation algorithm is stable; (ii) a rigorous polynomial-time algorithm for the problem of detection: distinguishing a graph with a planted signal from one without; (iii) a rigorous polynomial-time algorithm for the problem of recovery: outputting a vector that correlates with the planted signal significantly better than a random guess would. The rigorous algorithms are based on the spectra of matrices that arise as the derivatives of the belief propagation update rule. An interesting unanswered question raised by this work is that of obtaining evidence of computational hardness for convex relaxations whenever hardness is predicted by the cavity method.
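
In the best-known special case, the stochastic block model, the matrix arising as the derivative of the BP update is closely related to the non-backtracking matrix; a minimal construction of that matrix (our illustration, not the paper's exact operator) is:

```python
import numpy as np

def nonbacktracking_matrix(edges):
    """Build the non-backtracking matrix B of an undirected graph.

    B is indexed by directed edges; B[(u->v), (v->w)] = 1 whenever w != u.
    Its spectrum underlies spectral detection of planted signals in sparse
    graphs, in regimes where plain adjacency spectra fail.
    """
    directed = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    index = {e: i for i, e in enumerate(directed)}
    B = np.zeros((len(directed), len(directed)))
    for (u, v) in directed:
        for (x, w) in directed:
            if x == v and w != u:          # (u->v) then (v->w), no U-turn
                B[index[(u, v)], index[(x, w)]] = 1.0
    return B

# Tiny example: a triangle plus a pendant edge.
B = nonbacktracking_matrix([(0, 1), (1, 2), (2, 0), (2, 3)])
print(np.round(np.sort(np.abs(np.linalg.eigvals(B)))[-3:], 3))  # top |eigenvalues|
```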

Read more
Data Structures And Algorithms

Computing L(p,1)-Labeling with Combined Parameters

Given a graph, an L(p,1)-labeling of the graph is an assignment f from the vertex set to the set of nonnegative integers such that, for any pair of vertices (u,v), |f(u) − f(v)| ≥ p if u and v are adjacent, and f(u) ≠ f(v) if u and v are at distance 2. The L(p,1)-labeling problem is to minimize the span of f (i.e., max_{u∈V} f(u) − min_{u∈V} f(u) + 1). It is known to be NP-hard even for graphs of maximum degree 3 or graphs with treewidth 2, whereas it is fixed-parameter tractable with respect to vertex cover number. Since vertex cover number is one of the strongest parameters, there is a large gap between tractability and intractability from the viewpoint of parameterization. To fill this gap, in this paper, we propose new fixed-parameter algorithms for L(p,1)-Labeling parameterized by the twin cover number plus the maximum clique size, and by the treewidth plus the maximum degree. These algorithms reduce the gap in terms of several combinations of parameters.
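
A direct checker for the definition above (a minimal sketch; the adjacency-set representation and the p = 2 example are ours):

```python
from itertools import combinations

def is_valid_labeling(adj, f, p):
    """Check that f is an L(p,1)-labeling of the graph given by adjacency
    sets adj: |f(u)-f(v)| >= p for adjacent u, v, and f(u) != f(v) for
    vertices at distance exactly 2 (i.e., sharing a common neighbour)."""
    for u, v in combinations(adj, 2):
        if v in adj[u]:                               # adjacent
            if abs(f[u] - f[v]) < p:
                return False
        elif adj[u] & adj[v]:                         # distance exactly 2
            if f[u] == f[v]:
                return False
    return True

def span(f):
    """Span of a labeling, max f - min f + 1, the quantity being minimized."""
    return max(f.values()) - min(f.values()) + 1

# Path on four vertices with p = 2: labels 0, 2, 4, 0 are valid.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
f = {0: 0, 1: 2, 2: 4, 3: 0}
print(is_valid_labeling(adj, f, p=2), span(f))
```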

Read more
Data Structures And Algorithms

Computing Betweenness Centrality in Link Streams

Betweenness centrality is one of the most important concepts in graph analysis. It was recently extended to link streams, a graph generalization in which links arrive over time. However, its computation raises non-trivial issues, due in particular to the fact that time is considered continuous. We provide here the first algorithms to compute this generalized betweenness centrality, as well as several companion algorithms that have their own interest. They run in polynomial time and space; we illustrate them on typical examples and provide an implementation.
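
For reference, the static notion that the paper generalizes can be computed with off-the-shelf tools (a baseline only; the link-stream algorithms must handle continuous time and are not reproduced here):

```python
import networkx as nx

# Static betweenness centrality (Brandes' algorithm) on a small graph.
# The link-stream generalization replaces shortest paths by paths over
# continuous time, which this static baseline does not capture.
G = nx.Graph([(0, 1), (1, 2), (2, 3), (1, 3)])
print(nx.betweenness_centrality(G, normalized=False))
```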

Read more
Data Structures And Algorithms

Computing Weighted Subset Transversals in H-Free Graphs

For the Odd Cycle Transversal problem, the task is to find a small set S of vertices in a graph that intersects every cycle of odd length. The Subset Odd Cycle Transversal problem requires S to intersect only those odd cycles that include a vertex of a distinguished vertex subset T. If we are given weights for the vertices, we ask instead that S has small weight: this is the problem Weighted Subset Odd Cycle Transversal. We prove an almost-complete complexity dichotomy for Weighted Subset Odd Cycle Transversal for graphs that do not contain a graph H as an induced subgraph. Our general approach can also be used for Weighted Subset Feedback Vertex Set, which enables us to generalize a recent result of Papadopoulos and Tzimas.
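
A feasibility checker for the unweighted condition (our sketch, using the classical fact that in a 2-connected non-bipartite graph every vertex lies on an odd cycle):

```python
import networkx as nx

def is_subset_oct(G, T, S):
    """Check that S intersects every odd cycle of G passing through T.

    Equivalently: after deleting S, no biconnected block containing a
    vertex of T may be non-bipartite, since in a 2-connected non-bipartite
    graph every vertex lies on some odd cycle.
    """
    H = G.copy()
    H.remove_nodes_from(S)
    rest = set(T) - set(S)
    for block in nx.biconnected_components(H):
        if block & rest and not nx.is_bipartite(H.subgraph(block)):
            return False
    return True

# Triangle 0-1-2 plus a square 2-3-4-5; only the triangle is an odd cycle.
G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 2)])
print(is_subset_oct(G, T={0}, S={1}))   # True: deleting 1 kills the odd cycle
print(is_subset_oct(G, T={0}, S={4}))   # False: the triangle through 0 survives
```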

Read more
