Featured Research

Data Structures And Algorithms

Disjoint Shortest Paths with Congestion on DAGs

In the k-Disjoint Shortest Paths problem, we are given a set of source–terminal vertex pairs {(s_i, t_i) | 1 ≤ i ≤ k} and asked to find paths P_1, …, P_k such that each path P_i is a shortest path from s_i to t_i and every vertex of the graph routes at most one of these paths. We introduce a relaxation of the problem, namely k-Disjoint Shortest Paths with Congestion-c, in which every vertex is allowed to route up to c paths. We provide a simple algorithm that solves the k-Disjoint Shortest Paths with Congestion-c problem in time f(k)·n^O(k−c) on DAGs. Our algorithm builds on the earlier algorithm for k-Disjoint Paths with Congestion-c [IPL 2019, MFCS 2016], but we significantly simplify its argument. We also discuss the hardness of the problem, obtaining a better lower bound than the previous one [IPL 2019]. We believe our simplified method of analysis can be helpful in dealing with similar problems on general undirected graphs.
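To make the Congestion-c relaxation concrete, here is a brute-force sketch (not the paper's f(k)·n^O(k−c) algorithm) that, on a small unit-length DAG, enumerates all shortest paths for each terminal pair and looks for a combination in which every vertex routes at most c paths. The graph, pair names, and helper names are illustrative.

```python
from collections import Counter, deque
from itertools import product

def shortest_paths(adj, s, t):
    """All shortest s->t paths in a unit-length DAG (BFS distances + tight-edge DFS)."""
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    paths, stack = [], [[s]]
    while stack:
        p = stack.pop()
        u = p[-1]
        if u == t:
            paths.append(p)
            continue
        for v in adj.get(u, []):
            # follow only edges that stay on some shortest path toward t
            if v in dist and t in dist and dist[v] == dist[u] + 1 and dist[v] <= dist[t]:
                stack.append(p + [v])
    return paths

def congested_dsp(adj, pairs, c):
    """Find shortest paths P_i, one per (s_i, t_i), with every vertex on <= c of them."""
    options = [shortest_paths(adj, s, t) for s, t in pairs]
    for combo in product(*options):
        load = Counter(v for p in combo for v in p)
        if all(m <= c for m in load.values()):
            return combo
    return None
```

For example, if both terminal pairs must cross a single middle vertex, the instance is infeasible with c = 1 but feasible with c = 2, which is exactly the trade-off the relaxation captures.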


Distributed Algorithms for Matching in Hypergraphs

We study the d-Uniform Hypergraph Matching (d-UHM) problem: given an n-vertex hypergraph G where every hyperedge has size d, find a maximum-cardinality set of disjoint hyperedges. For d ≥ 3, the problem of finding the maximum matching is NP-complete, and was one of Karp's 21 NP-complete problems. In this paper we are interested in the problem of finding matchings in hypergraphs in the massively parallel computation (MPC) model, a common abstraction of MapReduce-style computation. In this model, we present the first three parallel algorithms for d-Uniform Hypergraph Matching, and we analyse them in terms of resources such as memory usage, rounds of communication needed, and approximation ratio. The highlights include:
• An O(log n)-round d-approximation algorithm that uses O(nd) space per machine.
• A 3-round O(d^2)-approximation algorithm that uses Õ(√(nm)) space per machine.
• A 3-round algorithm that computes a subgraph containing a (d−1+1/d)^2-approximation, using Õ(√(nm)) space per machine for linear hypergraphs, and Õ(n√(nm)) in general.
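The d-approximation guarantee in the first bullet matches the classical sequential greedy baseline: a maximal matching in a d-uniform hypergraph is a d-approximation, because each chosen hyperedge of size d can block at most d hyperedges of an optimal matching. A minimal sketch of that baseline (not the paper's MPC algorithms; names are illustrative):

```python
def greedy_hypergraph_matching(hyperedges):
    """Greedy maximal matching: keep a hyperedge iff it touches no matched vertex.

    For d-uniform hypergraphs this is a d-approximation of a maximum matching.
    """
    matched, matching = set(), []
    for e in hyperedges:
        if matched.isdisjoint(e):
            matching.append(e)
            matched.update(e)
    return matching
```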


Distribution-Free Models of Social Networks

The structure of large-scale social networks has predominantly been articulated using generative models, a form of average-case analysis. This chapter surveys recent proposals of more robust models of such networks. These models posit deterministic and empirically supported combinatorial structure rather than a specific probability distribution. We discuss the formal definitions of these models and how they relate to empirical observations in social networks, as well as the known structural and algorithmic results for the corresponding graph classes.


Distributional Analysis

In distributional or average-case analysis, the goal is to design an algorithm with good-on-average performance with respect to a specific probability distribution. Distributional analysis can be useful for the study of general-purpose algorithms on "non-pathological" inputs, and for the design of specialized algorithms in applications in which there is detailed understanding of the relevant input distribution. For some problems, however, pure distributional analysis encourages "overfitting" an algorithmic solution to a particular distributional assumption and a more robust analysis framework is called for. This chapter presents numerous examples of the pros and cons of distributional analysis, highlighting some of its greatest hits while also setting the stage for the hybrids of worst- and average-case analysis studied in later chapters.


Diverse Collections in Matroids and Graphs

We investigate the parameterized complexity of finding diverse sets of solutions to three fundamental combinatorial problems, two from the theory of matroids and the third from graph theory. The input to the Weighted Diverse Bases problem consists of a matroid M, a weight function ω: E(M) → ℕ, and integers k and d. The task is to decide if there is a collection of k bases B_1, …, B_k of M such that the weight of the symmetric difference of any pair of these bases is at least d. This is a diverse variant of the classical matroid base packing problem. The input to the Weighted Diverse Common Independent Sets problem consists of two matroids M_1, M_2 defined on the same ground set E, a weight function ω: E → ℕ, and integers k and d. The task is to decide if there is a collection of k common independent sets I_1, …, I_k of M_1 and M_2 such that the weight of the symmetric difference of any pair of these sets is at least d. This is motivated by the classical weighted matroid intersection problem. The input to the Diverse Perfect Matchings problem consists of a graph G and integers k and d. The task is to decide if G contains k perfect matchings M_1, …, M_k such that the symmetric difference of any two of these matchings is at least d. We show that Weighted Diverse Bases and Weighted Diverse Common Independent Sets are both NP-hard, and derive fixed-parameter tractable (FPT) algorithms for all three problems with (k, d) as the parameter.
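All three problems share one diversity condition: every pair of solutions in the collection must have a symmetric difference of weight at least d. A small checker for that condition (only the shared definition, not any of the FPT algorithms; function names are illustrative):

```python
from itertools import combinations

def sym_diff_weight(A, B, w):
    """Total weight of elements in exactly one of the two solution sets."""
    return sum(w[e] for e in set(A) ^ set(B))

def is_diverse(collection, w, d):
    """True iff every pair of solutions differs by weight at least d."""
    return all(sym_diff_weight(A, B, w) >= d
               for A, B in combinations(collection, 2))
```

For unweighted variants such as Diverse Perfect Matchings, take w[e] = 1 for every element, so the condition becomes |M_i Δ M_j| ≥ d.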


Diverse Pairs of Matchings

We initiate the study of the Diverse Pair of (Maximum/Perfect) Matchings problems, which, given a graph G and an integer k, ask whether G has two (maximum/perfect) matchings whose symmetric difference is at least k. Diverse Pair of Matchings (asking for two not-necessarily maximum or perfect matchings) is NP-complete on general graphs if k is part of the input, and we consider two restricted variants. First, we show that on bipartite graphs the problem is polynomial-time solvable, and second, we show that Diverse Pair of Maximum Matchings is FPT parameterized by k. We round off the work by showing that Diverse Pair of Matchings has a kernel on O(k^2) vertices.
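On tiny instances the decision problem can be checked by brute force, which makes the definition concrete: enumerate all matchings and look for a pair whose symmetric difference has at least k edges. A sketch under that exhaustive-search assumption (exponential time; not any of the paper's algorithms):

```python
from itertools import combinations

def is_matching(edges):
    """True iff no vertex is covered twice."""
    seen = set()
    for u, v in edges:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def diverse_pair(edges, k):
    """Brute force: two matchings M1, M2 of the graph with |M1 Δ M2| >= k, or None."""
    matchings = [frozenset(s)
                 for r in range(len(edges) + 1)
                 for s in combinations(edges, r)
                 if is_matching(s)]
    for M1 in matchings:
        for M2 in matchings:
            if len(M1 ^ M2) >= k:
                return M1, M2
    return None
```

On the path 1–2–3–4, the matchings {(2,3)} and {(1,2),(3,4)} realize the maximum symmetric difference of 3, so the answer is yes for k ≤ 3 and no for k = 4.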


Dual Half-integrality for Uncrossable Cut Cover and its Application to Maximum Half-Integral Flow

Given an edge-weighted graph and a forest F, the 2-edge connectivity augmentation problem is to pick a minimum-weight set of edges E′ such that every connected component of E′ ∪ F is 2-edge connected. Williamson et al. gave a 2-approximation algorithm (WGMV) for this problem using the primal-dual schema. We show that when edge weights are integral, the WGMV procedure can be modified to obtain a half-integral dual. The 2-edge connectivity augmentation problem has an interesting connection to routing flow in graphs where the union of supply and demand is planar. The half-integrality of the dual leads to a tight 2-approximate max-half-integral-flow min-multicut theorem.
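The target property the augmentation must establish is easy to verify: a connected component is 2-edge connected exactly when it contains no bridge (cut edge). A small bridge-finding checker via DFS low-link values (a standard verification routine, not the WGMV algorithm; helper names are illustrative):

```python
def make_adj(edges):
    """Adjacency lists carrying edge ids, so parallel edges are handled correctly."""
    adj = {}
    for i, (u, v) in enumerate(edges):
        adj.setdefault(u, []).append((i, v))
        adj.setdefault(v, []).append((i, u))
    return adj

def bridges(adj):
    """All bridges of the graph; a component is 2-edge connected iff it has none."""
    disc, low, out, timer = {}, {}, [], [0]

    def dfs(u, parent_edge):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for i, v in adj[u]:
            if i == parent_edge:          # do not reuse the edge we came in on
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, i)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:      # no back edge jumps over (u, v)
                    out.append((u, v))

    for u in adj:
        if u not in disc:
            dfs(u, None)
    return out
```

A cycle has no bridges (2-edge connected), while every edge of a path is a bridge.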


Dynamic Geometric Independent Set

We present fully dynamic approximation algorithms for the Maximum Independent Set problem on several types of geometric objects: intervals on the real line, arbitrary axis-aligned squares in the plane, and axis-aligned d-dimensional hypercubes. It is known that a maximum independent set of a collection of n intervals can be found in O(n log n) time, while the problem is already NP-hard for a set of unit squares. Moreover, the problem is inapproximable on many important graph families, but admits a PTAS for a set of arbitrary pseudo-disks. Therefore, a fundamental question in computational geometry is whether it is possible to maintain an approximate maximum independent set of a set of dynamic geometric objects in truly sublinear time per insertion or deletion. In this work, we answer this question in the affirmative for intervals, squares, and hypercubes. First, we show that for intervals a (1+ε)-approximate maximum independent set can be maintained with logarithmic worst-case update time. This is achieved by maintaining a locally optimal solution using a constant number of constant-size exchanges per update. We then show how our interval structure can be used to design a data structure for maintaining an expected constant-factor approximate maximum independent set of axis-aligned squares in the plane, with polylogarithmic amortized update time. Our approach generalizes to d-dimensional hypercubes, providing an O(4^d)-approximation with polylogarithmic update time. These are the first approximation algorithms for dynamic geometric objects of arbitrary size; previous results required bounded size ratios to obtain polylogarithmic update time. Furthermore, it is known that our results for squares (and hypercubes) cannot be improved to a (1+ε)-approximation with the same update time.
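The O(n log n) static result for intervals mentioned above is the classical earliest-deadline greedy: sort by right endpoint and take every interval disjoint from the last one chosen. A sketch of that static baseline, treating intervals as closed so that shared endpoints count as overlapping (this is the exact static analogue, not the paper's dynamic data structure):

```python
def max_independent_intervals(intervals):
    """Exact maximum independent set of closed intervals, O(n log n).

    Greedy by right endpoint: each chosen interval ends as early as possible,
    so it excludes no more intervals than any alternative choice would.
    """
    chosen, last_end = [], float('-inf')
    for l, r in sorted(intervals, key=lambda iv: iv[1]):
        if l > last_end:          # disjoint from everything chosen so far
            chosen.append((l, r))
            last_end = r
    return chosen
```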


Dynamic Longest Increasing Subsequence and the Erdős–Szekeres Partitioning Problem

In this paper, we provide new approximation algorithms for dynamic variations of the longest increasing subsequence (LIS) problem and the complementary distance to monotonicity (DTM) problem. In this setting, operations of the following form arrive sequentially: (i) add an element, (ii) remove an element, or (iii) substitute one element for another. At every point in time, the algorithm maintains an approximation to the longest increasing subsequence (or distance to monotonicity). We present a (1+ε)-approximation algorithm for DTM with polylogarithmic worst-case update time, and a constant-factor approximation algorithm for LIS with worst-case update time Õ(n^ε) for any constant ε > 0, where n denotes the size of the array at the time the operation arrives. Our dynamic algorithm for LIS leads to an almost optimal algorithm for the Erdős–Szekeres partitioning problem. The Erdős–Szekeres partitioning problem was introduced by Erdős and Szekeres in 1935 and was known to be solvable in time O(n^1.5 log n); subsequent work improved the runtime to O(n^1.5) only in 1998. Our dynamic LIS algorithm leads to a solution for the Erdős–Szekeres partitioning problem with runtime Õ_ε(n^{1+ε}) for any constant ε > 0.
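For reference, the static versions of both quantities are computable exactly in O(n log n) by patience sorting; the dynamic difficulty is maintaining them under insertions, deletions, and substitutions. A sketch of the static baselines, using the standard identity that DTM equals n minus the length of a longest non-decreasing subsequence:

```python
from bisect import bisect_left, bisect_right

def lis_length(a):
    """Length of a longest strictly increasing subsequence, O(n log n)."""
    tails = []  # tails[i] = smallest tail of an increasing subsequence of length i+1
    for x in a:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def dtm(a):
    """Distance to monotonicity: fewest deletions leaving a non-decreasing array."""
    tails = []
    for x in a:
        i = bisect_right(tails, x)  # bisect_right lets equal elements extend the run
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(a) - len(tails)
```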


Dynamic Similarity Search on Integer Sketches

Similarity-preserving hashing is a core technique for fast similarity search: it randomly maps data points in a metric space to strings of discrete symbols (i.e., sketches) in the Hamming space. While traditional hashing techniques produce binary sketches, recent ones produce integer sketches that preserve various similarity measures. However, most similarity search methods are designed for binary sketches and are inefficient for integer sketches. Moreover, most methods are either inapplicable or inefficient for dynamic datasets, although modern real-world datasets are updated over time. We propose the dynamic filter trie (DyFT), a dynamic similarity search method for both binary and integer sketches. An extensive experimental analysis using large real-world datasets shows that DyFT performs superiorly with respect to scalability, time performance, and memory efficiency. For example, on a huge dataset of 216 million data points, DyFT performs a similarity search 6,000 times faster than a state-of-the-art method while using one-thirteenth of the memory.
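The query such methods answer is Hamming range search over sketches: report every stored sketch within a given distance of the query. A linear-scan sketch of that baseline, which works identically for binary and integer symbols (the naive method an index such as DyFT is designed to beat; function names are illustrative):

```python
def hamming(a, b):
    """Hamming distance between two equal-length sketches (binary or integer)."""
    return sum(x != y for x, y in zip(a, b))

def range_search(database, query, radius):
    """Naive O(n * length) scan: ids of all sketches within the Hamming radius."""
    return [i for i, s in enumerate(database) if hamming(s, query) <= radius]
```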
