Featured Researches

Data Structures And Algorithms

2-blocks in strongly biconnected directed graphs

A directed graph G = (V, E) is called strongly biconnected if G is strongly connected and the underlying graph of G is biconnected. A strongly biconnected component of a strongly connected graph G = (V, E) is a maximal vertex subset L ⊆ V such that the induced subgraph on L is strongly biconnected. Let G = (V, E) be a strongly biconnected directed graph. A 2-edge-biconnected block in G is a maximal vertex subset U ⊆ V such that for any two distinct vertices v, w ∈ U and for each edge b ∈ E, the vertices v, w are in the same strongly biconnected component of G∖{b}. A 2-strong-biconnected block in G is a maximal vertex subset U ⊆ V of size at least 2 such that for every pair of distinct vertices v, w ∈ U and for every vertex z ∈ V∖{v, w}, the vertices v and w are in the same strongly biconnected component of G∖{z}. In this paper we study 2-edge-biconnected blocks and 2-strong-biconnected blocks.
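As a concrete aid to these definitions, testing whether a graph is strongly biconnected decomposes into two classical checks: strong connectivity of G, and biconnectivity of its underlying undirected graph. A minimal Python sketch of that test, assuming vertices 0..n−1 and a simple edge list (an illustration only, not the paper's algorithm, which computes the blocks themselves):

```python
from collections import defaultdict

def is_strongly_connected(n, edges):
    """G is strongly connected iff vertex 0 reaches every vertex in G
    and in the reverse graph (two linear-time graph searches)."""
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)
    def reaches_all(g):
        seen, stack = {0}, [0]
        while stack:
            u = stack.pop()
            for w in g[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == n
    return reaches_all(adj) and reaches_all(radj)

def is_biconnected(n, edges):
    """Underlying undirected graph is connected and has no articulation
    point (recursive low-link DFS; fine for a small sketch)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    disc, low, timer, art = {}, {}, [0], [False]
    def dfs(u, parent):
        disc[u] = low[u] = timer[0]; timer[0] += 1
        children = 0
        for w in adj[u]:
            if w == parent:
                continue
            if w in disc:                      # back edge
                low[u] = min(low[u], disc[w])
            else:                              # tree edge
                children += 1
                dfs(w, u)
                low[u] = min(low[u], low[w])
                if parent is not None and low[w] >= disc[u]:
                    art[0] = True              # u separates w's subtree
        if parent is None and children > 1:
            art[0] = True                      # root with >= 2 subtrees
    dfs(0, None)
    return len(disc) == n and not art[0]

def is_strongly_biconnected(n, edges):
    return is_strongly_connected(n, edges) and is_biconnected(n, edges)
```

For example, two directed triangles sharing a single vertex form a strongly connected graph whose underlying graph has an articulation point at the shared vertex, so the graph is not strongly biconnected.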

Data Structures And Algorithms

4 vs 7 sparse undirected unweighted Diameter is SETH-hard at time n^{4/3}

We show, assuming the Strong Exponential Time Hypothesis, that for every ε > 0, approximating undirected unweighted Diameter on n-vertex n^{1+o(1)}-edge graphs within ratio 7/4 − ε requires m^{4/3−o(1)} time. This is the first result that conditionally rules out a near-linear time 5/3-approximation for undirected Diameter.

Data Structures And Algorithms

A 2-Approximation Algorithm for Flexible Graph Connectivity

We present a 2-approximation algorithm for the Flexible Graph Connectivity problem [AHM20] via a reduction to the minimum cost r-out 2-arborescence problem.

Data Structures And Algorithms

A 2^{O(k)} n algorithm for k-cycle in minor-closed graph families

Let C be a proper minor-closed family of graphs. We present a randomized algorithm that, given a graph G ∈ C with n vertices, finds a simple cycle of size k in G (if one exists) in 2^{O(k)} n time. The algorithm applies to both directed and undirected graphs. In previous linear-time algorithms for this problem, the runtime dependence on k is super-exponential. The algorithm can be derandomized, yielding a 2^{O(k)} n log n time algorithm.
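The abstract does not spell out the technique, but for intuition the classic color-coding method of Alon, Yuster and Zwick already finds a k-cycle in 2^{O(k)} · poly(n) time in any graph, and is the natural point of comparison. A hedged Python sketch of that textbook technique (function names are mine; this is not the paper's linear-time algorithm for minor-closed families):

```python
import random
from collections import defaultdict

def _colorful_cycle_through(s, adj, color, k):
    """Is there a k-vertex cycle through s whose vertices received k
    distinct colors under the current random coloring?"""
    n = len(color)
    # dp[v] = set of color-sets of colorful paths from s to v.
    dp = [set() for _ in range(n)]
    dp[s] = {1 << color[s]}
    for _ in range(k - 1):                    # grow paths one vertex at a time
        nxt = [set() for _ in range(n)]
        for v in range(n):
            for S in dp[v]:
                for w in adj[v]:
                    if not (S >> color[w]) & 1:   # w's color unused so far
                        nxt[w].add(S | (1 << color[w]))
        dp = nxt
    full = (1 << k) - 1
    # A colorful k-vertex path from s to v plus edge v -> s closes a k-cycle.
    return any(full in dp[v] and s in adj[v] for v in range(n))

def has_k_cycle(n, edges, k, trials=500):
    """Color-coding: each trial k-colors the vertices uniformly at random;
    a fixed k-cycle becomes colorful with probability k!/k^k >= e^{-k},
    so O(e^k log n) trials succeed with high probability."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
    for _ in range(trials):
        color = [random.randrange(k) for _ in range(n)]
        if any(_colorful_cycle_through(s, adj, color, k) for s in range(n)):
            return True
    return False
```

The sketch is written for directed graphs; for an undirected graph, add each edge in both directions. It never reports a false positive, since a full color-set path plus a closing edge is a genuine simple k-cycle.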

Data Structures And Algorithms

A 2^{n/2}-Time Algorithm for √n-SVP and √n-Hermite SVP, and an Improved Time-Approximation Tradeoff for (H)SVP

We show a 2^{n/2+o(n)}-time algorithm that finds a (non-zero) vector in a lattice L ⊂ R^n with norm at most Õ(√n) · min{λ_1(L), det(L)^{1/n}}, where λ_1(L) is the length of a shortest non-zero lattice vector and det(L) is the lattice determinant. Minkowski showed that λ_1(L) ≤ √n · det(L)^{1/n} and that there exist lattices with λ_1(L) ≥ Ω(√n) · det(L)^{1/n}, so that our algorithm finds vectors that are as short as possible relative to the determinant (up to a polylogarithmic factor). The main technical contribution behind this result is a new analysis of (a simpler variant of) an algorithm from arXiv:1412.7994, which was previously only known to solve less useful problems. To achieve this, we rely crucially on the ``reverse Minkowski theorem'' (conjectured by Dadush in arXiv:1606.06913 and proven in arXiv:1611.05979), which can be thought of as a partial converse to the fact that λ_1(L) ≤ √n · det(L)^{1/n}. Previously, the fastest known algorithm for finding such a vector was the 2^{0.802n+o(n)}-time algorithm due to [Liu, Wang, Xu, and Zheng, 2011], which actually found a non-zero lattice vector with length O(1) · λ_1(L). Though we do not show how to find lattice vectors with this length in time 2^{n/2+o(n)}, we do show that our algorithm suffices for the most important application of such algorithms: basis reduction. In particular, we show a modified version of Gama and Nguyen's slide-reduction algorithm [Gama and Nguyen, STOC 2008], which can be combined with the algorithm above to improve the time-length tradeoff for shortest-vector algorithms in nearly all regimes, including the regimes relevant to cryptography.

Data Structures And Algorithms

A 4/3-Approximation Algorithm for the Minimum 2-Edge Connected Multisubgraph Problem in the Half-Integral Case

Given a connected undirected graph Ḡ on n vertices, and non-negative edge costs c, the 2ECM problem is that of finding a 2-edge connected spanning multisubgraph of Ḡ of minimum cost. The natural linear program (LP) for 2ECM, which coincides with the subtour LP for the Traveling Salesman Problem on the metric closure of Ḡ, gives a lower bound on the optimal cost. For instances where this LP is optimized by a half-integral solution x, Carr and Ravi (1998) showed that the integrality gap is at most 4/3: they show that the vector (4/3)x dominates a convex combination of incidence vectors of 2-edge connected spanning multisubgraphs of Ḡ. We present a simpler proof of the result due to Carr and Ravi by applying an extension of Lovász's splitting-off theorem. Our proof naturally leads to a 4/3-approximation algorithm for half-integral instances. Given a half-integral solution x to the LP for 2ECM, we give an O(n^2)-time algorithm to obtain a 2-edge connected spanning multisubgraph of Ḡ whose cost is at most (4/3) c^T x.

Data Structures And Algorithms

A (Slightly) Improved Approximation Algorithm for Metric TSP

For some ε > 10^{−36} we give a (3/2 − ε)-approximation algorithm for metric TSP.

Data Structures And Algorithms

A Block-Based Triangle Counting Algorithm on Heterogeneous Environments

Triangle counting is a fundamental building block in graph algorithms. In this paper, we propose a block-based triangle counting algorithm to reduce data movement during both sequential and parallel execution. Our block-based formulation makes the algorithm naturally suitable for heterogeneous architectures. The problem of partitioning the adjacency matrix of a graph is well-studied. Our task decomposition goes one step further: it partitions the set of triangles in the graph. By streaming these small tasks to compute resources, we can solve problems that do not fit on a device. We demonstrate the effectiveness of our approach by providing an implementation on a compute node with multiple sockets, cores and GPUs. The current state-of-the-art in triangle enumeration processes the Friendster graph in 2.1 seconds, not including data copy time between CPU and GPU. Using that metric, our approach is 20 percent faster. When copy times are included, our algorithm takes 3.2 seconds. This is 5.6 times faster than the fastest published CPU-only time.
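For reference, the sequential baseline that block-based and GPU variants build on is set intersection over a degree-ordered orientation of the edges. A minimal Python sketch of that baseline (my simplification for illustration, not the paper's block-partitioned heterogeneous implementation):

```python
from collections import defaultdict

def count_triangles(edges):
    """Count triangles by orienting each edge from lower to higher
    degree-rank and intersecting out-neighbor sets; every triangle is
    then counted exactly once, at its lowest-ranked vertex."""
    adj = defaultdict(set)
    for u, v in edges:
        if u != v:                       # ignore self-loops
            adj[u].add(v)
            adj[v].add(u)
    # Rank vertices by (degree, id); low-degree vertices come first.
    order = sorted(adj, key=lambda v: (len(adj[v]), v))
    rank = {v: i for i, v in enumerate(order)}
    # Keep only edges oriented toward higher rank.
    out = {v: {w for w in adj[v] if rank[w] > rank[v]} for v in adj}
    return sum(len(out[u] & out[v]) for u in out for v in out[u])
```

The degree ordering keeps every out-set small on skewed real-world graphs, which is why this orientation trick is the standard starting point for high-performance triangle counters.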

Data Structures And Algorithms

A Case for Partitioned Bloom Filters

In a partitioned Bloom filter the m-bit vector is split into k disjoint parts of m/k bits each, one per hash function. Contrary to hardware designs, where they prevail, software implementations mostly adopt standard Bloom filters, considering partitioned filters slightly worse due to their slightly larger false positive rate (FPR). In this paper, by performing an in-depth analysis, we first show that the FPR advantage of standard Bloom filters is smaller than thought; more importantly, by studying the per-element FPR, we show that standard Bloom filters have weak spots in the domain: elements which will be tested as false positives much more frequently than expected. This is relevant in scenarios where an element is tested against many filters, e.g., in packet forwarding. Moreover, standard Bloom filters are prone to exhibit extremely weak spots if naive double hashing is used, something that occurs in several, even mainstream, libraries. Partitioned Bloom filters exhibit a uniform distribution of the FPR over the domain and are robust to the naive use of double hashing, having no weak spots. Finally, by surveying several usages other than testing set membership, we point out the many advantages of having disjoint parts: they can be individually sampled, extracted, added, or retired, leading to superior designs for, e.g., SIMD usage, size reduction, tests of set disjointness, or duplicate detection in streams. Partitioned Bloom filters are better, and should replace the standard form, both in general-purpose libraries and as the base for novel designs.
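The partitioned layout is easy to state in code: part i is a private m/k-bit slice touched only by hash function i. A minimal Python sketch (deriving the k positions by salting blake2b with the part index is my assumption; any family of independent hash functions works):

```python
import hashlib

class PartitionedBloom:
    """m bits split into k disjoint parts of m/k bits; hash function i
    only ever sets and tests bits inside part i."""

    def __init__(self, m, k):
        assert m % k == 0, "m must be divisible by k"
        self.k, self.width = k, m // k
        self.parts = [0] * k              # one integer bitmask per part

    def _index(self, item, i):
        # k independent positions derived by salting one hash with the
        # part number -- an assumption; any independent hash family works.
        h = hashlib.blake2b(f"{i}:{item}".encode()).digest()
        return int.from_bytes(h[:8], "big") % self.width

    def add(self, item):
        for i in range(self.k):
            self.parts[i] |= 1 << self._index(item, i)

    def __contains__(self, item):
        return all((self.parts[i] >> self._index(item, i)) & 1
                   for i in range(self.k))
```

Because each part is a self-contained bitmask, the per-part operations the abstract mentions (sampling, extracting, adding, or retiring a part) are just operations on one entry of `parts`.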

Data Structures And Algorithms

A Data-Structure for Approximate Longest Common Subsequence of a Set of Strings

Given a set I of k strings, their longest common subsequence (LCS) is the string of maximum length that is a subsequence of every string in I. A data-structure for this problem preprocesses I so that the LCS of a set of query strings Q with the strings of I can be computed faster. Since the problem is NP-hard for arbitrary k, we allow an error: some characters may be replaced by other characters. We define the approximation version of the problem with an extra input m, the length of the regular expression (regex) that describes the input; the approximation factor is the logarithm of the number of possibilities in the regex returned by the algorithm, divided by the logarithm of the number of possibilities in the regex with the fewest possibilities. We then use a tree data-structure to achieve sublinear-time LCS queries. We also explain how the idea can be extended to the longest increasing subsequence (LIS) problem.
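For contrast with the sublinear-time queries the data structure targets, the exact LCS of just two strings is the classic quadratic dynamic program. A self-contained Python sketch of that textbook baseline (not the paper's data structure):

```python
def lcs(a, b):
    """Classic O(|a|*|b|) dynamic program for the exact LCS of two
    strings, plus a backtrack to recover one optimal subsequence."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1      # extend a match
            else:
                dp[i + 1][j + 1] = max(dp[i + 1][j], dp[i][j + 1])
    # Walk back from dp[n][m] to spell out one optimal subsequence.
    out, i, j = [], n, m
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```

Running this pairwise over a set of k strings gives no guarantee on the joint LCS, which is exactly why the set version is NP-hard for arbitrary k and needs the approximation above.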

