Featured Research

Data Structures And Algorithms

Gerrymandering on graphs: Computational complexity and parameterized algorithms

Partitioning a region into districts to favor a particular candidate or party is commonly known as gerrymandering. In this paper, we investigate the gerrymandering problem in the graph-theoretic setting proposed by Cohen-Zemach et al. [AAMAS 2018]. Our contributions are two-fold: conceptual and computational. We first resolve the open question posed by Ito et al. [AAMAS 2019] about the computational complexity of the problem when the input graph is a path. Next, we propose a generalization of their model, where the input consists of a graph G on n vertices representing the set of voters, a set of m candidates C, a weight function w_v : C → Z^+ for each voter v ∈ V(G) representing the voter's preferences over the candidates, a distinguished candidate p ∈ C, and a positive integer k. The objective is to decide whether the vertex set can be partitioned into k pairwise disjoint connected sets (districts) such that p wins more districts than any other candidate. The problem is known to be NP-complete even if k=2, m=2, and G is either a complete bipartite graph (in fact K_{2,n}) or a complete graph. This means that in the search for FPT algorithms we need to focus either on the parameter n or on subclasses of forests. Circumventing these intractability results, we give deterministic and randomized algorithms for the problem on paths running in time 2.619^k (n+m)^{O(1)} and 2^k (n+m)^{O(1)}, respectively. Additionally, we prove that the problem on general graphs is solvable in time 2^n (n+m)^{O(1)}. Our algorithmic results use sophisticated technical tools, such as representative set families and Fast Fourier Transform based polynomial multiplication, and their (possibly first) application to problems arising in social choice theory and/or game theory may be of independent interest to the community.
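As a toy illustration of the path setting above, one can brute-force the k−1 cut points of a path and check whether p wins a plurality of districts. The data layout and the rule of discarding tied districts are our own assumptions for this sketch, not the paper's model (whose algorithms are far more efficient than this exponential check):

```python
from itertools import combinations

def p_wins_path(prefs, k, p):
    """Brute-force check of the path version: prefs[i] is a dict
    candidate -> weight for voter i on the path v0 - v1 - ... - v(n-1).
    Districts are contiguous segments, so a partition into k districts
    is a choice of k - 1 cut points."""
    n = len(prefs)
    for cuts in combinations(range(1, n), k - 1):
        bounds = (0,) + cuts + (n,)
        wins = {}
        for a, b in zip(bounds, bounds[1:]):
            score = {}
            for v in range(a, b):
                for c, w in prefs[v].items():
                    score[c] = score.get(c, 0) + w
            top = max(score.values())
            winners = [c for c, s in score.items() if s == top]
            if len(winners) == 1:          # assumption: tied districts count for nobody
                wins[winners[0]] = wins.get(winners[0], 0) + 1
        others = max((w for c, w in wins.items() if c != p), default=0)
        if wins.get(p, 0) > others:
            return True
    return False
```

For instance, with three voters preferring p, q, p and k=3 singleton districts, p wins 2-to-1; with k=1 the single district is tied and p does not win.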

Data Structures And Algorithms

Graph Drawing via Gradient Descent, (GD)^2

Readability criteria, such as distance or neighborhood preservation, are often used to optimize node-link representations of graphs to enable the comprehension of the underlying data. With few exceptions, graph drawing algorithms typically optimize one such criterion, usually at the expense of others. We propose a layout approach, Graph Drawing via Gradient Descent, (GD)^2, that can handle multiple readability criteria. (GD)^2 can optimize any criterion that can be described by a smooth function. If the criterion cannot be captured by a smooth function, a non-smooth function for the criterion is combined with another smooth function, or auto-differentiation tools are used for the optimization. Our approach is flexible and can be used to optimize several criteria that have already been considered earlier (e.g., obtaining ideal edge lengths, stress, neighborhood preservation) as well as other criteria which have not yet been explicitly optimized in such fashion (e.g., vertex resolution, angular resolution, aspect ratio). We provide quantitative and qualitative evidence of the effectiveness of (GD)^2 with experimental data and a functional prototype: this http URL.
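A minimal sketch of the core idea, minimizing the classical stress criterion (one of the smooth criteria mentioned above) by plain gradient descent with a hand-derived gradient. This is an illustrative toy of the gradient-descent-on-a-layout-objective approach, not the (GD)^2 implementation:

```python
import math
import random

def stress(pos, d):
    """Stress = sum over pairs of (||x_i - x_j|| - d_ij)^2."""
    s = 0.0
    for (i, j), dij in d.items():
        s += (math.dist(pos[i], pos[j]) - dij) ** 2
    return s

def gd_layout(d, n, steps=2000, lr=0.01, seed=0):
    """Minimise stress by plain gradient descent in 2D; d maps a pair
    (i, j) with i < j to its target distance d_ij."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(n)]
    for _ in range(steps):
        grad = [[0.0, 0.0] for _ in range(n)]
        for (i, j), dij in d.items():
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            dist = math.hypot(dx, dy) or 1e-9   # avoid division by zero
            g = 2.0 * (dist - dij) / dist       # d(stress)/d(dist) * chain rule
            grad[i][0] += g * dx; grad[i][1] += g * dy
            grad[j][0] -= g * dx; grad[j][1] -= g * dy
        for i in range(n):
            pos[i][0] -= lr * grad[i][0]
            pos[i][1] -= lr * grad[i][1]
    return pos
```

Running it on three points with pairwise target distance 1 recovers (up to rotation) an equilateral triangle with near-zero stress.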

Data Structures And Algorithms

Graph Spanners by Sketching in Dynamic Streams and the Simultaneous Communication Model

Graph sketching is a powerful technique introduced by the seminal work of Ahn, Guha and McGregor '12 on connectivity in dynamic graph streams. It has enjoyed considerable attention in the literature since then and has led to near-optimal dynamic streaming algorithms for many fundamental problems, such as connectivity, cut and spectral sparsifiers, and matchings. Interestingly, however, the sketching and dynamic streaming complexity of approximating the shortest path metric of a graph is still far from well understood. Besides a direct k-pass implementation of classical spanner constructions (recently improved to ⌊k/2⌋+1 passes by Fernandez, Woodruff and Yasuda '20), the state of the art amounts to an O(log k)-pass algorithm of Ahn, Guha and McGregor '12 and a 2-pass algorithm of Kapralov and Woodruff '14. In particular, no single-pass algorithm is known, and the optimal tradeoff between the number of passes, stretch, and space complexity is open. In this paper we introduce several new graph sketching techniques for approximating the shortest path metric of the input graph. We give the first single-pass sketching algorithm for constructing graph spanners: we show how to obtain an Õ(n^{2/3})-spanner using Õ(n) space, and in general an Õ(n^{(2/3)(1−α)})-spanner using Õ(n^{1+α}) space for every α ∈ [0,1], a tradeoff that we think may be close to optimal. We also give new spanner construction algorithms for any number of passes, simultaneously improving upon all prior work on this problem. Finally, we study the simultaneous communication model and propose the first protocols with low per-player information.
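For contrast with the streaming setting, the classical offline greedy spanner construction (Althöfer et al.) that multi-pass implementations emulate is simple: keep an edge only if its endpoints are still at distance greater than the stretch t in the spanner built so far. The sketch below, for unweighted graphs, is our own illustration and not the paper's sketching algorithm:

```python
from collections import deque

def bfs_dist(adj, s, goal, cap):
    """Distance from s to goal in adj, truncated: returns cap + 1 if
    goal is farther than cap (or unreachable)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        x = q.popleft()
        if x == goal:
            return dist[x]
        if dist[x] >= cap:
            continue
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return cap + 1

def greedy_spanner(n, edges, t):
    """Offline greedy t-spanner for an unweighted graph: keep edge
    (u, v) iff u, v are currently at distance > t.  The result has
    stretch t, and for t = 2k - 1 it has O(n^{1+1/k}) edges."""
    adj = [[] for _ in range(n)]
    spanner = []
    for u, v in edges:
        if bfs_dist(adj, u, v, t) > t:
            adj[u].append(v); adj[v].append(u)
            spanner.append((u, v))
    return spanner
```

On K4 with stretch 2, the first three edges form a star and every remaining edge is shortcut through the center, so it is dropped.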

Data Structures And Algorithms

Greedy Approaches to Online Stochastic Matching

Within the context of stochastic probing with commitment, we consider the online stochastic matching problem; that is, the one-sided online bipartite matching problem where edges adjacent to an online node must be probed to determine if they exist, based on edge probabilities that become known when an online vertex arrives. If a probed edge exists, it must be used in the matching (if possible). We consider the competitiveness of online algorithms in both the adversarial order model (AOM) and the random order model (ROM). More specifically, we consider a bipartite stochastic graph G = (U, V, E) where U is the set of offline vertices, V is the set of online vertices, and G has edge probabilities (p_e)_{e∈E} and edge weights (w_e)_{e∈E}. Additionally, G has probing constraints (C_v)_{v∈V}, where C_v indicates which sequences of edges adjacent to an online vertex v can be probed. We assume that U is known in advance, and that C_v, together with the edge probabilities and weights adjacent to an online vertex, are only revealed when the online vertex arrives. This model generalizes the various settings of the classical bipartite matching problem, and so our main contribution is in making progress towards understanding which classical results extend to the stochastic probing model.
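A simplified simulation of probe-and-commit may make the model concrete. In this sketch we ignore the probing constraints C_v, probe each arriving vertex's candidate edges greedily in non-increasing order of w_e · p_e, and read probe outcomes from a caller-supplied oracle; all names and the greedy rule are our own assumptions, not an algorithm from the paper:

```python
def greedy_probe_match(offline, arrivals, exists):
    """Toy greedy for online stochastic matching with commitment.
    arrivals is a list of (v, candidates) in arrival order, where each
    candidate is (u, w_e, p_e) for an offline vertex u.  exists(v, u)
    reveals the Bernoulli(p_e) outcome of probing edge (v, u); the
    first probed edge that exists must be taken (commitment)."""
    free = set(offline)
    matching, value = [], 0.0
    for v, candidates in arrivals:
        for u, w, p in sorted(candidates, key=lambda e: e[1] * e[2], reverse=True):
            if u in free and exists(v, u):
                free.remove(u)          # commit: u is matched for good
                matching.append((v, u))
                value += w
                break
    return matching, value
```

With a deterministic oracle one can trace the commitment effect: v1 probes its highest w·p edge first, and only falls back to the next edge if the probe fails.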

Data Structures And Algorithms

Grundy Distinguishes Treewidth from Pathwidth

Structural graph parameters, such as treewidth, pathwidth, and clique-width, are a central topic of study in parameterized complexity. A main aim of research in this area is to understand the "price of generality" of these widths: as we transition from more restrictive to more general notions, which are the problems that see their complexity status deteriorate from fixed-parameter tractable to intractable? This type of question is by now very well-studied, but, somewhat strikingly, the algorithmic frontier between the two (arguably) most central width notions, treewidth and pathwidth, is still not understood: currently, no natural graph problem is known to be W-hard for one but FPT for the other. Indeed, a surprising development of the last few years has been the observation that for many of the most paradigmatic problems, their complexities for the two parameters actually coincide exactly, despite the fact that treewidth is a much more general parameter. It would thus appear that the extra generality of treewidth over pathwidth often comes "for free". Our main contribution in this paper is to uncover the first natural example where this generality comes with a high price. We consider Grundy Coloring, a variation of coloring where one seeks to calculate the worst possible coloring that could be assigned to a graph by a greedy First-Fit algorithm. We show that this well-studied problem is FPT parameterized by pathwidth; however, it becomes significantly harder (W[1]-hard) when parameterized by treewidth. Furthermore, we show that Grundy Coloring makes a second complexity jump for more general widths, as it becomes para-NP-hard for clique-width. Hence, Grundy Coloring nicely captures the complexity trade-offs between the three most well-studied parameters. Completing the picture, we show that Grundy Coloring is FPT parameterized by modular-width.
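First-Fit and the Grundy number are easy to state in code: the Grundy number is the worst number of colors First-Fit can be forced to use over all vertex orders. The brute force below (our own toy, feasible only for tiny graphs; the paper's algorithms are parameterized, not exhaustive) tries all n! orders:

```python
from itertools import permutations

def first_fit(adj, order):
    """Colour vertices greedily in the given order: each vertex gets
    the smallest colour (1, 2, ...) not used by an already-coloured
    neighbour."""
    colour = {}
    for v in order:
        used = {colour[u] for u in adj[v] if u in colour}
        c = 1
        while c in used:
            c += 1
        colour[v] = c
    return colour

def grundy_number(adj):
    """Grundy number = worst case of First-Fit over all vertex orders."""
    verts = list(adj)
    return max(max(first_fit(adj, p).values()) for p in permutations(verts))
```

On the path on four vertices, the order d, a, b, c forces colors 1, 1, 2, 3, so the Grundy number is 3 even though the chromatic number is 2.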

Data Structures And Algorithms

Hardness of Metric Dimension in Graphs of Constant Treewidth

The Metric Dimension problem asks for a minimum-sized resolving set in a given (unweighted, undirected) graph G. Here, a set S ⊆ V(G) is resolving if no two distinct vertices of G have the same distance vector to S. The complexity of Metric Dimension in graphs of bounded treewidth had remained elusive for several years. Recently, Bonnet and Purohit [IPEC 2019] showed that the problem is W[1]-hard under treewidth parameterization. In this work, we strengthen their lower bound to show that Metric Dimension is NP-hard even in graphs of treewidth 24.
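The definition of a resolving set translates directly into a brute-force search, shown below for intuition only (exponential time; the paper is about hardness, not algorithms). It assumes a connected graph given as an adjacency dict:

```python
from itertools import combinations
from collections import deque

def bfs_dists(adj, s):
    """All shortest-path distances from s by BFS."""
    d = {s: 0}
    q = deque([s])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in d:
                d[y] = d[x] + 1
                q.append(y)
    return d

def metric_dimension(adj):
    """Smallest resolving set by brute force: S resolves G iff the
    distance vectors (d(v, s))_{s in S} are pairwise distinct over all
    vertices v."""
    verts = list(adj)
    dist = {v: bfs_dists(adj, v) for v in verts}
    for k in range(1, len(verts) + 1):
        for S in combinations(verts, k):
            vectors = {tuple(dist[s][v] for s in S) for v in verts}
            if len(vectors) == len(verts):
                return set(S)
    return set(verts)
```

A path is resolved by a single endpoint (distances 0, 1, 2, ... are all distinct), while a cycle needs two landmarks.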

Data Structures And Algorithms

Hierarchical Clustering via Sketches and Hierarchical Correlation Clustering

Recently, Hierarchical Clustering (HC) has been considered through the lens of optimization. In particular, two maximization objectives have been defined. Moseley and Wang defined the Revenue objective to handle similarity information given by a weighted graph on the data points (w.l.o.g., [0,1] weights), while Cohen-Addad et al. defined the Dissimilarity objective to handle dissimilarity information. In this paper, we prove structural lemmas for both objectives that allow us to convert any HC tree to a tree with a constant number of internal nodes while incurring an arbitrarily small loss in each objective. Although the best-known approximations are 0.585 and 0.667 respectively, using our lemmas we obtain approximations arbitrarily close to 1 if not all weights are small (i.e., there exist constants ε, δ such that the fraction of weights smaller than δ is at most 1 − ε); such instances encompass many metric-based similarity instances, thereby improving upon prior work. Finally, we introduce Hierarchical Correlation Clustering (HCC) to handle instances that contain similarity and dissimilarity information simultaneously. For HCC, we provide an approximation of 0.4767, and for complementary similarity/dissimilarity weights (analogous to +/− correlation clustering), we again present nearly-optimal approximations.
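For concreteness, the Revenue objective of Moseley and Wang can be evaluated on a given HC tree as Σ over leaf pairs {i, j} of w_ij · (n − |leaves under LCA(i, j)|). The nested-pair tree encoding below is our own convention for this sketch:

```python
def revenue(tree, w, n):
    """Moseley-Wang revenue of a binary HC tree given as nested pairs,
    e.g. (('a', 'b'), 'c').  w maps an (ordered either way) leaf pair
    to its similarity weight; n is the total number of leaves."""
    def walk(t):
        # returns (leaves under t, revenue contributed inside t)
        if not isinstance(t, tuple):
            return [t], 0.0
        left_leaves, left_rev = walk(t[0])
        right_leaves, right_rev = walk(t[1])
        merged = left_leaves + right_leaves
        rev = left_rev + right_rev
        # pairs split between the two children have this node as LCA
        for i in left_leaves:
            for j in right_leaves:
                rev += w.get((i, j), w.get((j, i), 0.0)) * (n - len(merged))
        return merged, rev
    return walk(tree)[1]
```

On the tree (('a','b'),'c') with unit weights, only the pair {a, b} earns revenue (3 − 2 = 1); the pairs meeting at the root earn 3 − 3 = 0.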

Data Structures And Algorithms

High-Performance Parallel Graph Coloring with Strong Guarantees on Work, Depth, and Quality

We develop the first parallel graph coloring heuristics with strong theoretical guarantees on work, depth, and coloring quality. The key idea is to design a relaxation of the vertex degeneracy order, a well-known graph theory concept, and to color vertices in the order dictated by this relaxation. This introduces a tunable amount of parallelism into the degeneracy ordering that is otherwise hard to parallelize. This simple idea enables significant benefits in several key aspects of graph coloring. For example, one of our algorithms ensures polylogarithmic depth and a bound on the number of used colors that is superior to all other parallelizable schemes, while maintaining work-efficiency. In addition to provable guarantees, the developed algorithms have competitive run-times for several real-world graphs, while almost always providing superior coloring quality. Our degeneracy ordering relaxation is of separate interest for algorithms outside the context of coloring.
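The sequential baseline being relaxed here is the smallest-last (degeneracy) ordering: repeatedly remove a minimum-degree vertex, then color in the reverse removal order, which guarantees at most degeneracy + 1 colors. A sketch of that sequential version (our own code, not the parallel algorithms of the paper):

```python
def degeneracy_order(adj):
    """Smallest-last order: repeatedly remove a minimum-degree vertex;
    returns the reverse removal order.  When colouring in this order,
    each vertex has at most `degeneracy` already-coloured neighbours."""
    deg = {v: len(adj[v]) for v in adj}
    order = []
    while deg:
        v = min(deg, key=deg.get)
        order.append(v)
        del deg[v]
        for u in adj[v]:
            if u in deg:
                deg[u] -= 1
    return order[::-1]

def colour_by_degeneracy(adj):
    """Greedy colouring along the degeneracy order: uses at most
    degeneracy + 1 colours (0-indexed)."""
    colour = {}
    for v in degeneracy_order(adj):
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour
```

On the 5-cycle (degeneracy 2, chromatic number 3) the bound is tight: exactly 3 colors are used.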

Data Structures And Algorithms

Hypergraph k-cut for fixed k in deterministic polynomial time

We consider the Hypergraph-k-cut problem. The input consists of a hypergraph G = (V, E) with non-negative hyperedge costs c : E → R_+ and a positive integer k. The objective is to find a least-cost subset F ⊆ E such that the number of connected components in G − F is at least k. An alternative formulation of the objective is to find a partition of V into k non-empty sets V_1, V_2, …, V_k so as to minimize the cost of the hyperedges that cross the partition. Graph-k-cut, the special case of Hypergraph-k-cut obtained by restricting to graph inputs, has received considerable attention. Several different approaches lead to a polynomial-time algorithm for Graph-k-cut when k is fixed, starting with the work of Goldschmidt and Hochbaum (1988). In contrast, it is only recently that a randomized polynomial-time algorithm for Hypergraph-k-cut was developed (Chandrasekaran, Xu, Yu, 2018) via a subtle generalization of Karger's random contraction approach for graphs. In this work, we develop the first deterministic polynomial-time algorithm for Hypergraph-k-cut for all fixed k. We describe two algorithms, both of which are based on a divide-and-conquer approach. The first algorithm is simpler and runs in n^{O(k^2)} time, while the second one runs in n^{O(k)} time. Our proof relies on new structural results that allow for efficient recovery of the parts of an optimum k-partition by solving minimum (S,T)-terminal cuts. Our techniques give new insights even for Graph-k-cut.
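The objective itself is easy to state by exhaustive search, which is exactly what the paper's algorithms avoid: try every assignment of vertices to k non-empty blocks and charge each hyperedge that is not contained in a single block. A brute-force sketch of the partition formulation (our own code, exponential in |V|):

```python
from itertools import product

def hypergraph_k_cut(vertices, hyperedges, costs, k):
    """Brute-force Hypergraph-k-cut: minimum total cost of hyperedges
    crossing a partition of the vertices into k non-empty blocks."""
    best = float('inf')
    vs = list(vertices)
    for labels in product(range(k), repeat=len(vs)):
        if len(set(labels)) != k:          # require k non-empty blocks
            continue
        lab = dict(zip(vs, labels))
        cost = sum(c for e, c in zip(hyperedges, costs)
                   if len({lab[v] for v in e}) > 1)   # hyperedge crosses
        best = min(best, cost)
    return best
```

For a triangle seen as a hypergraph, any 2-cut isolates one vertex and pays for two edges; with a genuine hyperedge {0,1,2} plus edge {2,3}, cutting off vertex 3 pays only for the small edge.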

Data Structures And Algorithms

Improved 3-pass Algorithm for Counting 4-cycles in Arbitrary Order Streaming

The problem of counting small subgraphs, and specifically cycles, in the streaming model has received a lot of attention over the past few years. In this paper, we consider arbitrary-order insertion-only streams, improving over the state-of-the-art result on counting 4-cycles. Our algorithm computes a (1+ε)-approximation by taking three passes over the stream and using space O(m log n / (ε^2 T^{1/3})), where m is the number of edges in the graph and T is the number of 4-cycles.
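As an offline reference point, the quantity T being approximated can be counted exactly via common neighborhoods: a 4-cycle is determined by a pair of opposite vertices together with two of their common neighbors, and each cycle has two opposite pairs. This is our own illustration, not the streaming algorithm:

```python
from itertools import combinations

def count_4cycles(adj):
    """Exact 4-cycle count:
    C4 = (1/2) * sum over vertex pairs {u, v} of C(|N(u) & N(v)|, 2),
    since every 4-cycle is counted once for each of its two pairs of
    opposite vertices."""
    total = 0
    for u, v in combinations(adj, 2):
        common = len(set(adj[u]) & set(adj[v]))
        total += common * (common - 1) // 2
    return total // 2
```

The 4-cycle itself has count 1 (two opposite pairs, each with two common neighbors), and K4 contains 3 distinct 4-cycles.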
