Featured Research

Data Structures And Algorithms

Approximating Two-Stage Stochastic Supplier Problems

The main focus of this paper is radius-based (supplier) clustering in the two-stage stochastic setting with recourse, where the inherent stochasticity of the model comes in the form of a budget constraint. We also explore a number of variants where additional constraints are imposed on the first-stage decisions, specifically matroid and multi-knapsack constraints. Our eventual goal is to provide results for supplier problems in the most general distributional setting, where there is only black-box access to the underlying distribution. To that end, we follow a two-step approach. First, we develop algorithms for a restricted version of each problem, in which all possible scenarios are explicitly provided; second, we employ a novel scenario-discarding variant of the standard Sample Average Approximation (SAA) method, in which we crucially exploit properties of the restricted-case algorithms. We finally note that the scenario-discarding modification to the SAA method is necessary in order to optimize over the radius.
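To make the second step concrete, here is a minimal SAA sketch in Python. The toy model (a single facility on a line, scenarios are sampled client positions) and all names are hypothetical, and the paper's scenario-discarding refinement is omitted; this only illustrates replacing the true expectation with an empirical average:

```python
import random

def saa_choose(decisions, sample_scenario, cost, num_samples, rng):
    # Sample Average Approximation: replace the true expectation with
    # an empirical average over sampled scenarios, then optimize.
    scenarios = [sample_scenario(rng) for _ in range(num_samples)]
    def empirical_cost(x):
        return sum(cost(x, s) for s in scenarios) / len(scenarios)
    return min(decisions, key=empirical_cost)

# Hypothetical toy model: a facility location x on a line, a scenario
# is one client position; cost = opening cost + distance to client.
rng = random.Random(0)
best = saa_choose(
    decisions=[0, 5, 10],
    sample_scenario=lambda r: r.gauss(5, 1),   # clients cluster near 5
    cost=lambda x, s: 0.1 * x + abs(x - s),
    num_samples=2000,
    rng=rng,
)
print(best)  # 5: the decision nearest the client cluster
```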

Data Structures And Algorithms

Approximating pathwidth for graphs of small treewidth

We describe a polynomial-time algorithm which, given a graph G with treewidth t, approximates the pathwidth of G to within a ratio of O(t√(log t)). This is the first algorithm to achieve an f(t)-approximation for some function f. Our approach builds on the following key insight: every graph with large pathwidth has large treewidth or contains a subdivision of a large complete binary tree. Specifically, we show that every graph with pathwidth at least th+2 has treewidth at least t or contains a subdivision of a complete binary tree of height h+1. The bound th+2 is best possible up to a multiplicative constant. This result was motivated by, and implies (with c=2), the following conjecture of Kawarabayashi and Rossman (SODA'18): there exists a universal constant c such that every graph with pathwidth Ω(k^c) has treewidth at least k or contains a subdivision of a complete binary tree of height k. Our main technical algorithm takes as input a graph G and some (not necessarily optimal) tree decomposition of G of width t′, and it computes in polynomial time an integer h, a certificate that G has pathwidth at least h, and a path decomposition of G of width at most (t′+1)h+1. The certificate is closely related to (and implies) the existence of a subdivision of a complete binary tree of height h. The approximation algorithm for pathwidth is then obtained by combining this algorithm with the approximation algorithm of Feige, Hajiaghayi, and Lee (STOC'05) for treewidth.
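For intuition on the quantities involved, the sketch below computes exact pathwidth of a tiny graph via the standard vertex-separation characterization (a textbook equivalence, not the paper's algorithm) and checks it on a complete binary tree of height 3, which has treewidth 1 but pathwidth 2:

```python
def pathwidth(adj):
    # Exact pathwidth via the vertex separation number, which equals
    # pathwidth; subset DP, feasible only for roughly 20 vertices or fewer.
    n = len(adj)
    nbr = [0] * n
    for v in range(n):
        for u in adj[v]:
            nbr[v] |= 1 << u
    full = (1 << n) - 1
    best = [0] * (1 << n)
    for S in range(1, 1 << n):
        # b = placed vertices that still have an unplaced neighbor
        b = sum(1 for v in range(n) if (S >> v) & 1 and nbr[v] & ~S & full)
        best[S] = max(b, min(best[S & ~(1 << v)]
                             for v in range(n) if (S >> v) & 1))
    return best[full]

# Complete binary tree of height 3: node i has children 2i+1 and 2i+2.
h = 3
n = 2 ** (h + 1) - 1
adj = {v: [] for v in range(n)}
for v in range(n):
    for c in (2 * v + 1, 2 * v + 2):
        if c < n:
            adj[v] += [c]
            adj[c] += [v]
print(pathwidth(adj))  # 2, while every tree has treewidth 1
```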

Data Structures And Algorithms

Approximating the Log-Partition Function

Variational approximations, such as mean-field (MF) and tree-reweighted (TRW), provide a computationally efficient approximation of the log-partition function for a generic graphical model. TRW provably provides an upper bound, but the approximation ratio is generally not quantified. As the primary contribution of this work, we provide an approach to quantify the approximation ratio through a property of the underlying graph structure. Specifically, we argue that (a variant of) TRW produces an estimate that is within factor 1/√κ(G) of the true log-partition function for any discrete pairwise graphical model over a graph G, where κ(G) ∈ (0,1] captures how far G is from tree structure, with κ(G)=1 for trees and 2/N for the complete graph over N vertices. As a consequence, the approximation ratio is 1 for trees, √((d+1)/2) for any graph with maximum average degree d, and at most 1+1/(2β) for graphs with girth (length of the shortest cycle) at least β log N. In general, κ(G) is the solution of a max-min problem associated with G that can be evaluated in polynomial time for any graph. Using samples from the uniform distribution over the spanning trees of G, we provide a near linear-time variant that achieves an approximation ratio equal to the inverse of the square root of the minimal (across edges) effective resistance of the graph. We connect our results to the graph-partition-based approximation method and thus provide a unified perspective. Keywords: variational inference, log-partition function, spanning tree polytope, minimum effective resistance, min-max spanning tree, local inference
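The two extreme values of κ(G) can be sanity-checked numerically. This small sketch (reading the quoted factor as 1/√κ(G)) confirms that the complete-graph case agrees with the maximum-average-degree bound:

```python
import math

def trw_ratio_bound(kappa):
    # Approximation ratio 1/sqrt(kappa(G)) as stated in the abstract.
    return 1.0 / math.sqrt(kappa)

# Trees: kappa = 1, so the estimate is tight (ratio 1).
assert trw_ratio_bound(1.0) == 1.0

# Complete graph K_N: kappa = 2/N gives ratio sqrt(N/2), which matches
# the max-average-degree bound sqrt((d+1)/2) with d = N - 1.
N = 8
assert math.isclose(trw_ratio_bound(2 / N), math.sqrt(((N - 1) + 1) / 2))
print(trw_ratio_bound(2 / N))  # sqrt(4) = 2.0
```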

Data Structures And Algorithms

Approximation Algorithms for Generalized Multidimensional Knapsack

We study a generalization of the knapsack problem with geometric and vector constraints. The input is a set of rectangular items, each with an associated profit and d nonnegative weights (a d-dimensional vector), and a square knapsack. The goal is to find a non-overlapping axis-parallel packing of a subset of items into the given knapsack such that the vector constraints are not violated, i.e., the sum of weights of all the packed items in any of the d dimensions does not exceed one. We consider two variants of the problem: (i) the items are not allowed to be rotated, and (ii) items can be rotated by 90 degrees. We give a (2+ϵ)-approximation algorithm for both versions of the problem. In the process, we also study a variant of the maximum generalized assignment problem (Max-GAP), called Vector-Max-GAP, and design a PTAS for it.
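The two constraint families are easy to state in code. This hypothetical checker (illustration only, not part of the paper's algorithm) verifies both the geometric conditions (inside the square knapsack, no overlap) and the d-dimensional vector constraints for a candidate packing:

```python
def is_feasible(placed, side, d):
    """Check a candidate packing: (1) items lie inside the side x side
    knapsack without overlapping, and (2) the total weight in each of
    the d dimensions is at most one."""
    for i, (x, y, w, h, _) in enumerate(placed):
        if x < 0 or y < 0 or x + w > side or y + h > side:
            return False
        for (x2, y2, w2, h2, _) in placed[i + 1:]:
            # Axis-parallel rectangles overlap unless separated on an axis.
            if not (x + w <= x2 or x2 + w2 <= x or
                    y + h <= y2 or y2 + h2 <= y):
                return False
    for k in range(d):
        if sum(it[4][k] for it in placed) > 1:
            return False
    return True

# Two unit squares side by side in a 2x2 knapsack, d = 2 weight dimensions.
items = [(0, 0, 1, 1, (0.5, 0.3)), (1, 0, 1, 1, (0.4, 0.6))]
print(is_feasible(items, 2, 2))  # True: no overlap, weight sums (0.9, 0.9)
```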

Data Structures And Algorithms

Approximation Algorithms for The Generalized Incremental Knapsack Problem

We introduce and study a discrete multi-period extension of the classical knapsack problem, dubbed generalized incremental knapsack. In this setting, we are given a set of n items, each associated with a non-negative weight, and T time periods with non-decreasing capacities W_1 ≤ ⋯ ≤ W_T. When item i is inserted at time t, we gain a profit of p_{it}; however, this item remains in the knapsack for all subsequent periods. The goal is to decide if and when to insert each item, subject to the time-dependent capacity constraints, with the objective of maximizing our total profit. Interestingly, this setting subsumes as special cases a number of recently-studied incremental knapsack problems, all known to be strongly NP-hard. Our first contribution comes in the form of a polynomial-time (1/2 − ϵ)-approximation for the generalized incremental knapsack problem. This result is based on a reformulation as a single-machine sequencing problem, which is addressed by blending dynamic programming techniques and the classical Shmoys-Tardos algorithm for the generalized assignment problem. Combined with further enumeration-based self-reinforcing ideas and newly-revealed structural properties of nearly-optimal solutions, we turn our basic algorithm into a quasi-polynomial time approximation scheme (QPTAS). Hence, under widely believed complexity assumptions, this finding rules out the possibility that generalized incremental knapsack is APX-hard.
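A brute-force reference implementation makes the feasibility structure concrete: an item inserted at time t occupies the knapsack in every later period. The instance is hypothetical and the exhaustive search is for intuition only, not one of the paper's algorithms:

```python
from itertools import product

def brute_force_gik(weights, capacities, profit):
    """Exhaustive solver for tiny generalized incremental knapsack
    instances: choose an insertion time t (or None = never) per item;
    an item inserted at t stays in the knapsack for all periods >= t."""
    n, T = len(weights), len(capacities)
    best = 0
    for times in product([None] + list(range(T)), repeat=n):
        load_ok = all(
            sum(weights[i] for i in range(n)
                if times[i] is not None and times[i] <= t) <= capacities[t]
            for t in range(T)
        )
        if load_ok:
            best = max(best, sum(profit[i][times[i]]
                                 for i in range(n) if times[i] is not None))
    return best

# Hypothetical instance: 2 unit-weight items, capacities W_1=1 <= W_2=2,
# profit[i][t] = gain for inserting item i at time t.
print(brute_force_gik([1, 1], [1, 2], [[5, 3], [4, 2]]))  # 7
```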

Data Structures And Algorithms

Approximation algorithms for car-sharing problems

We consider several variants of a car-sharing problem. We are given a number of requests, each consisting of a pick-up location and a drop-off location, a number of cars, and nonnegative, symmetric travel times that satisfy the triangle inequality. Each request needs to be served by a car, which means that a car must first visit the pick-up location of the request, and then visit the drop-off location of the request. Each car can serve two requests. One problem is to serve all requests with the minimum total travel time (called CS_sum), and the other problem is to serve all requests with the minimum total latency (called CS_lat). We also study the special case where the pick-up and drop-off location of a request coincide. We propose two basic algorithms, called the match-and-assign algorithm and the transportation algorithm. We show that the best of the two resulting solutions is a 2-approximation for CS_sum (and a 7/5-approximation for its special case), and a 5/3-approximation for CS_lat (and a 3/2-approximation for its special case); these ratios are better than the ratios of the individual algorithms. Finally, we indicate how our algorithms can be applied to more general settings where each car can serve more than two requests, or where cars have distinct speeds.
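The cost of one car serving two requests can be sketched directly. The helper below (hypothetical, not the paper's match-and-assign or transportation algorithm) enumerates every visiting order in which each pick-up precedes its drop-off:

```python
from itertools import permutations

def serve_two(car, reqs, dist):
    """Cheapest route for one car serving two requests: any visiting
    order is allowed as long as each pick-up precedes its drop-off."""
    stops = [(0, 'p'), (0, 'd'), (1, 'p'), (1, 'd')]
    best = float('inf')
    for order in permutations(stops):
        if order.index((0, 'p')) < order.index((0, 'd')) and \
           order.index((1, 'p')) < order.index((1, 'd')):
            pts = [reqs[i][0] if k == 'p' else reqs[i][1] for i, k in order]
            route = [car] + pts
            best = min(best, sum(dist(route[j], route[j + 1])
                                 for j in range(4)))
    return best

# Points on a line; travel time = distance (symmetric, triangle inequality).
dist = lambda a, b: abs(a - b)
# Car at 0; requests (pick-up, drop-off): (1, 3) and (2, 4).
print(serve_two(0, [(1, 3), (2, 4)], dist))  # 4: route 0->1->2->3->4
```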

Data Structures And Algorithms

Approximation algorithms for connectivity augmentation problems

In Connectivity Augmentation problems we are given a graph H=(V, E_H) and an edge set E on V, and seek a min-size edge set J⊆E such that H∪J has larger edge/node connectivity than H. In the Edge-Connectivity Augmentation problem we need to increase the edge-connectivity by 1. In the Block-Tree Augmentation problem H is connected and H∪J should be 2-connected. In Leaf-to-Leaf Connectivity Augmentation problems every edge in E connects minimal deficient sets. For this version we give a simple combinatorial approximation algorithm with ratio 5/3, improving the previous 1.91 approximation that applies for the general case. We also show by a simple proof that if the Steiner Tree problem admits approximation ratio α then the general version admits approximation ratio 1+ln(4−x)+ϵ, where x is the solution to the equation 1+ln(4−x)=α+(α−1)x. For the currently best value of α=ln4+ϵ this gives ratio 1.942. This is slightly worse than the best ratio 1.91, but has the advantage of using Steiner Tree approximation as a "black box", giving ratio <1.9 if ratio α≤1.35 can be achieved. In the Element Connectivity Augmentation problem we are given a graph G=(V,E), a subset S⊆V, and connectivity requirements {r(u,v) : u,v∈S}. The goal is to find a min-size set J of new edges on S such that for all u,v∈S the graph G∪J contains r(u,v) uv-paths no two of which share an edge or a node in V∖S. The problem is NP-hard even when max_{u,v∈S} r(u,v)=2. We obtain approximation ratio 3/2, improving the previous ratio 7/4.
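For the simplest setting, increasing the edge-connectivity of a connected H from 1 to 2 means making H∪J bridgeless. This small DFS low-link checker (illustration only, assuming the graph is connected) tests whether an augmented multigraph still has a bridge:

```python
def has_bridge(n, edges):
    """DFS low-link test: a connected graph is 2-edge-connected iff it
    has no bridge. Parallel edges are handled via edge indices."""
    adj = [[] for _ in range(n)]
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))
    disc = [None] * n
    low = [0] * n
    found = [False]
    def dfs(u, parent_edge, timer=[0]):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v, idx in adj[u]:
            if idx == parent_edge:
                continue
            if disc[v] is None:
                dfs(v, idx)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:   # edge idx is a bridge
                    found[0] = True
            else:
                low[u] = min(low[u], disc[v])
    dfs(0, -1)
    return found[0]

# Path 0-1-2 (connected H, every edge a bridge); augmenting with the
# edge {0,2} yields a triangle, which is 2-edge-connected.
H = [(0, 1), (1, 2)]
print(has_bridge(3, H))             # True
print(has_bridge(3, H + [(0, 2)]))  # False
```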

Data Structures And Algorithms

Approximation in (Poly-) Logarithmic Space

We develop new approximation algorithms for classical graph and set problems in the RAM model under space constraints. As one of our main results, we devise an algorithm for d-Hitting Set that runs in time n^{O(d^2+d/ε)}, uses O((d^2+d/ε) log n) bits of space, and achieves an approximation ratio of O((d/ε)·n^ε) for any positive ε ≤ 1 and any natural number d. In particular, this yields a factor-O(log n) approximation algorithm which runs in time n^{O(log n)} and uses O(log^2 n) bits of space (for constant d). As a corollary, we obtain similar bounds for Vertex Cover and several graph deletion problems. For bounded-multiplicity problem instances, one can do better. We devise a factor-2 approximation algorithm for Vertex Cover on graphs with maximum degree Δ, and an algorithm for computing maximal independent sets, which both run in time n^{O(Δ)} and use O(Δ log n) bits of space. For the more general d-Hitting Set problem, we devise a factor-d approximation algorithm which runs in time n^{O(dδ^2)} and uses O(dδ^2 log n) bits of space on set families where each element appears in at most δ sets. For Independent Set restricted to graphs with average degree d, we give a factor-(2d) approximation algorithm which runs in polynomial time and uses O(log n) bits of space. We also devise a factor-O(d^2) approximation algorithm for Dominating Set on d-degenerate graphs which runs in time n^{O(log n)} and uses O(log^2 n) bits of space. For d-regular graphs, we show how a known randomized factor-O(log d) approximation algorithm can be derandomized to run in time n^{O(1)} and use O(log n) bits of space. Our results use a combination of ideas from the theory of kernelization, distributed algorithms, and randomized algorithms.
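The classical factor-d guarantee for d-Hitting Set comes from a very simple rule, shown below without the paper's space bounds (which are the hard part): whenever a set is not yet hit, take all of its at most d elements. Any optimal solution must contain at least one of them, so the output is at most d times larger than optimal:

```python
def hitting_set_d_approx(sets):
    """Classical factor-d approximation for d-Hitting Set, where every
    set has size at most d: greedily take all elements of any set that
    the current solution does not yet hit."""
    chosen = set()
    for s in sets:
        if not chosen & set(s):
            chosen |= set(s)
    return chosen

# Vertex Cover is exactly 2-Hitting Set over the edge set of a graph.
edges = [(0, 1), (1, 2), (3, 4)]
cover = hitting_set_d_approx(edges)
assert all(u in cover or v in cover for u, v in edges)
print(sorted(cover))  # [0, 1, 3, 4]
```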

Data Structures And Algorithms

Augmented Sparsifiers for Generalized Hypergraph Cuts

In recent years, hypergraph generalizations of many graph cut problems have been introduced and analyzed as a way to better explore and understand complex systems and datasets characterized by multiway relationships. Recent work has made use of a generalized hypergraph cut function which, for a hypergraph H=(V,E), can be defined by associating each hyperedge e∈E with a splitting function w_e, which assigns a penalty to each way of separating the nodes of e. When each w_e is a submodular cardinality-based splitting function, meaning that w_e(S)=g(|S|) for some concave function g, previous work has shown that a generalized hypergraph cut problem can be reduced to a directed graph cut problem on an augmented node set. However, existing reduction procedures often result in a dense graph, even when the hypergraph is sparse, which leads to slow runtimes for algorithms that run on the reduced graph. We introduce a new framework of sparsifying hypergraph-to-graph reductions, where a hypergraph cut defined by submodular cardinality-based splitting functions is (1+ε)-approximated by a cut on a directed graph. Our techniques are based on approximating concave functions using piecewise linear curves. For ε>0 we need at most O(ε^{-1} |e| log |e|) edges to reduce any hyperedge e, which leads to faster runtimes for approximating generalized hypergraph s-t cut problems. For the machine learning heuristic of a clique splitting function, our approach requires only O(|e| ε^{-1/2} log log(1/ε)) edges. This sparsification leads to faster approximate min s-t graph cut algorithms for certain classes of co-occurrence graphs. Finally, we apply our sparsification techniques to develop approximation algorithms for minimizing sums of cardinality-based submodular functions.
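The core trick, approximating a concave curve by a small number of linear pieces, can be sketched in a few lines. Here g(k)=√k and powers-of-two breakpoints are illustrative choices (not the paper's optimized construction): for concave g, the lower envelope of tangent lines overestimates g, and geometric spacing keeps the relative error bounded:

```python
import math

def tangent_envelope(g, dg, points):
    """Minimum over tangent lines of a concave g at the given points;
    for concave g this is a pointwise overestimate of g."""
    def approx(x):
        return min(g(t) + dg(t) * (x - t) for t in points)
    return approx

# Concave splitting curve g(k) = sqrt(k) with derivative dg; tangents
# taken at geometrically spaced breakpoints 1, 2, 4, ..., 128.
g, dg = math.sqrt, lambda t: 0.5 / math.sqrt(t)
approx = tangent_envelope(g, dg, [2 ** j for j in range(8)])

# A handful of pieces already gives a small multiplicative error.
worst = max(approx(k) / g(k) for k in range(1, 129))
print(worst < 1.07)  # True: within ~6% using only 8 linear pieces
```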

Data Structures And Algorithms

Backtracking algorithms for constructing the Hamiltonian decomposition of a 4-regular multigraph

We consider the Hamiltonian decomposition problem of partitioning a regular multigraph into edge-disjoint Hamiltonian cycles. It is known that verifying vertex nonadjacency in the 1-skeleton of the symmetric and asymmetric traveling salesperson polytopes is an NP-complete problem. On the other hand, a sufficient condition for two vertices to be nonadjacent can be formulated as a combinatorial problem of finding a Hamiltonian decomposition of a 4-regular multigraph. We present two backtracking algorithms for verifying vertex nonadjacency in the 1-skeleton of the traveling salesperson polytope and constructing a Hamiltonian decomposition: one based on simple path extension and one based on a chain edge fixing procedure. In computational experiments on undirected multigraphs, both backtracking algorithms were outperformed by the known general variable neighborhood search algorithm. However, for directed multigraphs, the algorithm based on chain edge fixing showed results comparable with the heuristics on instances where a solution exists, and better results on instances where a Hamiltonian decomposition does not exist.
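A verifier for the object being constructed makes the problem statement concrete. The sketch below checks a candidate decomposition of a 4-regular graph into two Hamiltonian cycles (it is only a checker, not one of the paper's backtracking algorithms); K5 serves as a small example:

```python
from collections import deque

def is_hamiltonian_cycle(n, edges):
    """An edge multiset on n vertices is one Hamiltonian cycle iff it
    has exactly n edges, every vertex has degree 2, and it is connected."""
    if len(edges) != n:
        return False
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    if any(len(adj[v]) != 2 for v in range(n)):
        return False
    seen, q = {0}, deque([0])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return len(seen) == n

def is_hamiltonian_decomposition(n, edges, part):
    """part: indices of edges forming the first cycle; the remaining
    edges form the second. Valid iff both are Hamiltonian cycles."""
    c1 = [e for i, e in enumerate(edges) if i in part]
    c2 = [e for i, e in enumerate(edges) if i not in part]
    return is_hamiltonian_cycle(n, c1) and is_hamiltonian_cycle(n, c2)

# K5 is 4-regular; split its 10 edges into two edge-disjoint 5-cycles.
K5 = [(u, v) for u in range(5) for v in range(u + 1, 5)]
cycle1 = {K5.index(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]}
print(is_hamiltonian_decomposition(5, K5, cycle1))  # True
```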

