Featured Research

Computational Complexity

Hard Problems That Quickly Become Very Easy

A graph class is hereditary if it is closed under vertex deletion. We give examples of NP-hard, PSPACE-complete and NEXPTIME-complete problems that become constant-time solvable for every hereditary graph class that is not equal to the class of all graphs.
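
The definition above is easy to check mechanically. The following Python sketch (my own illustration, not part of the paper's construction) tests the one-step deletion condition on a single graph, using triangle-freeness as a placeholder hereditary property; all function names are hypothetical.

from itertools import combinations

def is_triangle_free(adj):
    """Placeholder hereditary property: no three mutually adjacent vertices."""
    return not any(b in adj[a] and c in adj[a] and c in adj[b]
                   for a, b, c in combinations(adj, 3))

def delete_vertex(adj, v):
    """Induced subgraph obtained by deleting vertex v."""
    return {u: nbrs - {v} for u, nbrs in adj.items() if u != v}

def closed_under_deletion(adj, prop):
    """If prop holds on adj, it must still hold after deleting any one vertex."""
    return (not prop(adj)) or all(prop(delete_vertex(adj, v)) for v in adj)

# A 4-cycle: triangle-free, and so is every vertex-deleted subgraph.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(closed_under_deletion(c4, is_triangle_free))  # True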

Computational Complexity

Hard QBFs for Merge Resolution

We prove the first proof-size lower bounds for the proof system Merge Resolution (MRes [Olaf Beyersdorff et al., 2020]), a refutational proof system for prenex quantified Boolean formulas (QBF) with a CNF matrix. Unlike most QBF resolution systems in the literature, proofs in MRes consist of resolution steps together with information on countermodels, which are syntactically stored in the proofs as merge maps. As demonstrated in [Olaf Beyersdorff et al., 2020], this makes MRes quite powerful: it has strategy extraction by design and admits short proofs for formulas that are hard for classical QBF resolution systems. Here we show the first exponential lower bounds for MRes, thereby uncovering limitations of the system. Technically, the results are either transferred from known circuit-complexity lower bounds (for restricted versions of MRes) or obtained directly by combinatorial arguments (for full MRes). Our results imply that the MRes approach is largely orthogonal to other QBF resolution models such as the QCDCL resolution systems QRes and QURes and the expansion systems ∀Exp+Res and IR.
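
As a concrete, self-contained illustration of what a countermodel is (my own toy example, not taken from the paper, and it does not depict merge maps themselves): the false QBF ∃x ∀u. (x ∨ u) ∧ (¬x ∨ ¬u) has the countermodel u := x, a strategy for the universal player that falsifies the matrix no matter which value the existential player chooses. A few lines of Python verify this:

def matrix(x, u):
    """CNF matrix of the false QBF  exists x  forall u . (x or u) and (not x or not u)."""
    return (x or u) and ((not x) or (not u))

def countermodel(x):
    """Universal strategy u := x, the kind of object MRes stores syntactically."""
    return x

# The strategy falsifies the matrix for every existential move, so the QBF is false.
print(all(not matrix(x, countermodel(x)) for x in (False, True)))  # True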

Computational Complexity

Hard properties with (very) short PCPPs and their applications

We show that there exist properties that are maximally hard for testing, while still admitting PCPPs with a proof size very close to linear. Specifically, for every fixed ℓ, we construct a property P^{(ℓ)} ⊆ {0,1}^n satisfying the following: any testing algorithm for P^{(ℓ)} requires Ω(n) queries, and yet P^{(ℓ)} has a constant-query PCPP whose proof size is O(n · log^{(ℓ)} n), where log^{(ℓ)} denotes the ℓ-times iterated logarithm (e.g., log^{(2)} n = log log n). The best previously known upper bound on the PCPP proof size for a maximally hard-to-test property was O(n · polylog n). As an immediate application, we obtain stronger separations between the standard testing model and both the tolerant testing model and the erasure-resilient testing model: for every fixed ℓ, we construct a property that has a constant-query tester, but requires Ω(n / log^{(ℓ)} n) queries for every tolerant or erasure-resilient tester.
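
The iterated-logarithm notation used above can be pinned down in a few lines of Python (a plain illustration of the notation, not of the paper's construction; base 2 is an arbitrary choice here):

import math

def iterated_log(n, ell):
    """log^(ell) n: apply the (base-2) logarithm ell times."""
    for _ in range(ell):
        n = math.log2(n)
    return n

n = 2 ** 16
print(iterated_log(n, 1))  # 16.0
print(iterated_log(n, 2))  # 4.0  (log log n)
print(iterated_log(n, 3))  # 2.0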

Computational Complexity

Hard satisfiable formulas for DPLL algorithms using heuristics with small memory

The DPLL algorithm for solving the Boolean satisfiability problem (SAT) can be represented as a procedure that, using heuristics A and B, selects a variable x from the input formula φ and a value b, and then runs recursively on the formulas φ[x:=b] and φ[x:=1−b]. Exponential lower bounds on the running time of DPLL algorithms on unsatisfiable formulas follow from lower bounds for tree-like resolution proofs. Lower bounds for satisfiable formulas are also known for some classes of DPLL algorithms, such as "myopic" and "drunken" algorithms. All of these lower bounds apply to classes of DPLL algorithms whose heuristics have limited access to the formula. In this paper we consider DPLL algorithms whose heuristics have unlimited access to the formula but use only small memory. We show that for any pair of heuristics with small memory there exists a family of satisfiable formulas Φ_n such that a DPLL algorithm using these heuristics runs in exponential time on the formulas Φ_n.
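
A minimal sketch of the branching procedure described above, with the two heuristics passed in as parameters (my own simplified rendering; unit propagation and the memory restriction studied in the paper are omitted, and all names are illustrative):

def simplify(clauses, x, b):
    """The formula phi[x := b]: drop satisfied clauses, remove falsified literals."""
    out = []
    for clause in clauses:
        if (x if b else -x) in clause:
            continue                      # clause already satisfied
        reduced = [lit for lit in clause if lit not in (x, -x)]
        if not reduced:
            return None                   # empty clause: this branch is unsatisfiable
        out.append(reduced)
    return out

def dpll(clauses, heuristic_a, heuristic_b):
    """heuristic_a picks a variable x, heuristic_b picks the value b tried first."""
    if clauses is None:
        return False
    if not clauses:
        return True
    x = heuristic_a(clauses)
    b = heuristic_b(clauses, x)
    return (dpll(simplify(clauses, x, b), heuristic_a, heuristic_b) or
            dpll(simplify(clauses, x, not b), heuristic_a, heuristic_b))

# Example: variables are positive integers, -v denotes the negation of v.
first_var = lambda clauses: abs(clauses[0][0])
always_true = lambda clauses, x: True
print(dpll([[1, 2], [-1, 2], [-2]], first_var, always_true))  # False (unsatisfiable)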

Computational Complexity

Hardness Amplification of Optimization Problems

In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π into one large instance of Π such that, given an optimal feasible solution to the larger instance, we can efficiently recover optimal feasible solutions to all k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows: if there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationships between α(n) and t(n), there is a distribution D′ over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D′. As a consequence of this theorem, we show hardness amplification for problems in various classes: NP-hard problems such as Max-Clique, Knapsack, and Max-SAT; problems in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication; and even problems in TFNP such as Factoring and computing a Nash equilibrium.
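
For intuition about direct product feasibility, here is a toy illustration of my own (not the paper's construction): Max-SAT instances over pairwise disjoint variable sets can be aggregated by taking their union, and because the objective decomposes as a sum, an optimal assignment for the union restricts to an optimal assignment for each part.

from itertools import product

def max_sat_value(clauses, variables):
    """Brute-force optimum number of satisfied clauses (toy sizes only)."""
    best = 0
    for bits in product((False, True), repeat=len(variables)):
        assign = dict(zip(variables, bits))
        sat = sum(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)
        best = max(best, sat)
    return best

# Two instances over disjoint variables; aggregation is just their union.
inst1, vars1 = [[1], [-1]], [1]              # optimum 1
inst2, vars2 = [[2, 3], [-2], [-3]], [2, 3]  # optimum 2
combined = inst1 + inst2
# The optimum of the union is the sum of the individual optima.
print(max_sat_value(combined, vars1 + vars2) ==
      max_sat_value(inst1, vars1) + max_sat_value(inst2, vars2))  # True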

Computational Complexity

Hardness of Approximate Nearest Neighbor Search under L-infinity

We show conditional hardness of Approximate Nearest Neighbor Search (ANN) under the ℓ∞ norm with two simple reductions. Our first reduction shows that hardness of a special case of the Shortest Vector Problem (SVP), which captures many provably hard instances of SVP, implies a lower bound for ANN with polynomial preprocessing time under the same norm. Combined with a recent quantitative hardness result for SVP under ℓ∞ (Bennett et al., FOCS 2017), our reduction implies that finding a (1+ε)-approximate nearest neighbor under ℓ∞ with polynomial preprocessing requires near-linear query time, unless the Strong Exponential Time Hypothesis (SETH) is false. This complements the results of Rubinstein (STOC 2018), who showed hardness of ANN under ℓ1, ℓ2, and edit distance. Further improving the approximation factor for which hardness holds, we show that, assuming SETH, near-linear query time is required for any approximation factor less than 3 under ℓ∞. This gives a conditional separation between ANN under the ℓ1/ℓ2 norms and the ℓ∞ norm, since there are sublinear-time algorithms achieving better than 3-approximation for the ℓ1 and ℓ2 norms. Lastly, we show that the approximation factor of 3 is a barrier for any naive gadget reduction from the Orthogonal Vectors problem.
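
For reference, this is the naive linear-scan query under ℓ∞ (Chebyshev distance) whose near-linear cost the lower bound says is essentially unavoidable for approximation factors below 3; the sketch is mine and purely illustrative.

def linf_distance(x, q):
    """Chebyshev / ell_infinity distance between two points."""
    return max(abs(a - b) for a, b in zip(x, q))

def nearest_linf(points, q):
    """Exact nearest neighbor by linear scan: Theta(n * d) per query."""
    return min(points, key=lambda x: linf_distance(x, q))

points = [(0, 0), (3, 1), (1, 4)]
print(nearest_linf(points, (2, 2)))  # (3, 1): distance max(1, 1) = 1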

Computational Complexity

Hardness of Approximation of (Multi-)LCS over Small Alphabet

The problem of finding a longest common subsequence (LCS) is one of the fundamental problems in computer science, with applications in fields such as computational biology, text processing, information retrieval, and data compression. It is well known that the decision version of the problem of finding the length of an LCS of an arbitrary number of input sequences (which we refer to as the Multi-LCS problem) is NP-complete. Jiang and Li [SICOMP'95] showed that if Max-Clique is hard to approximate within a factor of s, then Multi-LCS is also hard to approximate within a factor of Θ(s). By the NP-hardness of approximating Max-Clique, due to Zuckerman [ToC'07], for any constant δ>0 the length of an LCS of an arbitrary number of input sequences of length n each cannot be approximated within an n^{1−δ} factor in polynomial time unless P = NP. However, the reduction of Jiang and Li assumes the alphabet size to be Ω(n), and so far no hardness result is known for approximating Multi-LCS over a sub-linear-sized alphabet. On the other hand, it is easy to get a 1/|Σ|-factor approximation for strings over an alphabet Σ. In this paper, we make significant progress towards proving hardness of approximation over small alphabets by showing a polynomial-time reduction from the well-studied densest k-subgraph problem with perfect completeness to approximating Multi-LCS over an alphabet of size poly(n/k). As a consequence, from the known hardness of the densest k-subgraph problem (e.g., [Manurangsi, STOC'17]) we get that no polynomial-time algorithm can give an n^{−o(1)}-factor approximation of Multi-LCS over an alphabet of size n^{o(1)}, unless the Exponential Time Hypothesis is false.
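
The easy 1/|Σ|-factor approximation mentioned above is the folklore single-symbol strategy: for each symbol c, the string c^m with m equal to the minimum number of occurrences of c over the inputs is a common subsequence, and the best such c gives length at least OPT/|Σ|, because the optimal common subsequence repeats some symbol at least OPT/|Σ| times. A short Python sketch (mine, with illustrative names):

def single_symbol_lcs(strings, alphabet):
    """Common subsequence c^m for the best single symbol c: a 1/|alphabet| approximation."""
    best_c, best_m = None, 0
    for c in alphabet:
        m = min(s.count(c) for s in strings)   # c^m is a subsequence of every string
        if m > best_m:
            best_c, best_m = c, m
    return best_c * best_m if best_c else ""

print(single_symbol_lcs(["abab", "bbaa", "abba"], "ab"))  # "aa" (length 2)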

Computational Complexity

Hardness of Approximation of Euclidean k -Median

The Euclidean k-median problem is defined as follows: given a set X of n points in R^d and an integer k, find a set C ⊂ R^d of k points (called centers) such that the cost function Φ(C,X) ≡ ∑_{x∈X} min_{c∈C} ∥x−c∥_2 is minimized. The Euclidean k-means problem is defined similarly, with squared distance in place of distance in the cost function. Various hardness of approximation results are known for the Euclidean k-means problem; however, no hardness of approximation results were known for the Euclidean k-median problem. In this work, assuming the Unique Games Conjecture (UGC), we provide the first hardness of approximation result for the Euclidean k-median problem. Furthermore, we study hardness of approximation for the Euclidean k-means/k-median problems in the bi-criteria setting, where an algorithm is allowed to choose more than k centers: bi-criteria approximation algorithms may output βk centers (for a constant β>1), and the approximation ratio is computed with respect to the optimal k-means/k-median cost. In this setting, we show the first hardness of approximation result for the Euclidean k-median problem for any β<1.015, assuming UGC. We also show a similar bi-criteria hardness of approximation result for the Euclidean k-means problem with a stronger bound of β<1.28, again assuming UGC.
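
A direct transcription of the two cost functions above into Python (nothing beyond the definitions; the function names are illustrative):

import math

def k_median_cost(centers, points):
    """Phi(C, X): sum over x of the Euclidean distance to the nearest center."""
    return sum(min(math.dist(x, c) for c in centers) for x in points)

def k_means_cost(centers, points):
    """Same, but with squared Euclidean distance."""
    return sum(min(math.dist(x, c) ** 2 for c in centers) for x in points)

X = [(0, 0), (0, 3), (10, 0)]
C = [(0, 1), (10, 0)]
print(k_median_cost(C, X))  # 1 + 2 + 0 = 3
print(k_means_cost(C, X))   # 1 + 4 + 0 = 5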

Computational Complexity

Hardness of Bichromatic Closest Pair with Jaccard Similarity

Consider collections A and B of red and blue sets, respectively. Bichromatic Closest Pair is the problem of finding a pair from A×B that has similarity higher than a given threshold according to some similarity measure. Our focus here is the classic Jaccard similarity |a∩b|/|a∪b| for (a,b) ∈ A×B. We consider the approximate version of the problem, where we are given thresholds j_1 > j_2 and wish to return a pair from A×B with Jaccard similarity higher than j_2 whenever there exists a pair in A×B with Jaccard similarity at least j_1. The classic locality-sensitive hashing (LSH) algorithm of Indyk and Motwani (STOC '98), instantiated with the MinHash LSH function of Broder et al., solves this problem in Õ(n^{2−δ}) time if j_1 ≥ j_2^{1−δ}. In particular, for δ = Ω(1), the approximation ratio j_1/j_2 = 1/j_2^δ increases polynomially in 1/j_2. In this paper we give a corresponding hardness result. Assuming the Orthogonal Vectors Conjecture (OVC), we show that there cannot be a general solution that solves the Bichromatic Closest Pair problem in O(n^{2−Ω(1)}) time for j_1/j_2 = 1/j_2^{o(1)}. Specifically, assuming OVC, we prove that for any δ>0 there exists an ε>0 such that Bichromatic Closest Pair with Jaccard similarity requires time Ω(n^{2−δ}) for any choice of thresholds j_2 < j_1 < 1−δ that satisfy j_1 ≤ j_2^{1−ε}.
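
For background on the MinHash instantiation mentioned above (standard material, not this paper's contribution): under a random permutation of the universe, two sets have the same minimum element with probability exactly |a∩b|/|a∪b|. A small Python estimate, using random priorities in place of hash functions; all names are illustrative.

import random

def jaccard(a, b):
    return len(a & b) / len(a | b)

def minhash_estimate(a, b, num_hashes=200, seed=0):
    """Fraction of random orderings under which a and b agree on the minimum element."""
    rng = random.Random(seed)
    universe = list(a | b)
    agree = 0
    for _ in range(num_hashes):
        order = {x: rng.random() for x in universe}   # random priorities ~ random permutation
        agree += min(a, key=order.get) == min(b, key=order.get)
    return agree / num_hashes

a, b = {1, 2, 3, 4}, {2, 3, 4, 5}
print(jaccard(a, b))             # 0.6
print(minhash_estimate(a, b))    # close to 0.6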

Computational Complexity

Hardness of Bounded Distance Decoding on Lattices in ℓ_p Norms

Bounded Distance Decoding BDD_{p,α} is the problem of decoding a lattice when the target point is promised to be within an α factor of the minimum distance of the lattice, in the ℓ_p norm. We prove that BDD_{p,α} is NP-hard under randomized reductions where α → 1/2 as p → ∞ (and for α = 1/2 when p = ∞), thereby showing the hardness of decoding for distances approaching the unique-decoding radius for large p. We also show fine-grained hardness for BDD_{p,α}. For example, we prove that for all p ∈ [1,∞) \ 2ℤ and constants C > 1, ε > 0, there is no 2^{(1−ε)n/C}-time algorithm for BDD_{p,α} for some constant α (which approaches 1/2 as p → ∞), assuming the randomized Strong Exponential Time Hypothesis (SETH). Moreover, essentially all of our results also hold (under analogous non-uniform assumptions) for BDD with preprocessing, in which unbounded precomputation can be applied to the lattice before the target is available. Compared to prior work on the hardness of BDD_{p,α} by Liu, Lyubashevsky, and Micciancio (APPROX-RANDOM 2008), our results improve the values of α for which the problem is known to be NP-hard for all p > p_1 ≈ 4.2773, and give the very first fine-grained hardness for BDD (in any norm). Our reductions rely on a special family of "locally dense" lattices in ℓ_p norms, which we construct by modifying the integer-lattice sparsification technique of Aggarwal and Stephens-Davidowitz (STOC 2018).
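
To unpack the problem statement (a toy sketch of my own; real BDD instances are far beyond brute force): given a basis B and a target t promised to lie within α times the minimum distance of the lattice in the ℓ_p norm, the task is to find the nearest lattice vector. In tiny dimensions one can simply enumerate small integer coefficient vectors:

from itertools import product

def lp_norm(v, p):
    return max(abs(x) for x in v) if p == float("inf") else sum(abs(x) ** p for x in v) ** (1 / p)

def bdd_brute_force(basis, target, p, radius=3):
    """Closest lattice vector B*z to the target over coefficients z in {-radius..radius}^n."""
    n = len(basis)
    best, best_dist = None, float("inf")
    for z in product(range(-radius, radius + 1), repeat=n):
        v = [sum(z[i] * basis[i][j] for i in range(n)) for j in range(len(basis[0]))]
        d = lp_norm([v[j] - target[j] for j in range(len(target))], p)
        if d < best_dist:
            best, best_dist = v, d
    return best, best_dist

basis = [[2, 0], [1, 2]]          # rows are basis vectors
target = [2.4, 1.9]               # close to the lattice point [3, 2]
print(bdd_brute_force(basis, target, p=2))   # ([3, 2], ~0.61)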

