Featured Research

Computational Complexity

Lower Bounds on Dynamic Programming for Maximum Weight Independent Set

We prove lower bounds on pure dynamic programming algorithms for maximum weight independent set (MWIS). We model such algorithms as tropical circuits, i.e., circuits that compute with max and + operations. For a graph $G$, an MWIS-circuit of $G$ is a tropical circuit whose inputs correspond to vertices of $G$ and which computes the weight of a maximum weight independent set of $G$ for any assignment of weights to the inputs. We show that if $G$ has treewidth $w$ and maximum degree $d$, then any MWIS-circuit of $G$ has $2^{\Omega(w/d)}$ gates, and that if $G$ is planar, or more generally $H$-minor-free for any fixed graph $H$, then any MWIS-circuit of $G$ has $2^{\Omega(w)}$ gates. An MWIS-formula is an MWIS-circuit in which each gate has fan-out at most one. We show that if $G$ has treedepth $t$ and maximum degree $d$, then any MWIS-formula of $G$ has $2^{\Omega(t/d)}$ gates. It follows that treewidth characterizes optimal MWIS-circuits up to polynomials for all bounded-degree graphs and $H$-minor-free graphs, and treedepth characterizes optimal MWIS-formulas up to polynomials for all bounded-degree graphs.
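
As a minimal illustration of the model (ours, not the paper's construction): the textbook dynamic program for MWIS on a path uses only max and + on the input weights, so unrolling its loop yields exactly a tropical circuit, i.e., an MWIS-circuit of the path.

```python
# A minimal sketch, assuming a path graph v_1 - v_2 - ... - v_n (our toy
# example, not from the paper). The only operations applied to the weights
# are + and max, so the unrolled computation is a tropical (MWIS-)circuit.

def mwis_weight_path(weights):
    """Weight of a maximum weight independent set of a path graph."""
    take, skip = 0, 0  # best weight so far, last vertex taken / skipped
    for w in weights:
        take, skip = skip + w, max(take, skip)
    return max(take, skip)

print(mwis_weight_path([3, 5, 4, 1, 6]))  # 13, achieved by {v_1, v_3, v_5}
```

Treewidth-based dynamic programming generalizes this loop; the paper's lower bounds say that for bounded-degree graphs, no pure max/+ computation can do essentially better than $2^{\Omega(w/d)}$ gates.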

Computational Complexity

Lower Bounds on the Running Time of Two-Way Quantum Finite Automata and Sublogarithmic-Space Quantum Turing Machines

The two-way finite automaton with quantum and classical states (2QCFA), defined by Ambainis and Watrous, is a model of quantum computation whose quantum part is extremely limited; however, as they showed, 2QCFA are surprisingly powerful: a 2QCFA with only a single qubit can recognize the language $L_{pal} = \{w \in \{a,b\}^* : w \text{ is a palindrome}\}$ with bounded error in expected time $2^{O(n)}$ on inputs of length $n$. We prove that their result essentially cannot be improved upon: a 2QCFA (of any size) cannot recognize $L_{pal}$ with bounded error in expected time $2^{o(n)}$. To our knowledge, this is the first example of a language that can be recognized with bounded error by a 2QCFA in exponential time but not in subexponential time. Moreover, we prove that a quantum Turing machine (QTM) running in space $o(\log n)$ and expected time $2^{n^{1-\Omega(1)}}$ cannot recognize $L_{pal}$ with bounded error; again, this is the first lower bound of its kind. Far more generally, we establish a lower bound on the running time of any 2QCFA or $o(\log n)$-space QTM that recognizes any language $L$, in terms of a natural "hardness measure" of $L$. This allows us to exhibit a large family of languages for which we have asymptotically matching lower and upper bounds on the running time of any such 2QCFA or QTM recognizer.
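
For concreteness, the language in question is trivial classically when space is unrestricted; the content of the paper is what a machine with only constant-size quantum memory and a two-way read head must pay in time. A classical sketch of ours, not the 2QCFA construction:

```python
# Our trivial classical membership test for L_pal; a 2QCFA cannot store the
# input like this, and by the paper it needs expected time 2^{Theta(n)} to
# recognize L_pal with bounded error.

def in_L_pal(w: str) -> bool:
    """Membership in L_pal = {w in {a, b}* : w is a palindrome}."""
    assert set(w) <= {"a", "b"}
    return w == w[::-1]

print(in_L_pal("abba"), in_L_pal("abab"))  # True False
```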

Computational Complexity

Lower bounds for algebraic machines, semantically

This paper presents a new semantic method for proving lower bounds in computational complexity. We use it to prove that maxflow, a PTIME-complete problem, is not computable in polylogarithmic time on parallel random access machines (PRAMs) working with integers, showing that $\mathrm{NC}_{\mathbb{Z}} \neq \mathrm{PTIME}$, where $\mathrm{NC}_{\mathbb{Z}}$ is the complexity class defined by such machines and PTIME is the standard class of problems computable in polynomial time (on, say, a Turing machine). Beyond this new separation result, we show that our method captures previous lower-bound results from the literature: Steele and Yao's lower bounds for algebraic decision trees, Ben-Or's lower bounds for algebraic computation trees, Cucker's proof that NC is not equal to PTIME on the reals, and Mulmuley's lower bounds for "PRAMs without bit operations".

Computational Complexity

Machinery for Proving Sum-of-Squares Lower Bounds on Certification Problems

In this paper, we construct general machinery for proving Sum-of-Squares lower bounds on certification problems by generalizing the techniques used by Barak et al. [FOCS 2016] to prove Sum-of-Squares lower bounds for planted clique. Using this machinery, we prove degree-$n^{\epsilon}$ Sum-of-Squares lower bounds for tensor PCA, the Wishart model of sparse PCA, and a variant of planted clique which we call planted slightly denser subgraph.
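
To make "certification" concrete, here is a toy degree-4 example of the kind of object these lower bounds concern (our own illustration, unrelated to the paper's instances): a polynomial is certified nonnegative by exhibiting a positive semidefinite Gram matrix, and an SoS lower bound says that no such low-degree certificate exists for the planted instances.

```python
import numpy as np

# Toy SoS certificate (our illustration): p(x) = x^4 + 2x^2 + 1 is a sum of
# squares because p(x) = m(x)^T Q m(x) for the PSD matrix Q below, with
# monomial basis m(x) = (1, x, x^2). Here Q is hand-picked (p = (x^2 + 1)^2);
# SoS solvers find such a Q by semidefinite programming.

Q = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

assert np.all(np.linalg.eigvalsh(Q) >= -1e-9)  # Q is PSD

for x in np.linspace(-2.0, 2.0, 9):  # spot-check the Gram identity
    m = np.array([1.0, x, x * x])
    assert abs(m @ Q @ m - (x**4 + 2 * x**2 + 1)) < 1e-9
print("Q certifies p as a sum of squares")
```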

Computational Complexity

Mad Science is Provably Hard: Puzzles in Hearthstone's Boomsday Lab are NP-hard

We consider the computational complexity of winning this turn (mate-in-1, or "finding lethal") in Hearthstone, as well as several other single-turn puzzle types introduced in the Boomsday Lab expansion. We consider three natural generalizations of Hearthstone (in which hand size, board size, and deck size scale) and prove that the various puzzle types in each generalization are NP-hard.

Computational Complexity

Maximizing coverage while ensuring fairness: a tale of conflicting objectives

Ensuring fairness in computational problems has emerged as a key topic in recent years, buoyed by considerations of equitable resource distribution and social justice. Fairness can be incorporated into computational problems from several perspectives, such as optimization, game-theoretic, or machine learning frameworks. In this paper we address the incorporation of fairness from a combinatorial optimization perspective. We formulate a combinatorial optimization framework, suitable for analysis by researchers in approximation algorithms and related areas, that incorporates fairness in maximum coverage problems as an interplay between two conflicting objectives. Fairness is imposed on coverage through coloring constraints that minimize the discrepancies between the numbers of elements of different colors covered by the selected sets; this is in contrast to the usual discrepancy minimization problems studied extensively in the literature, where the (usually two) colors are not given a priori but must be selected to minimize the maximum color discrepancy of each individual set. Our main results are a set of randomized and deterministic approximation algorithms that simultaneously approximate both fairness and coverage in this framework.
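
A toy instance showing the conflict (our own example and brute force, not the paper's algorithms): with a budget of one set, the best coverage and the best color balance are achieved by different choices.

```python
from itertools import combinations

# Our toy brute force for the framework above: enumerate all k-subsets of
# the sets and report total coverage together with the color discrepancy,
# i.e., the gap between the numbers of covered elements of the best- and
# worst-covered colors.

def coverage_and_discrepancy(chosen_sets, colors):
    covered = set().union(*chosen_sets) if chosen_sets else set()
    counts = {c: 0 for c in set(colors.values())}
    for e in covered:
        counts[colors[e]] += 1
    return len(covered), max(counts.values()) - min(counts.values())

sets = [{"r1", "r2", "r3"}, {"r1", "b1"}]
colors = {"r1": "red", "r2": "red", "r3": "red", "b1": "blue"}
for combo in combinations(sets, 1):
    cov, disc = coverage_and_discrepancy(list(combo), colors)
    print([sorted(s) for s in combo], "coverage:", cov, "discrepancy:", disc)
# Picking {r1, r2, r3} maximizes coverage (3) but has discrepancy 3;
# picking {r1, b1} covers only 2 elements but has discrepancy 0.
```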

Computational Complexity

Maximum cut on interval graphs of interval count four is NP-complete

The computational complexity of the MaxCut problem restricted to interval graphs has been open since the 1980s, being one of the problems proposed by Johnson in his Ongoing Guide to NP-completeness, and was settled as NP-complete only recently by Adhikary, Bose, Mukherjee, and Roy. On the other hand, many flawed proofs of polynomiality for MaxCut on the more restrictive class of proper/unit interval graphs (i.e., graphs with interval count 1) have been presented over the years, and the classification of that problem remains unknown. In this paper, we present the first NP-completeness proof for MaxCut restricted to interval graphs with bounded interval count, namely graphs with interval count 4.
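
The objects involved are easy to state (our illustrative brute force below, not the paper's reduction): an interval graph has one vertex per interval and an edge per intersecting pair, and its interval count is the number of distinct interval lengths needed to represent it; the hardness result is for representations with 4 lengths.

```python
# Our exhaustive sketch: build the interval graph of a small interval family
# and compute MaxCut by trying every bipartition. Exponential, for intuition
# only; the paper shows the problem is NP-complete already at interval
# count 4.

intervals = [(0, 2), (1, 3), (2, 6), (5, 7)]  # lengths {2, 4}: interval count 2
n = len(intervals)
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if intervals[i][0] <= intervals[j][1] and intervals[j][0] <= intervals[i][1]]

def cut_size(mask):
    """Number of edges crossing the bipartition encoded by the bitmask."""
    return sum(((mask >> i) & 1) != ((mask >> j) & 1) for i, j in edges)

best = max(range(1 << n), key=cut_size)
print("edges:", edges, "max cut:", cut_size(best))  # max cut: 3
```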

Computational Complexity

Metric Dimension Parameterized by Treewidth

A resolving set $S$ of a graph $G$ is a subset of its vertices such that no two vertices of $G$ have the same distance vector to $S$. The Metric Dimension problem asks for a resolving set of minimum size, and in its decision form, for a resolving set of size at most some specified integer. This problem is NP-complete, and remains so in very restricted classes of graphs. It is also W[2]-complete with respect to the size of the solution. Metric Dimension has proven elusive on graphs of bounded treewidth. On the algorithmic side, a polynomial-time algorithm is known for trees, and even for outerplanar graphs, but the general case of treewidth at most two is open. On the complexity side, no parameterized hardness was known. This has led several papers on the topic to ask for the parameterized complexity of Metric Dimension with respect to treewidth. We provide a first answer to this question. We show that Metric Dimension parameterized by the treewidth of the input graph is W[1]-hard. More precisely, we prove that, unless the Exponential Time Hypothesis fails, there is no algorithm solving Metric Dimension in time $f(\mathrm{pw})\, n^{o(\mathrm{pw})}$ on $n$-vertex graphs of constant degree, where $\mathrm{pw}$ is the pathwidth of the input graph and $f$ is any computable function. This is in stark contrast with an FPT algorithm of Belmonte et al. [SIAM J. Discrete Math. '17] with respect to the combined parameter $\mathrm{tl} + \Delta$, where $\mathrm{tl}$ is the tree-length and $\Delta$ the maximum degree of the input graph.
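
The definition is easy to check by brute force (our sketch, not the paper's reduction): compute all-pairs distances and test whether a candidate set separates every pair of vertices.

```python
from itertools import combinations

# Our brute-force metric dimension for a small graph: S resolves G iff the
# distance vectors (d(v, s))_{s in S} are pairwise distinct over vertices v.

def bfs_dist(adj, src):
    dist, frontier = {src: 0}, [src]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

def metric_dimension(adj):
    verts = sorted(adj)
    dist = {v: bfs_dist(adj, v) for v in verts}
    for k in range(1, len(verts) + 1):
        for S in combinations(verts, k):
            vectors = {tuple(dist[v][s] for s in S) for v in verts}
            if len(vectors) == len(verts):  # all distance vectors distinct
                return k, S

# 4-cycle: no single vertex resolves it, but two adjacent vertices do.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(metric_dimension(c4))  # (2, (0, 1))
```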

Computational Complexity

Monochromatic Triangles, Intermediate Matrix Products, and Convolutions

The most studied linear algebraic operation, matrix multiplication, has surprisingly fast $O(n^{\omega})$-time algorithms, for $\omega < 2.373$. On the other hand, the $(\min,+)$ matrix product, which is at the heart of many fundamental graph problems such as APSP, has received only minor improvements over its brute-force cubic running time and is widely conjectured to require $n^{3-o(1)}$ time. There is a plethora of matrix products and graph problems whose complexity seems to lie in the middle of these two problems. For instance, the Min-Max matrix product, the Minimum Witness matrix product, APSP in directed unweighted graphs, and determining whether an edge-colored graph contains a monochromatic triangle can all be solved in $\tilde{O}(n^{(3+\omega)/2})$ time. A similar phenomenon occurs for convolution problems, where analogous intermediate problems can be solved in $\tilde{O}(n^{1.5})$ time. Can one improve upon the running times for these intermediate problems, in either the matrix product or the convolution world? Or, alternatively, can one relate these problems to each other and to other key problems in a meaningful way? This paper makes progress on these questions by providing a network of fine-grained reductions. We show, for instance, that APSP in directed unweighted graphs and the Minimum Witness product can be reduced to both the Min-Max product and a variant of the monochromatic triangle problem. We also show that a natural convolution variant of monochromatic triangle is fine-grained equivalent to the famous 3SUM problem. As this variant is solvable in $O(n^{1.5})$ time and 3SUM is in $O(n^2)$ time (and is conjectured to require $n^{2-o(1)}$ time), our result gives the first fine-grained equivalence between natural problems of different running times.
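
As a concrete example of an "intermediate" product, here is the Min-Max product written out naively (our brute force; the paper's subject is relating the $\tilde{O}(n^{(3+\omega)/2})$-time problems to one another, not this cubic loop).

```python
import numpy as np

# Our naive Min-Max matrix product: C[i][j] = min over k of
# max(A[i][k], B[k][j]). The brute force below is cubic; per the abstract,
# O~(n^{(3+omega)/2}) is the best known running time.

def min_max_product(A, B):
    n, m = A.shape[0], B.shape[1]
    C = np.empty((n, m))
    for i in range(n):
        for j in range(m):
            C[i, j] = np.min(np.maximum(A[i, :], B[:, j]))
    return C

A = np.array([[1, 5], [2, 9]])
B = np.array([[7, 3], [4, 8]])
print(min_max_product(A, B))  # [[5. 3.] [7. 3.]]
```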

Computational Complexity

Monochromatic Triangles, Triangle Listing and APSP

One of the main hypotheses in fine-grained complexity is that All-Pairs Shortest Paths (APSP) for $n$-node graphs requires $n^{3-o(1)}$ time. Another famous hypothesis is that the 3SUM problem for $n$ integers requires $n^{2-o(1)}$ time. Although there are no direct reductions between 3SUM and APSP, it is known that they are related: there is a problem, $(\min,+)$-convolution, that reduces in a fine-grained way to both, and a problem, Exact Triangle, that both fine-grained reduce to. In this paper we find more relationships between these two problems and other basic problems. Pătraşcu had shown that under the 3SUM hypothesis, the All-Edges Sparse Triangle problem in $m$-edge graphs requires $m^{4/3-o(1)}$ time. The latter problem asks to determine, for every edge $e$, whether $e$ is in a triangle. It is equivalent to the problem of listing $m$ triangles in an $m$-edge graph where $m = \tilde{O}(n^{1.5})$, and can be solved in $O(m^{1.41})$ time [Alon et al. '97] with the current matrix multiplication bounds, and in $\tilde{O}(m^{4/3})$ time if $\omega = 2$. We show that one can reduce Exact Triangle to All-Edges Sparse Triangle, showing that All-Edges Sparse Triangle (and hence Triangle Listing) requires $m^{4/3-o(1)}$ time also assuming the APSP hypothesis. This allows us to provide APSP-hardness for many dynamic problems that were previously known to be hard under the 3SUM hypothesis. We also consider the previously studied All-Edges Monochromatic Triangle problem. Via work of [Lincoln et al. '20], our result on All-Edges Sparse Triangle implies that if the All-Edges Monochromatic Triangle problem has an $O(n^{2.5-\epsilon})$-time algorithm for some $\epsilon > 0$, then both the APSP and 3SUM hypotheses are false. We also connect the problem to other "intermediate" problems whose runtimes lie between $O(n^{\omega})$ and $O(n^3)$, such as the Max-Min product problem.
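
The central problem here is simple to state and to solve naively (our sketch below; the abstract's content is the conditional $m^{4/3-o(1)}$ lower bound, not this loop).

```python
# Our naive All-Edges Sparse Triangle: for every edge (u, v), report whether
# it lies in a triangle, via adjacency-set intersection. Per the abstract,
# m^{4/3 - o(1)} is a lower bound under both the 3SUM and APSP hypotheses,
# with O~(m^{4/3}) achievable if omega = 2.

def all_edges_triangle(n, edges):
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return {(u, v): bool(adj[u] & adj[v]) for u, v in edges}

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(all_edges_triangle(4, edges))
# {(0, 1): True, (1, 2): True, (0, 2): True, (2, 3): False}
```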

