Featured Research

Computational Complexity

Eliminating Intermediate Measurements in Space-Bounded Quantum Computation

A foundational result in the theory of quantum computation, known as the "principle of safe storage," shows that it is always possible to take a quantum circuit and produce an equivalent circuit that makes all measurements at the end of the computation. While this procedure is time-efficient, meaning that it does not introduce a large overhead in the number of gates, it uses extra ancillary qubits and so is not generally space-efficient. It is quite natural to ask whether it is possible to defer measurements to the end of a quantum computation without increasing the number of ancillary qubits. We give an affirmative answer to this question by exhibiting a procedure to eliminate all intermediate measurements that is simultaneously space-efficient and time-efficient. A key component of our approach, which may be of independent interest, involves showing that the well-conditioned versions of many standard linear-algebraic problems may be solved by a quantum computer in less space than seems possible by a classical computer.
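To make the principle concrete, the following numpy sketch (our illustration of the textbook deferred-measurement trick, not the paper's space-efficient procedure) shows on two qubits that an intermediate measurement followed by a classically controlled gate yields the same output statistics as a CNOT with all measurement deferred to the end:

```python
import numpy as np

# Textbook deferred-measurement principle, illustration only: the paper's
# contribution is achieving such deferral WITHOUT the extra ancilla qubits
# that the generic trick consumes.
#
# Version A: H on qubit 0, measure it, then classically controlled X on qubit 1.
# Version B: H on qubit 0, CNOT(0 -> 1), measure everything at the end.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
ket00 = np.array([1.0, 0, 0, 0])       # |00>, qubit 0 is the high bit

# Version A: intermediate measurement of qubit 0.
state = np.kron(H, I2) @ ket00         # (H (x) I)|00> = (|00> + |10>)/sqrt(2)
p0 = np.linalg.norm(state[:2]) ** 2    # P(qubit 0 = 0)
p1 = np.linalg.norm(state[2:]) ** 2    # P(qubit 0 = 1)
# Outcome 0: qubit 1 untouched -> '00'.  Outcome 1: apply X to qubit 1 -> '11'.
dist_A = {"00": p0, "11": p1}

# Version B: defer the measurement; the classical control becomes a CNOT.
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
state = CNOT @ (np.kron(H, I2) @ ket00)
dist_B = {f"{i:02b}": abs(a) ** 2 for i, a in enumerate(state) if abs(a) > 1e-12}

print(dist_A)  # {'00': 0.5, '11': 0.5}
print(dist_B)  # {'00': 0.5, '11': 0.5}
```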

Computational Complexity

Elimination Distances, Blocking Sets, and Kernels for Vertex Cover

The Vertex Cover problem plays an essential role in the study of polynomial kernelization in parameterized complexity, i.e., the study of provable and efficient preprocessing for NP-hard problems. Motivated by the great variety of positive and negative results for kernelization for Vertex Cover subject to different parameters and graph classes, we seek to unify and generalize them using so-called blocking sets, which have played implicit and explicit roles in many results. We show that in the most-studied setting, parameterized by the size of a deletion set to a specified graph class $C$, bounded minimal blocking set size is necessary but not sufficient for a polynomial kernelization. Under mild technical assumptions, bounded minimal blocking set size is shown to allow an essentially tight efficient reduction in the number of connected components. We then determine the exact maximum size of minimal blocking sets for graphs of bounded elimination distance to any hereditary class $C$, including the case of graphs of bounded treedepth. We get similar but not tight bounds for certain non-hereditary classes $C$, including the class $C_{\mathrm{LP}}$ of graphs where integral and fractional vertex cover size coincide. These bounds allow us to derive polynomial kernels for Vertex Cover parameterized by the size of a deletion set to graphs of bounded elimination distance to, e.g., forest, bipartite, or $C_{\mathrm{LP}}$ graphs.

Computational Complexity

Enumerating maximal consistent closed sets in closure systems

Given an implicational base (a well-known representation of a closure system) together with a binary inconsistency relation over a finite set, we are interested in the problem of enumerating all maximal consistent closed sets (denoted MCCEnum for short). We show that MCCEnum cannot be solved in output-polynomial time unless $P = NP$, even for lower bounded lattices. We give an incremental-polynomial time algorithm to solve MCCEnum for closure systems with constant Carathéodory number. Finally, we prove that in biatomic atomistic closure systems MCCEnum can be solved in output-quasipolynomial time if minimal generators obey an independence condition, which holds in atomistic modular lattices. For closure systems closed under union (i.e., distributive closure systems), MCCEnum has previously been solved by a polynomial delay algorithm.
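As a reference point for the problem statement, this brute-force Python sketch (ours, exponential-time, nothing like the paper's algorithms) enumerates the maximal consistent closed sets of a toy implicational base with a toy inconsistency relation:

```python
from itertools import combinations

# Toy instance (hypothetical data): ground set, implicational base A -> B
# ("if A is contained in S then B is"), and a binary inconsistency relation.
ground = frozenset({1, 2, 3, 4})
implications = [({1, 2}, {3}), ({4}, {2})]
inconsistent = {(1, 4)}               # elements 1 and 4 may not co-occur

def closure(s):
    """Close s under the implicational base by fixpoint iteration."""
    s, changed = set(s), True
    while changed:
        changed = False
        for a, b in implications:
            if set(a) <= s and not set(b) <= s:
                s |= set(b)
                changed = True
    return frozenset(s)

def consistent(s):
    return not any({x, y} <= s for x, y in inconsistent)

# Every closed set is the closure of some subset, so brute force over subsets.
closed_consistent = {closure(s)
                     for r in range(len(ground) + 1)
                     for s in combinations(ground, r)
                     if consistent(closure(s))}

maximal = [s for s in closed_consistent
           if not any(s < t for t in closed_consistent)]
print(sorted(map(sorted, maximal)))   # [[1, 2, 3], [2, 3, 4]]
```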

Computational Complexity

Envy-free cake cutting: A polynomial number of queries with high probability

In this article we propose a probabilistic framework for studying the fair division of a divisible good, e.g. a cake, between $n$ players. Our framework follows the same idea as the "full independence model" used in the study of fair division of indivisible goods. We show that, in this framework, there exists an envy-free division algorithm satisfying the following probability estimate: $P(C(\mu_1,\dots,\mu_n) \ge n^{7+b}) = O(n^{-\frac{b-1}{3}+1+o(1)})$, where $\mu_1,\dots,\mu_n$ correspond to the preferences of the $n$ players, $C(\mu_1,\dots,\mu_n)$ is the number of queries used by the algorithm, and $b > 4$. In particular, this gives $\lim_{n\to+\infty} P(C(\mu_1,\dots,\mu_n) \ge n^{12}) = 0$. It must be noticed that little is known nowadays about the complexity of envy-free division algorithms. Indeed, Procaccia has given a lower bound of $\Omega(n^2)$ and Aziz and Mackenzie have given an upper bound of $n^{n^{n^{n^{n^n}}}}$. As our estimate means that $C(\mu_1,\dots,\mu_n) < n^{12}$ with high probability, this gives new insight into the complexity of envy-free cake cutting algorithms. Our result follows from a study of Webb's algorithm and a theorem of Tao and Vu about the smallest singular value of a random matrix.
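To see how the estimate yields the displayed limit, one can instantiate it at $b = 5$ (a one-line check we add for clarity):

```latex
% Taking b = 5 in the estimate: n^{7+b} = n^{12} and the exponent
% -(b-1)/3 + 1 + o(1) = -1/3 + o(1) is eventually negative, hence
\[
  P\bigl(C(\mu_1,\dots,\mu_n) \ge n^{12}\bigr)
  = O\bigl(n^{-\frac{1}{3} + o(1)}\bigr)
  \xrightarrow{\;n \to \infty\;} 0 .
\]
```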

Computational Complexity

Equation satisfiability in solvable groups

The study of the complexity of the equation satisfiability problem in finite groups was initiated by Goldmann and Russell (2002), who showed that this problem can be solved in polynomial time for nilpotent groups, while it is NP-complete for non-solvable groups. Since then, several results have appeared showing that the problem can be solved in polynomial time in certain solvable groups $G$ having a nilpotent normal subgroup $H$ with nilpotent factor $G/H$. This paper shows that such a normal subgroup must exist in each finite group with equation satisfiability solvable in polynomial time, unless the Exponential Time Hypothesis fails.
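For readers meeting the problem for the first time, this small Python sketch (a toy of ours, not a method from the paper) decides an equation over the symmetric group $S_3$ by exhaustive search; the search space is $|G|^k$ for $k$ variables, and the paper is about which groups avoid such exponential behavior:

```python
from itertools import permutations, product

# Equation satisfiability, brute force: does  x * a * y = b  have a solution
# in S_3 for the fixed constants a, b below? (Toy example; exhaustive search
# over |G|^k tuples is exponential in the number of variables k.)

def compose(p, q):
    """(p o q)(i) = p(q(i)), permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

S3 = list(permutations(range(3)))
a = (1, 0, 2)        # the transposition swapping 0 and 1
b = (1, 2, 0)        # the 3-cycle 0 -> 1 -> 2 -> 0

solutions = [(x, y) for x, y in product(S3, repeat=2)
             if compose(compose(x, a), y) == b]
print(len(solutions), "solutions, e.g.", solutions[0])
```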

Computational Complexity

Equivalences between triangle and range query problems

We define a natural class of range query problems, and prove that all problems within this class have the same time complexity (up to polylogarithmic factors). The equivalence is very general, and even applies to online algorithms. This allows us to obtain new improved algorithms for all of the problems in the class. We then focus on the special case of the problems when the queries are offline and the number of queries is linear. We show that our range query problems are runtime-equivalent (up to polylogarithmic factors) to counting, for each edge $e$ in an $m$-edge graph, the number of triangles through $e$. This natural triangle problem can be solved using the best known triangle counting algorithm, running in $O(m^{2\omega/(\omega+1)}) \le O(m^{1.41})$ time. Moreover, if $\omega = 2$, the $O(m^{2\omega/(\omega+1)})$ running time is known to be tight (within $m^{o(1)}$ factors) under the 3SUM Hypothesis. In this case, our equivalence settles the complexity of the range query problems. Our problems constitute the first equivalence class with this peculiar running time bound. To better understand the complexity of these problems, we also provide a deeper insight into the family of triangle problems, in particular showing black-box reductions between triangle listing and per-edge triangle detection and counting. As a byproduct of our reductions, we obtain a simple triangle listing algorithm matching the state-of-the-art for all regimes of the number of triangles. We also give some not necessarily tight, but still surprising, reductions from variants of matrix products, such as the $(\min,\max)$-product.
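For concreteness, the per-edge triangle counting problem at the center of the equivalence can be solved naively in a few lines (our sketch, nowhere near the $O(m^{2\omega/(\omega+1)})$ bound): for each edge $(u,v)$, the triangles through it are exactly the common neighbors of $u$ and $v$.

```python
from collections import defaultdict

# Naive per-edge triangle counting: for edge (u, v), count |N(u) & N(v)|.
# Toy graph with triangles {0,1,2} and {1,2,3}.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

per_edge = {(u, v): len(adj[u] & adj[v]) for u, v in edges}
print(per_edge)   # edge (1, 2) lies in 2 triangles, every other edge in 1
```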

Computational Complexity

Ergodic Theorems for PSPACE functions and their converses

We initiate the study of effective pointwise ergodic theorems in resource-bounded settings. Classically, the convergence of the ergodic averages for integrable functions can be arbitrarily slow. In contrast, we show that for a class of PSPACE $L^1$ functions, and a class of PSPACE computable measure-preserving ergodic transformations, the ergodic average exists for all PSPACE randoms and is equal to the space average on every EXP random. We establish a partial converse: PSPACE non-randomness can be characterized as non-convergence of ergodic averages. Further, we prove that there is a class of resource-bounded randoms, viz. SUBEXP-space randoms, on which the corresponding ergodic theorem has an exact converse: a point $x$ is SUBEXP-space random if and only if the corresponding effective ergodic theorem holds for $x$.

Computational Complexity

Even faster algorithms for CSAT over supernilpotent algebras

In this paper, two algorithms solving the circuit satisfiability problem over supernilpotent algebras are presented. The first one is deterministic and faster than the fastest previous algorithm, presented by Aichinger. The second one is probabilistic, with linear time complexity. Applying the former algorithm to finite groups yields a time complexity that is usually lower than that of the previously best algorithm (given by Földvári), and applying the latter leads to the corollary that the circuit satisfiability problem for a group $G$ is either tractable in probabilistic linear time, if $G$ is nilpotent, or NP-complete, if $G$ fails to be nilpotent. The results are obtained by translating equations between polynomials over supernilpotent algebras into bounded-degree polynomial equations over finite fields.
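The flavor of that final step, rewriting circuit constraints as polynomial equations over a finite field, can be seen in a standard GF(2) arithmetization (our generic illustration; the paper's translation for supernilpotent algebras is more delicate and yields bounded-degree polynomials):

```python
from itertools import product

# Over GF(2), AND is multiplication and XOR is addition, so the circuit
# constraint  (x AND y) XOR z = 1  becomes the polynomial equation
# x*y + z = 1 (mod 2). Circuit satisfiability becomes root-finding.
def poly(x, y, z):
    return (x * y + z) % 2

solutions = [v for v in product((0, 1), repeat=3) if poly(*v) == 1]
print(solutions)   # [(0, 0, 1), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
```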

Computational Complexity

Explicit Designs and Extractors

We give significantly improved explicit constructions of three related pseudorandom objects.

1. Extremal designs: An $(n,r,s)$-design, or $(n,r,s)$-partial Steiner system, is an $r$-uniform hypergraph over $n$ vertices with pairwise hyperedge intersections of size $<s$ (a small checker for this condition appears after this list). For all constants $r \ge s \in \mathbb{N}$ with $r$ even, we explicitly construct $(n,r,s)$-designs $(G_n)_{n \in \mathbb{N}}$ with independence number $\alpha(G_n) \le O(n^{\frac{2(r-s)}{r}})$. This gives the first derandomization of a result by Rödl and Šinajová (Random Structures & Algorithms, 1994).

2. Extractors for adversarial sources: By combining our designs with leakage-resilient extractors (Chattopadhyay et al., FOCS 2020), we establish a new, simple framework for extracting from adversarial sources of locality 0. As a result, we obtain significantly improved low-error extractors for these sources. For any constant $\delta > 0$, we extract from $(N, K, n, \mathrm{polylog}(n))$-adversarial sources of locality 0, given just $K \ge N^{\delta}$ good sources. The previous best result (Chattopadhyay et al., STOC 2020) required $K \ge N^{1/2 + o(1)}$.

3. Extractors for small-space sources: Using a known reduction to adversarial sources, we immediately obtain improved low-error extractors for space-$s$ sources over $n$ bits that require entropy $k \ge n^{1/2+\delta} \cdot s^{1/2-\delta}$, whereas the previous best result (Chattopadhyay et al., STOC 2020) required $k \ge n^{2/3+\delta} \cdot s^{1/3-\delta}$. On the other hand, using a new reduction from small-space sources to affine sources, we obtain near-optimal extractors for small-space sources in the polynomial error regime. Our extractors require just $k \ge s \cdot \log^{C} n$ entropy for some constant $C$, which is an exponential improvement over the previous best result, which required $k \ge s^{1.1} \cdot 2^{\log^{0.51} n}$ (Chattopadhyay and Li, STOC 2016).
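To pin down the object in item 1, here is a small checker for the $(n,r,s)$-design condition along with a toy $(6,3,2)$-partial Steiner system (our example, unrelated to the paper's construction):

```python
from itertools import combinations

def is_design(n, r, s, hyperedges):
    """Check the (n, r, s)-design condition: r-uniform hypergraph on
    vertices {0, ..., n-1} with pairwise hyperedge intersections of size < s."""
    uniform = all(len(e) == r and e <= set(range(n)) for e in hyperedges)
    small_overlap = all(len(e & f) < s for e, f in combinations(hyperedges, 2))
    return uniform and small_overlap

# Four triples on six vertices, pairwise sharing at most one vertex.
edges = [{0, 1, 2}, {0, 3, 4}, {1, 3, 5}, {2, 4, 5}]
print(is_design(6, 3, 2, edges))   # True
```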

Computational Complexity

Explicit SoS lower bounds from high-dimensional expanders

We construct an explicit family of 3XOR instances which is hard for $O(\sqrt{\log n})$ levels of the Sum-of-Squares hierarchy. In contrast to earlier constructions, which involve a random component, our systems can be constructed explicitly in deterministic polynomial time. Our construction is based on the high-dimensional expanders devised by Lubotzky, Samuels and Vishne, known as LSV complexes or Ramanujan complexes, and our analysis is based on two notions of expansion for these complexes: cosystolic expansion, and a local isoperimetric inequality due to Gromov. Our construction offers an interesting contrast to the recent work of Alev, Jeronimo and the last author (FOCS 2019). They showed that 3XOR instances in which the variables correspond to vertices in a high-dimensional expander are easy to solve. In contrast, in our instances the variables correspond to the edges of the complex.

