Featured Research

Computational Complexity

A Lower Bound for Polynomial Calculus with Extension Rule

In this paper we study an extension of the Polynomial Calculus proof system in which we can introduce new variables and take square roots. We prove that an instance of the subset-sum principle, the binary value principle, requires refutations of exponential bit size over the rationals in this system. Part and Tzameret proved an exponential lower bound on the size of Res-Lin (Resolution over linear equations) refutations of the binary value principle. We show that our system p-simulates Res-Lin, and thus we obtain an alternative exponential lower bound on the size of Res-Lin refutations of the binary value principle.
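
For reference, one common formulation of the binary value principle (our phrasing, not quoted from the paper) asserts that the following system has no rational solution, since the left-hand side is non-negative on every Boolean assignment:

$$x_1 + 2x_2 + 4x_3 + \cdots + 2^{n-1}x_n = -1, \qquad x_i^2 - x_i = 0 \quad (1 \le i \le n).$$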

Read more
Computational Complexity

A Lower Bound on Determinantal Complexity

The determinantal complexity of a polynomial $P \in \mathbb{F}[x_1, \ldots, x_n]$ over a field $\mathbb{F}$ is the dimension of the smallest matrix $M$ whose entries are affine functions in $\mathbb{F}[x_1, \ldots, x_n]$ such that $P = \det(M)$. We prove that the determinantal complexity of the polynomial $\sum_{i=1}^{n} x_i^n$ is at least $1.5n - 3$. For every $n$-variate polynomial of degree $d$, the determinantal complexity is trivially at least $d$, and it is a long-standing open problem to prove a lower bound which is super-linear in $\max\{n, d\}$. Our result is the first lower bound for any explicit polynomial which is bigger by a constant factor than $\max\{n, d\}$, and improves upon the prior best bound of $n + 1$, proved by Alper, Bogart and Velasco [ABV17] for the same polynomial.
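
For intuition, a small illustrative example of a determinantal representation (not from the paper): the entries below are affine, and

$$x^2 + y^2 = \det\begin{pmatrix} x & y \\ -y & x \end{pmatrix},$$

so the determinantal complexity of $x^2 + y^2$ is at most 2.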

Read more
Computational Complexity

A Multistage View on 2-Satisfiability

We study $q$-SAT in the multistage model, focusing on the linear-time-solvable 2-SAT. Here, given a sequence of $q$-CNF formulas and a non-negative integer $d$, the question is whether there is a sequence of satisfying truth assignments, one per formula, such that for every two consecutive truth assignments, the number of variables whose values changed is at most $d$. We prove that Multistage 2-SAT is NP-hard even in quite restricted cases. Moreover, we present parameterized algorithms (including kernelization) for Multistage 2-SAT and prove them to be asymptotically optimal.
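
To illustrate the multistage constraint, here is a minimal Python checker for a candidate solution (a hypothetical helper written for this summary, not code from the paper):

    def satisfies(formula, assignment):
        """Every 2-clause must contain at least one true literal.
        A literal (v, sign) is true iff assignment[v] == sign."""
        return all(
            any(assignment[v] == sign for (v, sign) in clause)
            for clause in formula
        )

    def is_multistage_solution(formulas, assignments, d):
        """Each assignment satisfies its stage's formula, and any two
        consecutive assignments differ on at most d variables."""
        if len(formulas) != len(assignments):
            return False
        if not all(satisfies(f, a) for f, a in zip(formulas, assignments)):
            return False
        return all(
            sum(a[v] != b[v] for v in a) <= d
            for a, b in zip(assignments, assignments[1:])
        )

    # Two stages over variables {x, y} with change budget d = 1.
    f1 = [[("x", True), ("y", True)]]    # clause: x or y
    f2 = [[("x", False), ("y", True)]]   # clause: not-x or y
    a1 = {"x": True, "y": False}
    a2 = {"x": False, "y": False}
    print(is_multistage_solution([f1, f2], [a1, a2], d=1))  # True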

Read more
Computational Complexity

A New Fast Computation of a Permanent

This paper proposes a general algorithm called Store-zechin for quickly computing the permanent of an arbitrary square matrix. Its key ideas are storage, multiplexing, and recursion: during the recursion, sub-terms that have already been calculated are not recomputed but are substituted directly with the stored results. The new algorithm makes full use of computer memory and stored data to speed up the computation of a permanent. Analysis shows that computing the permanent of an $n \times n$ matrix with Store-zechin requires $(2^{n-1} - 1)n$ multiplications and $2^{n-1}(n - 2) + 1$ additions, whereas the Ryser algorithm requires $(2^n - 1)n + 1$ multiplications and $(2^n - n)(n + 1) - 2$ additions, and the R-N-W algorithm requires $2^{n-1}n + (n + 2)$ multiplications and $2^{n-1}(n + 1) + (n^2 - n - 1)$ additions. Store-zechin therefore outperforms the latter two algorithms and has better application prospects.
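
The store-and-reuse idea is reminiscent of a standard memoized expansion of the permanent over column subsets; the following minimal Python sketch illustrates that general technique (our illustration under that assumption, not the paper's algorithm):

    from functools import lru_cache

    def permanent(M):
        """Permanent via row expansion, memoized on the set of columns
        still available so shared sub-terms are computed only once."""
        n = len(M)

        @lru_cache(maxsize=None)
        def perm(cols):
            if not cols:
                return 1
            row = n - len(cols)  # expand along the next unused row
            return sum(M[row][c] * perm(cols - {c}) for c in cols)

        return perm(frozenset(range(n)))

    print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10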

Read more
Computational Complexity

A New Minimax Theorem for Randomized Algorithms

The celebrated minimax principle of Yao (1977) says that for any Boolean-valued function f with finite domain, there is a distribution μ over the domain of f such that computing f to error ϵ against inputs from μ is just as hard as computing f to error ϵ on worst-case inputs. Notably, however, the distribution μ depends on the target error level ϵ: the hard distribution which is tight for bounded error might be trivial to solve to small bias, and the hard distribution which is tight for a small bias level might be far from tight for bounded error levels. In this work, we introduce a new type of minimax theorem which can provide a hard distribution μ that works for all bias levels at once. We show that this works for randomized query complexity, randomized communication complexity, some randomized circuit models, quantum query and communication complexities, approximate polynomial degree, and approximate logrank. We also prove an improved version of Impagliazzo's hardcore lemma. Our proofs rely on two innovations over the classical approach of using von Neumann's minimax theorem or linear programming duality. First, we use Sion's minimax theorem to prove a minimax theorem for ratios of bilinear functions representing the cost and score of algorithms. Second, we introduce a new way to analyze low-bias randomized algorithms by viewing them as "forecasting algorithms" evaluated by a proper scoring rule. The expected score of the forecasting version of a randomized algorithm appears to be a more fine-grained way of analyzing the bias of the algorithm. We show that such expected scores have many elegant mathematical properties: for example, they can be amplified linearly instead of quadratically. We anticipate that forecasting algorithms will find use in future work in which a fine-grained analysis of small-bias algorithms is required.
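
As background on the forecasting view, a "proper" scoring rule rewards probability forecasts so that reporting the true probability maximizes the expected score. The sketch below uses a rescaled Brier score (a standard example chosen for illustration; the paper's exact scoring rule may differ):

    def brier_score(forecast, outcome):
        """Quadratic (Brier-type) score scaled to [-1, 1] for a binary
        outcome: a confident correct forecast earns 1, a confident
        wrong one earns -1."""
        p = forecast if outcome == 1 else 1.0 - forecast
        return 1.0 - 2.0 * (1.0 - p) ** 2

    def expected_score(forecast, true_p):
        """Expected score when the outcome is 1 with probability true_p."""
        return (true_p * brier_score(forecast, 1)
                + (1 - true_p) * brier_score(forecast, 0))

    # Propriety: the expectation is maximized by forecasting the truth.
    for q in [0.4, 0.5, 0.6, 0.7, 0.8]:
        print(q, round(expected_score(q, true_p=0.6), 3))
    # The maximum is attained at q = 0.6.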

Read more
Computational Complexity

A Note on the Concrete Hardness of the Shortest Independent Vectors Problem in Lattices

Blömer and Seifert showed that $\mathrm{SIVP}_2$ is NP-hard to approximate by giving a reduction from $\mathrm{CVP}_2$ to $\mathrm{SIVP}_2$ for constant approximation factors, as long as the CVP instance has a certain property. In order to formally define this requirement on the CVP instance, we introduce a new computational problem called the Gap Closest Vector Problem with Bounded Minima. We adapt the proof of Blömer and Seifert to show a reduction from the Gap Closest Vector Problem with Bounded Minima to SIVP in any $\ell_p$ norm for some constant approximation factor greater than 1. In a recent result, Bennett, Golovnev and Stephens-Davidowitz showed that, under Gap-ETH, there is no $2^{o(n)}$-time algorithm for approximating $\mathrm{CVP}_p$ up to some constant factor $\gamma \ge 1$ for any $1 \le p \le \infty$. We observe that the reduction in their paper can be viewed as a reduction from Gap3SAT to the Gap Closest Vector Problem with Bounded Minima. This, together with the above-mentioned reduction, implies that, under Gap-ETH, there is no $2^{o(n)}$-time algorithm for approximating $\mathrm{SIVP}_p$ up to some constant factor $\gamma \ge 1$ for any $1 \le p \le \infty$.
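
For context, the standard definitions (not specific to this paper): given a rank-$n$ lattice $\mathcal{L} \subset \mathbb{R}^d$ and a target vector $t$,

$$\mathrm{CVP}_p:\ \text{find } v \in \mathcal{L} \text{ minimizing } \|v - t\|_p, \qquad \mathrm{SIVP}_p:\ \text{find linearly independent } v_1, \ldots, v_n \in \mathcal{L} \text{ minimizing } \max_i \|v_i\|_p.$$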

Read more
Computational Complexity

A Parallel Repetition Theorem for the GHZ Game

We prove that parallel repetition of the (3-player) GHZ game reduces the value of the game polynomially fast to 0. That is, the value of the GHZ game repeated in parallel $t$ times is at most $t^{-\Omega(1)}$. Previously, only a bound of $\approx 1/\alpha(t)$, where $\alpha$ is the inverse Ackermann function, was known. The GHZ game was recently identified by Dinur, Harsha, Venkat and Yuen as a multi-player game where all existing techniques for proving strong bounds on the value of the parallel repetition of the game fail. Indeed, to prove our result we use a completely new proof technique. Dinur, Harsha, Venkat and Yuen speculated that progress on bounding the value of the parallel repetition of the GHZ game may lead to further progress on the general question of parallel repetition of multi-player games. They suggested that the strong correlations present in the GHZ question distribution represent the "hardest instance" of the multi-player parallel repetition problem. Another motivation for studying the parallel repetition of the GHZ game comes from the field of quantum information. The GHZ game, first introduced by Greenberger, Horne and Zeilinger, is a central game in the study of quantum entanglement and has been studied in numerous works. For example, it is used for testing quantum entanglement and for device-independent quantum cryptography. In such applications a game is typically repeated to reduce the probability of error, and hence bounds on the value of the parallel repetition of the game may be useful.
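
For reference, in the GHZ game the questions $(a, b, c)$ are drawn uniformly from the even-parity bit triples and the players win iff their answer bits satisfy $x \oplus y \oplus z = a \lor b \lor c$. The short brute-force below (a standard exercise, not from the paper) confirms that the classical value of a single game is 3/4:

    from itertools import product

    # Questions: uniform over bit triples with a ^ b ^ c == 0.
    QUESTIONS = [q for q in product((0, 1), repeat=3)
                 if q[0] ^ q[1] ^ q[2] == 0]

    def classical_value():
        """Best winning probability over deterministic strategies;
        each strategy maps a player's question bit to an answer bit."""
        strategies = list(product((0, 1), repeat=2))
        best = 0.0
        for sa, sb, sc in product(strategies, repeat=3):
            wins = sum((sa[a] ^ sb[b] ^ sc[c]) == (a | b | c)
                       for (a, b, c) in QUESTIONS)
            best = max(best, wins / len(QUESTIONS))
        return best

    print(classical_value())  # 0.75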

Read more
Computational Complexity

A Polynomial Degree Bound on Equations of Non-rigid Matrices and Small Linear Circuits

We show that there is a defining equation of degree at most $\mathrm{poly}(n)$ for the (Zariski closure of the) set of non-rigid matrices: that is, we show that for every large enough field $\mathbb{F}$, there is a non-zero $n^2$-variate polynomial $P \in \mathbb{F}[x_{1,1}, \ldots, x_{n,n}]$ of degree at most $\mathrm{poly}(n)$ such that every matrix $M$ which can be written as a sum of a matrix of rank at most $n/100$ and a matrix of sparsity at most $n^2/100$ satisfies $P(M) = 0$. This confirms a conjecture of Gesmundo, Hauenstein, Ikenmeyer and Landsberg [GHIL16] and improves the best known upper bound for this problem from $\exp(n^2)$ [KLPS14, GHIL16] down to $\mathrm{poly}(n)$. We also show a similar polynomial degree bound for the (Zariski closure of the) set of all matrices $M$ such that the linear transformation represented by $M$ can be computed by an algebraic circuit with at most $n^2/200$ edges (without any restriction on the depth). As far as we are aware, no such bound was known prior to this work when the depth of the circuits is unbounded. Our methods are elementary and short, relying on a polynomial map of Shpilka and Volkovich [SV15] to construct low-degree "universal" maps for non-rigid matrices and small linear circuits. Combining this construction with a simple dimension-counting argument, which shows that any such polynomial map has a low-degree annihilating polynomial, completes the proof. As a corollary, we show that any derandomization of the polynomial identity testing problem would imply new circuit lower bounds. A similar (but incomparable) theorem was proved by Kabanets and Impagliazzo [KI04].
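
For context, the underlying notion of rigidity (standard definition, not restated in the abstract): the rigidity of a matrix $M$ is

$$\mathcal{R}_M(r) = \min\{\, \|M - A\|_0 \;:\; \operatorname{rank}(A) \le r \,\},$$

where $\|\cdot\|_0$ counts non-zero entries; the non-rigid matrices above are exactly those with $\mathcal{R}_M(n/100) \le n^2/100$.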

Read more
Computational Complexity

A Simpler NP-Hardness Proof for Familial Graph Compression

This document presents a simpler proof of the NP-hardness of Familial Graph Compression.

Read more
Computational Complexity

A Strong XOR Lemma for Randomized Query Complexity

We give a strong direct sum theorem for computing $\mathrm{XOR} \circ g$. Specifically, we show that for every function $g$ and every $k \ge 2$, the randomized query complexity of computing the XOR of $k$ instances of $g$ satisfies $\overline{R}_{\varepsilon}(\mathrm{XOR} \circ g) = \Theta(k\, \overline{R}_{\varepsilon/k}(g))$. This matches the naive success-amplification upper bound and answers a conjecture of Blais and Brody (CCC 2019). As a consequence of our strong direct sum theorem, we give a total function $g$ for which $R(\mathrm{XOR} \circ g) = \Theta(k \log k \cdot R(g))$, answering an open question from Ben-David et al. (arXiv:2006.10957v1).
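
For intuition, the naive success-amplification upper bound referenced above follows from a union bound (standard reasoning, not quoted from the paper): solve each of the $k$ copies to error $\varepsilon/k$ and output the XOR of the answers, so that

$$\Pr[\text{any copy errs}] \le k \cdot \frac{\varepsilon}{k} = \varepsilon, \qquad \text{hence } \overline{R}_{\varepsilon}(\mathrm{XOR} \circ g) = O\!\left(k\, \overline{R}_{\varepsilon/k}(g)\right).$$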

Read more
