Featured Research

Computational Complexity

Locally testable codes via high-dimensional expanders

Locally testable codes (LTCs) are error-correcting codes that have a local tester which can distinguish valid codewords from words that are "far" from all codewords by probing a given word at only a small (sublinear, typically constant) number of locations. Such codes form the combinatorial backbone of PCPs. A major open problem is whether there exist LTCs with positive rate, constant relative distance, and testability with a constant number of queries. In this paper, we present a new approach to constructing such LTCs using the machinery of high-dimensional expanders. To this end, we consider the Tanner representation of a code, which is specified by a graph and a base code. Informally, our result states that if this graph is part of a high-dimensional expander, then the local testability of the code follows from the local testability of the base code. This work unifies and generalizes the known testability results for the Hadamard, Reed-Muller, and lifted codes on the Subspace Complex, all of which are proved via local self-correction. Unlike in those earlier results, however, a constant number of rounds of self-correction does not suffice here: the diameter of the underlying test graph of a high-dimensional expander can be logarithmically large rather than constant. We overcome this technical hurdle by performing iterative self-correction with logarithmically many rounds, tightly controlling the error in each iteration using properties of the high-dimensional expander. Given this result, the missing ingredient for constructing a constant-query LTC with positive rate and constant relative distance is a base code that interacts well with a constant-degree high-dimensional expander.
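
To make the testing model concrete, here is a minimal Python sketch of a local tester for a code in Tanner representation; the graph, base code, and word below are hypothetical toy data, not a construction from the paper.

    import random

    def tanner_local_test(word, neighborhoods, base_code):
        """One round of a local test for a Tanner code.

        The code is specified by a graph, given here as `neighborhoods`
        (the bit positions seen by each constraint node), and a `base_code`
        (the set of allowed local views). The tester reads only the bits in
        one randomly chosen neighborhood and accepts iff they form a base
        codeword; valid codewords always pass, and local testability asks
        that words far from the code fail with noticeable probability.
        """
        nbhd = random.choice(neighborhoods)
        local_view = tuple(word[i] for i in nbhd)
        return local_view in base_code

    # Toy instance: 3-bit repetition base code on overlapping neighborhoods.
    base_code = {(0, 0, 0), (1, 1, 1)}
    neighborhoods = [(0, 1, 2), (2, 3, 4), (4, 5, 0)]
    print(all(tanner_local_test([1] * 6, neighborhoods, base_code)
              for _ in range(20)))  # the all-ones word passes every test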

Computational Complexity

Log-rank and lifting for AND-functions

Let $f:\{0,1\}^n \to \{0,1\}$ be a Boolean function, and let $f^{\wedge}(x,y) = f(x \wedge y)$ denote the AND-function of $f$, where $x \wedge y$ denotes bit-wise AND. We study the deterministic communication complexity of $f^{\wedge}$ and show that, up to a $\log n$ factor, it is bounded by a polynomial in the logarithm of the real rank of the communication matrix of $f^{\wedge}$. This comes within a $\log n$ factor of establishing the log-rank conjecture for AND-functions with no assumptions on $f$. Our result stands in contrast with previous results on special cases of the log-rank conjecture, which needed significant restrictions on $f$ such as monotonicity or low $\mathbb{F}_2$-degree. Our techniques can also be used to prove (within a $\log n$ factor) a lifting theorem for AND-functions, stating that the deterministic communication complexity of $f^{\wedge}$ is polynomially related to the AND-decision tree complexity of $f$. The results rely on a new structural result regarding Boolean functions $f:\{0,1\}^n \to \{0,1\}$ with a sparse polynomial representation, which may be of independent interest. We show that if the polynomial computing $f$ has few monomials, then the set system of the monomials has a small hitting set, of size poly-logarithmic in its sparsity. We also establish extensions of this result to multilinear polynomials $f:\{0,1\}^n \to \mathbb{R}$ with a larger range.
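
As a concrete illustration of the objects involved, the following Python sketch builds the communication matrix of $f^{\wedge}$ for a small $f$ and computes its real rank; the choice $f = \mathrm{OR}_3$ is an arbitrary toy example, not one from the paper.

    import numpy as np
    from itertools import product

    def and_function_matrix(f, n):
        """Communication matrix M of f^AND, with M[x, y] = f(x AND y).

        Rows are indexed by Alice's input x and columns by Bob's input y,
        both ranging over {0,1}^n; the log-rank conjecture relates the
        deterministic communication complexity of f^AND to log rank(M).
        """
        points = list(product([0, 1], repeat=n))
        M = np.zeros((2 ** n, 2 ** n), dtype=int)
        for i, x in enumerate(points):
            for j, y in enumerate(points):
                M[i, j] = f(tuple(a & b for a, b in zip(x, y)))
        return M

    f = lambda z: int(any(z))        # toy example: f = OR on 3 bits
    M = and_function_matrix(f, 3)
    print(np.linalg.matrix_rank(M))  # real rank of the communication matrix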

Computational Complexity

Logical characterizations of computational complexity classes

Descriptive complexity theory is an important area in the study of computational complexity. It makes it possible to characterize combinatorial problems purely by logical means, without resorting to complicated algorithms. The first work in this direction was published in 1974 by the American mathematician Fagin. This article surveys the development of the methods of descriptive complexity theory.
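
For context, the 1974 result referred to here is Fagin's theorem, which characterizes NP by existential second-order logic; a standard statement (not quoted from the article) is

    \mathsf{NP} = \exists\mathrm{SO}.

For example, graph 3-colorability is defined by the existential second-order sentence

    \exists R\,\exists G\,\exists B\;\Big[\forall x\,\big(R(x)\lor G(x)\lor B(x)\big)\;\land\;\forall x\,\forall y\,\Big(E(x,y)\rightarrow\lnot\big((R(x)\land R(y))\lor(G(x)\land G(y))\lor(B(x)\land B(y))\big)\Big)\Big],

which asserts the existence of a partition of the vertices into three color sets with no monochromatic edge.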

Computational Complexity

Logical depth for reversible Turing machines with an application to the rate of decrease in logical depth for general Turing machines

The logical depth of a reversible Turing machine equals the shortest running time of a shortest program for it. This is applied to show that the result in L.F. Antunes, A. Souto, and P.M.B. Vitányi, "On the Rate of Decrease in Logical Depth," Theoret. Comput. Sci. 702 (2017), 60-64, remains valid notwithstanding the error noted in the corrigendum: P.M.B. Vitányi, Corrigendum to "On the rate of decrease in logical depth" [Theoret. Comput. Sci. 702 (2017) 60-64], Theoret. Comput. Sci. (this https URL).
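
For background, Bennett's logical depth of a string $x$ at significance level $b$, the notion the cited papers build on, is standardly defined as

    \mathrm{depth}_b(x) = \min\{\, T(p) : U(p) = x \text{ in } T(p) \text{ steps},\ \ell(p) \le K(x) + b \,\},

the least running time of any program $p$ for $x$ on the reference universal machine $U$ whose length $\ell(p)$ is within $b$ bits of the Kolmogorov complexity $K(x)$. This is standard background stated for orientation, not a definition quoted from the paper.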

Computational Complexity

Low-Degree Hardness of Random Optimization Problems

We consider the problem of finding nearly optimal solutions of optimization problems with random objective functions. Such problems arise widely in the theory of random graphs, theoretical computer science, and statistical physics. Two concrete problems we consider are (a) optimizing the Hamiltonian of a spherical or Ising $p$-spin glass model, and (b) finding a large independent set in a sparse Erdős-Rényi graph. Two families of algorithms are considered: (a) low-degree polynomials of the input, a general framework that captures methods such as approximate message passing and local algorithms on sparse graphs, among others; and (b) the Langevin dynamics algorithm, a canonical Monte Carlo analogue of the gradient descent algorithm (applicable only to the spherical $p$-spin glass Hamiltonian). We show that neither family of algorithms can produce nearly optimal solutions with high probability. Our proof uses the fact that both models are known to exhibit a variant of the overlap gap property (OGP) of near-optimal solutions. Specifically, for both models, every two solutions whose objectives are above a certain threshold are either close to or far from each other. The crux of our proof is the stability of both algorithms: a small perturbation of the input induces a small perturbation of the output. By an interpolation argument, such a stable algorithm cannot overcome the OGP barrier. The stability of the Langevin dynamics is an immediate consequence of the well-posedness of stochastic differential equations. The stability of low-degree polynomials is established using concepts from Gaussian and Boolean Fourier analysis, including noise sensitivity, hypercontractivity, and total influence.
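
As a toy illustration of the second algorithm family, the following Python sketch runs Langevin-style dynamics on a spherical $p$-spin Hamiltonian with $p = 2$; the step size, normalization, and the $p = 2$ choice are illustrative assumptions, not parameters from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def hamiltonian(sigma, J):
        """Spherical 2-spin Hamiltonian H(sigma) = sigma^T J sigma / n."""
        return sigma @ J @ sigma / len(sigma)

    def langevin_step(sigma, J, eta=0.01):
        """One Euler step of noisy gradient ascent on H, followed by
        projection back onto the sphere of radius sqrt(n)."""
        n = len(sigma)
        grad = (J + J.T) @ sigma / n
        sigma = sigma + eta * grad + np.sqrt(2 * eta) * rng.standard_normal(n)
        return sigma * np.sqrt(n) / np.linalg.norm(sigma)

    n = 200
    J = rng.standard_normal((n, n))              # random couplings
    sigma = rng.standard_normal(n)
    sigma *= np.sqrt(n) / np.linalg.norm(sigma)  # start on the sphere
    for _ in range(2000):
        sigma = langevin_step(sigma, J)
    print(hamiltonian(sigma, J))                 # achieved objective value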

Computational Complexity

Lower Bounding the AND-OR Tree via Symmetrization

We prove a simple, nearly tight lower bound on the approximate degree of the two-level AND-OR tree using symmetrization arguments. Specifically, we show that $\widetilde{\deg}(\mathsf{AND}_m \circ \mathsf{OR}_n) = \widetilde{\Omega}(\sqrt{mn})$. We prove this lower bound via reduction to the OR function through a series of symmetrization steps, in contrast to most other proofs, which involve formulating approximate degree as a linear program [BT13, She13, BDBGK18]. Our proof also demonstrates the power of a symmetrization technique involving Laurent polynomials (polynomials with negative exponents) that was previously introduced by Aaronson, Kothari, Kretschmer, and Thaler [AKKT19].
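
The elementary step underlying such arguments is Minsky-Papert symmetrization (standard background, not quoted from the paper): for any real polynomial $p$ on $\{0,1\}^n$, averaging over inputs of each Hamming weight,

    p^{\mathrm{sym}}(k) = \mathop{\mathbb{E}}_{|x| = k}\,[\,p(x)\,], \qquad k = 0, 1, \dots, n,

yields a univariate polynomial with $\deg p^{\mathrm{sym}} \le \deg p$, so degree lower bounds on the univariate object transfer back to the multivariate approximating polynomial.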

Computational Complexity

Lower Bounds Against Sparse Symmetric Functions of ACC Circuits: Expanding the Reach of #SAT Algorithms

We continue the program of proving circuit lower bounds via circuit-satisfiability algorithms. So far, this program has yielded several concrete results, proving that functions in Quasi-NP $= \mathsf{NTIME}[n^{(\log n)^{O(1)}}]$ and NEXP do not have small circuits from various circuit classes $C$, by showing that $C$ admits non-trivial satisfiability and/or #SAT algorithms which beat exhaustive search by a minor amount. In this paper, we present a new strong lower-bound consequence of non-trivial #SAT algorithms for a circuit class $C$. Say that a symmetric Boolean function $f(x_1, \dots, x_n)$ is sparse if it outputs 1 on $O(1)$ values of $\sum_i x_i$. We show that for every sparse $f$ and all "typical" $C$, faster #SAT algorithms for $C$-circuits actually imply lower bounds against the circuit class $f \circ C$, which may be stronger than $C$ itself. In particular: #SAT algorithms for $n^k$-size $C$-circuits running in $2^n/n^k$ time (for all $k$) imply that NEXP does not have $f \circ C$-circuits of polynomial size, and #SAT algorithms for $2^{n^\epsilon}$-size $C$-circuits running in $2^{n - n^\epsilon}$ time (for some $\epsilon > 0$) imply that Quasi-NP does not have $f \circ C$-circuits of polynomial size. Applying #SAT algorithms from the literature, one immediate corollary of our results is that Quasi-NP does not have $\mathrm{EMAJ} \circ \mathrm{ACC}^0 \circ \mathrm{THR}$ circuits of polynomial size, where EMAJ is the "exact majority" function, improving previous lower bounds against $\mathrm{ACC}^0$ [Williams JACM'14] and $\mathrm{ACC}^0 \circ \mathrm{THR}$ [Williams STOC'14], [Murray-Williams STOC'18]. This is the first nontrivial lower bound against such a circuit class.
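
For concreteness, here is the "exact majority" function from the corollary in Python, under its common definition (1 exactly when half the bits are set); it is sparse in the abstract's sense because it outputs 1 on only one value of $\sum_i x_i$.

    def exact_majority(bits):
        """EMAJ: 1 exactly when the Hamming weight is half the input
        length. As a symmetric function it is 1 on O(1) weight values,
        hence 'sparse' in the sense of the abstract."""
        return int(2 * sum(bits) == len(bits))

    print(exact_majority([1, 0, 1, 0]))  # 1: weight 2 of 4
    print(exact_majority([1, 1, 1, 0]))  # 0: weight 3 of 4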

Computational Complexity

Lower Bounds and Hardness Magnification for Sublinear-Time Shrinking Cellular Automata

The minimum circuit size problem (MCSP) is a string compression problem with a parameter $s$ in which, given the truth table of a Boolean function over inputs of length $n$, one must answer whether it can be computed by a Boolean circuit of size at most $s(n) \ge n$. Recently, McKay, Murray, and Williams (STOC, 2019) proved a hardness magnification result for MCSP involving (one-pass) streaming algorithms: for any reasonable $s$, if there is no $\mathrm{poly}(s(n))$-space streaming algorithm with $\mathrm{poly}(s(n))$ update time for MCSP[$s$], then $\mathsf{P} \neq \mathsf{NP}$. We prove an analogous result for the (provably) strictly less capable model of shrinking cellular automata (SCAs), which are cellular automata whose cells can spontaneously delete themselves. We show every language accepted by an SCA can also be accepted by a streaming algorithm of similar complexity, and we identify two different aspects in which SCAs are more restricted than streaming algorithms. We also show there is a language which cannot be accepted by any SCA in $o(n/\log n)$ time, even though it admits an $O(\log n)$-space streaming algorithm with $O(\log n)$ update time.
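
A minimal Python sketch of the machine model may help; the update rule below is a hypothetical toy, and real SCAs as studied here are language acceptors with input and acceptance conventions this sketch omits.

    def sca_step(cells, rule):
        """One synchronous step of a shrinking cellular automaton: every
        cell reads its (left, self, right) neighborhood and either takes a
        new state or returns None, in which case it deletes itself and its
        former neighbors become adjacent. Boundaries see the sentinel '#'.
        """
        padded = ['#'] + cells + ['#']
        nxt = [rule(padded[i - 1], padded[i], padded[i + 1])
               for i in range(1, len(padded) - 1)]
        return [s for s in nxt if s is not None]

    # Toy rule: an 'a' whose right neighbor is 'b' deletes itself.
    rule = lambda l, c, r: None if (c == 'a' and r == 'b') else c
    cells = list('aabab')
    while True:
        nxt = sca_step(cells, rule)
        if nxt == cells:
            break
        cells = nxt
    print(''.join(cells))  # prints 'bb': all such 'a's have vanished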

Computational Complexity

Lower Bounds for Semi-adaptive Data Structures via Corruption

In a dynamic data structure problem, we wish to maintain an encoding of some data in memory in such a way that we may efficiently carry out a sequence of queries and updates to the data. A long-standing open problem in this area is to prove an unconditional polynomial lower bound on the trade-off between the update time and the query time of an adaptive dynamic data structure computing some explicit function. Ko and Weinstein provided such a lower bound for a restricted class of semi-adaptive data structures which compute the Disjointness function. There, the data are subsets $x_1, \dots, x_k$ and $y$ of $\{1, \dots, n\}$, the updates can modify $y$ (by inserting and removing elements), and the queries are an index $i \in \{1, \dots, k\}$ (query $i$ should answer whether $x_i$ and $y$ are disjoint, i.e., it should compute the Disjointness function applied to $(x_i, y)$). Semi-adaptiveness places a restriction on how the data structure can be accessed in order to answer a query. We generalize the lower bound of Ko and Weinstein to work not just for Disjointness, but for any function having high complexity under the smooth corruption bound.
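
The problem setup can be stated as a tiny Python interface; this naive baseline is only meant to make the update/query trade-off concrete and is not one of the data structures the lower bound concerns.

    class DisjointnessStructure:
        """Maintains static sets x_1, ..., x_k and a dynamic set y over
        {1, ..., n}; query(i) computes Disjointness(x_i, y). This naive
        version has O(1)-time updates but O(|x_i|)-time queries."""

        def __init__(self, xs):
            self.xs = [frozenset(x) for x in xs]
            self.y = set()

        def insert(self, e):   # update: add e to y
            self.y.add(e)

        def remove(self, e):   # update: delete e from y
            self.y.discard(e)

        def query(self, i):    # is x_i disjoint from y?
            return self.y.isdisjoint(self.xs[i])

    ds = DisjointnessStructure([{1, 2}, {3, 4}])
    ds.insert(3)
    print(ds.query(0), ds.query(1))  # True False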

Computational Complexity

Lower Bounds for XOR of Forrelations

The Forrelation problem, introduced by Aaronson [A10] and Aaronson and Ambainis [AA15], is a well-studied problem in the context of separating quantum and classical models. Variants of this problem were used to give exponential separations between quantum and classical query complexity [A10, AA15]; quantum query complexity and bounded-depth circuits [RT19]; and quantum and classical communication complexity [GRT19]. In all these separations, the lower bound for the classical model only holds when the advantage of the protocol (over a random guess) is more than $\approx 1/\sqrt{N}$, that is, when the success probability is larger than $\approx 1/2 + 1/\sqrt{N}$. To achieve separations when the classical protocol has smaller advantage, we study in this work the XOR of $k$ independent copies of the Forrelation function (where $k \ll N$). We prove a very general result showing that any family of Boolean functions that is closed under restrictions, and whose Fourier mass at level $2k$ is bounded by $\alpha^k$, cannot compute the XOR of $k$ independent copies of the Forrelation function with advantage better than $O(\alpha^k / N^{k/2})$. This strengthens a result of [CHLT19], which gave a similar result for $k = 1$ using the technique of [RT19]. As an application of our result, we give the first example of a partial Boolean function that can be computed by a simultaneous-message quantum protocol of cost $\mathrm{polylog}(N)$ (when the players share $\mathrm{polylog}(N)$ EPR pairs), while any classical interactive randomized protocol of cost at most $\tilde{o}(N^{1/4})$ has quasipolynomially small advantage over a random guess. We also give the first example of a partial Boolean function that has a quantum query algorithm of cost $\mathrm{polylog}(N)$, and such that any constant-depth circuit of quasipolynomial size has quasipolynomially small advantage over a random guess.
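
To fix notation, here is a short Python sketch of the quantity underlying the Forrelation problem, using the standard normalization from the literature; the toy choice of $g$ as the sign of the Fourier transform of $f$ is only to exhibit a highly forrelated pair.

    import numpy as np
    from itertools import product

    def forrelation(f, g, n):
        """Phi_{f,g} = 2^{-3n/2} * sum_{x,y} f(x) * (-1)^{<x,y>} * g(y)
        for f, g : {0,1}^n -> {-1, 1}; it measures how correlated g is
        with the Fourier transform of f."""
        pts = list(product([0, 1], repeat=n))
        total = sum(f(x) * (-1) ** sum(a * b for a, b in zip(x, y)) * g(y)
                    for x in pts for y in pts)
        return total / 2 ** (1.5 * n)

    n = 3
    pts = list(product([0, 1], repeat=n))
    rng = np.random.default_rng(1)
    f_tab = {x: int(rng.choice([-1, 1])) for x in pts}
    f = lambda x: f_tab[x]
    # Align g with the Fourier transform of f to get a forrelated pair.
    hat = {y: sum(f(x) * (-1) ** sum(a * b for a, b in zip(x, y)) for x in pts)
           for y in pts}
    g = lambda y: 1 if hat[y] >= 0 else -1
    print(forrelation(f, g, n))  # noticeably larger than for independent f, g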

