Featured Research

Computational Complexity

Interactive Verifiable Polynomial Evaluation

Cloud computing platforms have created the possibility for computationally limited users to delegate demanding tasks to strong but untrusted servers. Verifiable computing algorithms help build trust in such interactions by enabling the server to provide a proof of correctness of its results which the user can check very efficiently. In this paper, we present a doubly-efficient interactive algorithm for verifiable polynomial evaluation. Unlike the mainstream literature on verifiable computing, the soundness of our algorithm is information-theoretic and cannot be broken by a computationally unbounded server. By relying on basic properties of error-correcting codes, our algorithm forces a dishonest server to provide false results to problems which become progressively easier to verify. After roughly log d rounds, the user can verify the response of the server against a look-up table that has been pre-computed during an initialization phase. For a polynomial of degree d, we achieve a user complexity of O(d^ε), a server complexity of O(d^{1+ε}), a round complexity of O(log d), and an initialization complexity of O(d^{1+ε}).
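
For intuition about the round structure, the following toy Python sketch (an illustration only, not the paper's algorithm) checks a claimed evaluation p(a) by repeatedly halving the degree with a random challenge; the final constant check stands in for the pre-computed look-up table, and the server's messages are simulated inline.

# Toy "split and fold" check of a claimed polynomial evaluation p(a) = v.
# Each round reduces a claim about a degree-d polynomial to a claim about a
# polynomial of roughly half the degree, so about log(d) rounds suffice.
import random

P = 2**31 - 1  # a prime modulus; all arithmetic is over GF(P)

def evaluate(coeffs, x):
    """Horner evaluation of the polynomial with coefficient list coeffs."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def split(coeffs):
    """p(x) = p_e(x^2) + x * p_o(x^2): return the even/odd coefficient halves."""
    return coeffs[0::2], coeffs[1::2]

def fold(coeffs, r):
    """q(x) = p_e(x) + r * p_o(x); deg(q) is roughly half of deg(p)."""
    even, odd = split(coeffs)
    odd = odd + [0] * (len(even) - len(odd))
    return [(e + r * o) % P for e, o in zip(even, odd)]

def verify(coeffs, a, claim):
    """User-side loop; the server's work (evaluate/fold) is simulated inline."""
    point = a
    while len(coeffs) > 1:
        even, odd = split(coeffs)
        v_e = evaluate(even, point * point % P)   # server's claimed p_e(a^2)
        v_o = evaluate(odd, point * point % P)    # server's claimed p_o(a^2)
        if (v_e + point * v_o) % P != claim:      # cheap consistency check
            return False
        r = random.randrange(P)                   # user's random challenge
        coeffs, point = fold(coeffs, r), point * point % P
        claim = (v_e + r * v_o) % P
    return claim == coeffs[0]                     # look-up-table stand-in

coeffs = [random.randrange(P) for _ in range(1025)]   # a degree-1024 polynomial
a = random.randrange(P)
print(verify(coeffs, a, evaluate(coeffs, a)))          # True for an honest claim

In this sketch, a server that lies about the claim must keep the lie consistent through every random fold, which fails with high probability over the challenges; the actual protocol achieves this guarantee with the user, server, and round complexities stated above.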

Computational Complexity

Interesting Open Problem Related to Complexity of Computing the Fourier Transform and Group Theory

The Fourier Transform is one of the most important linear transformations used in science and engineering. Cooley and Tukey's Fast Fourier Transform (FFT) from 1965 is a method for computing this transformation in time O(n log n). From a lower-bound perspective, relatively little is known. Ailon showed in 2013 an Ω(n log n) bound for computing the normalized Fourier Transform, assuming only unitary operations on pairs of coordinates are allowed. The goal of this document is to describe a natural open problem that arises from this work, which is related to group theory, and in particular to representation theory.
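
For readers less familiar with it, here is a minimal radix-2 Cooley-Tukey recursion in Python (a generic textbook sketch, not code from the paper; the input length is assumed to be a power of two), illustrating the O(n log n) divide-and-conquer structure.

# Minimal radix-2 Cooley-Tukey FFT (input length must be a power of two).
import cmath

def fft(a):
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2])          # FFT of even-indexed samples
    odd = fft(a[1::2])           # FFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)   # twiddle factor
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

print(fft([1.0, 2.0, 3.0, 4.0]))  # approximately [10, -2+2j, -2, -2-2j]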

Computational Complexity

Intermediate problems in modular circuits satisfiability

In arXiv:1710.08163, a generalization of Boolean circuits to arbitrary finite algebras was introduced and applied to sketch the P versus NP-complete borderline for circuit satisfiability over algebras from congruence modular varieties. However, the problem for algebras that are nilpotent (which had not been shown to be NP-hard) but not supernilpotent (satisfiability over supernilpotent algebras had been shown to be polynomial time) remained open. In this paper we provide a broad class of examples lying in this grey area and show that, under the Exponential Time Hypothesis and the Strong Exponential Size Hypothesis (saying that Boolean circuits need exponentially many modular counting gates to produce Boolean conjunctions of any arity), satisfiability over these algebras has intermediate complexity, between Ω(2^{c·log^{h−1} n}) and O(2^{c·log^h n}), where h measures how much a nilpotent algebra fails to be supernilpotent. We also sketch how these examples could be used as paradigms to fill the nilpotent versus supernilpotent gap in general. Our examples are striking in view of the natural strong connections between circuit satisfiability and the Constraint Satisfaction Problem, for which the dichotomy was shown by Bulatov and Zhuk.

Computational Complexity

Is it Easier to Prove Theorems that are Guaranteed to be True?

Consider the following two fundamental open problems in complexity theory: (a) Does a hard-on-average language in NP imply the existence of one-way functions? (b) Does a hard-on-average language in NP imply a hard-on-average problem in TFNP (i.e., the class of total NP search problems)? Our main result is that the answer to (at least) one of these questions is yes. Both one-way functions and problems in TFNP can be interpreted as promise-true distributional NP search problems---namely, distributional search problems where the sampler only samples true statements. As a direct corollary of the above result, we thus get that the existence of a hard-on-average distributional NP search problem implies a hard-on-average promise-true distributional NP search problem. In other words, "It is no easier to find witnesses (a.k.a. proofs) for efficiently-sampled statements (theorems) that are guaranteed to be true." This result follows from a more general study of interactive puzzles---a generalization of average-case hardness in NP---and in particular, a novel round-collapse theorem for computationally-sound protocols, analogous to Babai-Moran's celebrated round-collapse theorem for information-theoretically sound protocols. As another consequence of this treatment, we show that the existence of O(1)-round public-coin non-trivial arguments (i.e., argument systems that are not proofs) implies the existence of a hard-on-average problem in NP/poly.

Computational Complexity

Is the space complexity of planted clique recovery the same as that of detection?

We study the planted clique problem, in which a clique of size k is planted in an Erdős-Rényi graph G(n, 1/2) and one is interested in either detecting or recovering this planted clique. This problem is interesting because it is widely believed to exhibit a statistical-computational gap at clique size k = √n, and it has emerged as the prototypical problem with such a gap from which average-case hardness of other statistical problems can be deduced. It also displays a tight computational connection between the detection and recovery variants, unlike other problems of a similar nature. This wide investigation into the computational complexity of the planted clique problem has, however, mostly focused on its time complexity. In this work, we ask: do the statistical-computational phenomena that make planted clique an interesting problem also hold when we use 'space efficiency' as our notion of computational efficiency? It is relatively easy to show that a positive answer to this question depends on the existence of an O(log n)-space algorithm that can recover planted cliques of size k = Ω(√n). Our main result comes very close to designing such an algorithm. We show that for k = Ω(√n), the recovery problem can be solved in O((log* n − log*(k/√n)) · log n) bits of space. 1. If k = ω(√n · log^{(l)} n) for any constant integer l > 0, the space usage is O(log n) bits. 2. If k = Θ(√n), the space usage is O(log* n · log n) bits. Our result suggests that there does exist an O(log n)-space algorithm to recover cliques of size k = Ω(√n), since we come very close to achieving such parameters. This provides evidence that the statistical-computational phenomena that (conjecturally) hold for planted clique time complexity also (conjecturally) hold for space complexity.
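
To make the recovery task concrete, here is a Python sketch of the classical degree-counting heuristic (Kučera-style), which recovers the planted clique with high probability only when k is on the order of √(n log n) or larger; it is emphatically not the paper's space-efficient algorithm, since it stores the entire graph.

# Toy planted-clique recovery by degree counting.
# Succeeds w.h.p. when k is well above sqrt(n log n); purely a baseline illustration.
import random

def planted_clique_graph(n, k):
    """Sample G(n, 1/2) and plant a clique on k random vertices."""
    clique = set(random.sample(range(n), k))
    adj = [[False] * n for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            edge = random.random() < 0.5 or (u in clique and v in clique)
            adj[u][v] = adj[v][u] = edge
    return adj, clique

def recover_by_degree(adj, k):
    """Return the k highest-degree vertices as the clique estimate."""
    degrees = [(sum(row), v) for v, row in enumerate(adj)]
    return {v for _, v in sorted(degrees, reverse=True)[:k]}

n, k = 2000, 500   # k well above sqrt(n log n), so degree counting suffices
adj, clique = planted_clique_graph(n, k)
print(recover_by_degree(adj, k) == clique)   # True with high probability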

Computational Complexity

Isomorphism problems for tensors, groups, and cubic forms: completeness and reductions

In this paper we consider the problems of testing isomorphism of tensors, p-groups, cubic forms, algebras, and more, which arise from a variety of areas, including machine learning, group theory, and cryptography. These problems can all be cast as orbit problems on multi-way arrays under different group actions. Our first two main results are: 1. All the aforementioned isomorphism problems are equivalent under polynomial-time reductions, in conjunction with the recent results of Futorny-Grochow-Sergeichuk (Lin. Alg. Appl., 2019). 2. Isomorphism of d-tensors reduces to isomorphism of 3-tensors, for any d ≥ 3. Our results suggest that these isomorphism problems form a rich and robust equivalence class, which we call Tensor Isomorphism-complete, or TI-complete. We then leverage the techniques used in the above results to prove two first-of-their-kind results for Group Isomorphism (GpI): 3. We give a reduction from GpI for p-groups of exponent p and small class (c < p) to GpI for p-groups of exponent p and class 2. The latter are widely believed to be the hardest cases of GpI, but as far as we know, this is the first reduction from any more general class of groups to this class. 4. We give a search-to-decision reduction for isomorphism of p-groups of exponent p and class 2 in time |G|^{O(log log |G|)}. While search-to-decision reductions for Graph Isomorphism (GI) have been known for more than 40 years, as far as we know this is the first non-trivial search-to-decision reduction in the context of GpI. Our main technique for (1), (3), and (4) is a linear-algebraic analogue of the classical graph coloring gadget, which was used to obtain the search-to-decision reduction for GI. This gadget construction may be of independent interest and utility. The technique for (2) gives a method for encoding an arbitrary tensor into an algebra.
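
For concreteness, the orbit problem underlying Tensor Isomorphism can be stated as follows (the standard definition, not a quotation from the paper): two 3-tensors T, T' over a field F are isomorphic precisely when one lies in the orbit of the other under the natural change-of-basis action,

\[
  T'_{ijk} \;=\; \sum_{i',\,j',\,k'} A_{i i'}\, B_{j j'}\, C_{k k'}\, T_{i' j' k'}
  \quad \text{for some } (A, B, C) \in \mathrm{GL}_{n_1}(\mathbb{F}) \times \mathrm{GL}_{n_2}(\mathbb{F}) \times \mathrm{GL}_{n_3}(\mathbb{F}).
\]

For instance, p-groups of exponent p and class 2 (for odd p) correspond to alternating bilinear maps, i.e., 3-way arrays, via the Baer correspondence, which is how Group Isomorphism enters this picture.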

Computational Complexity

Iterated Type Partitions

This paper deals with the complexity of some natural graph problems when parameterized by measures that are restrictions of clique-width, such as modular-width and neighborhood diversity. The main contribution of this paper is to introduce a novel parameter, called iterated type partition, that can be computed in polynomial time and nicely sits between modular-width and neighborhood diversity. We prove that the Equitable Coloring problem is W[1]-hard when parameterized by the iterated type partition. This result extends to modular-width, answering an open question about the existence of FPT algorithms for Equitable Coloring parameterized by modular-width. Moreover, we show that the Equitable Coloring problem is instead FPT when parameterized by neighborhood diversity. Furthermore, we present simple and fast FPT algorithms parameterized by iterated type partition that provide optimal solutions for several graph problems; in particular, this paper presents algorithms for the Dominating Set, the Vertex Coloring, and the Vertex Cover problems. While the above problems are already known to be FPT with respect to modular-width, the novel algorithms are both simpler and more efficient: for the Dominating Set and Vertex Cover problems, our algorithms output an optimal set in time O(2^t + poly(n)), while for the Vertex Coloring problem, our algorithm outputs an optimal solution in time O(t^{2.5t+o(t)} · log n + poly(n)), where n and t are the size and the iterated type partition of the input graph, respectively.
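
As background, here is a Python sketch of the base step behind such parameters: computing the type partition (the neighborhood-diversity classes) of a graph, where vertices u and v share a type iff N(u) \ {v} = N(v) \ {u}. Repeatedly contracting each class to a single vertex and re-partitioning is the idea behind the iterated type partition parameter; only the base step is sketched here, under the standard definition of neighborhood diversity, and the exact iterated definition is as given in the paper.

def type_partition(adj):
    """adj: dict vertex -> set of neighbors. Returns the list of type classes."""
    vertices = list(adj)
    parent = {v: v for v in vertices}

    def find(v):                       # union-find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for i, u in enumerate(vertices):
        for v in vertices[i + 1:]:
            if adj[u] - {v} == adj[v] - {u}:   # u and v have the same type
                parent[find(u)] = find(v)

    classes = {}
    for v in vertices:
        classes.setdefault(find(v), []).append(v)
    return list(classes.values())

# Example: the complete bipartite graph K_{2,3} has two type classes.
adj = {1: {3, 4, 5}, 2: {3, 4, 5}, 3: {1, 2}, 4: {1, 2}, 5: {1, 2}}
print(type_partition(adj))   # [[1, 2], [3, 4, 5]] (up to ordering)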

Computational Complexity

KRW Composition Theorems via Lifting

One of the major open problems in complexity theory is proving super-logarithmic lower bounds on the depth of circuits (i.e., P ⊈ NC^1). Karchmer, Raz, and Wigderson (Computational Complexity 5(3/4), 1995) suggested approaching this problem by proving that depth complexity behaves "as expected" with respect to the composition of functions f ⋄ g. They showed that the validity of this conjecture would imply that P ⊈ NC^1. Several works have made progress toward resolving this conjecture by proving special cases. In particular, these works proved the KRW conjecture for every outer function f, but only for a few inner functions g. Thus, it is an important challenge to prove the KRW conjecture for a wider range of inner functions. In this work, we significantly extend the range of inner functions that can be handled. First, we consider the monotone version of the KRW conjecture. We prove it for every monotone inner function g whose depth complexity can be lower-bounded via a query-to-communication lifting theorem. This allows us to handle several new and well-studied functions, such as the s-t-connectivity, clique, and generation functions. In order to carry this progress back to the non-monotone setting, we introduce a new notion of semi-monotone composition, which combines the non-monotone complexity of the outer function f with the monotone complexity of the inner function g. In this setting, we prove the KRW conjecture for a similar selection of inner functions g, but only for a specific choice of the outer function f.
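
For reference, the composition and the (informal) KRW conjecture can be written as follows, where D(·) denotes minimal formula depth; this is the standard formulation rather than a quotation from the paper:

\[
  (f \diamond g)(x_1, \dots, x_m) \;=\; f\bigl(g(x_1), \dots, g(x_m)\bigr), \qquad x_i \in \{0,1\}^n,
  \qquad \mathsf{D}(f \diamond g) \;\approx\; \mathsf{D}(f) + \mathsf{D}(g).
\]

The consequential direction is the lower bound D(f ⋄ g) ≳ D(f) + D(g): applying it iteratively to a suitable hard outer function yields super-logarithmic depth lower bounds for an explicit function in P, and hence P ⊈ NC^1.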

Computational Complexity

Kronecker powers of tensors and Strassen's laser method

We answer a question, posed implicitly by Coppersmith-Winograd and Buergisser et al. and explicitly by Blaeser, showing that the border rank of the Kronecker square of the little Coppersmith-Winograd tensor is the square of the border rank of the tensor for all q > 2, a negative result for complexity theory. We further show that when q > 4, the analogous result holds for the Kronecker cube. In the positive direction, we enlarge the list of explicit tensors potentially useful for the laser method. We observe that a well-known tensor, the 3x3 determinant polynomial regarded as a tensor, could potentially be used in the laser method to prove that the exponent of matrix multiplication is two. Because of this, we prove new upper bounds on its Waring rank and rank (both 18), and on its border rank and Waring border rank (both 17), which, in addition to being promising for the laser method, are of interest in their own right. We discuss "skew" cousins of the little Coppersmith-Winograd tensor and indicate why they may be useful for the laser method. We establish general results regarding border ranks of Kronecker powers of tensors, and make a detailed study of Kronecker squares of tensors of dimensions (3,3,3). In particular, we show numerically that for generic tensors in this space, the rank and border rank are strictly sub-multiplicative.
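
For reference, the little Coppersmith-Winograd tensor T_{cw,q} referred to above is, in its usual presentation in the literature (not quoted from the paper),

\[
  T_{cw,q} \;=\; \sum_{i=1}^{q} \bigl( e_0 \otimes e_i \otimes e_i \;+\; e_i \otimes e_0 \otimes e_i \;+\; e_i \otimes e_i \otimes e_0 \bigr)
  \;\in\; \mathbb{C}^{q+1} \otimes \mathbb{C}^{q+1} \otimes \mathbb{C}^{q+1},
\]

whose border rank is q + 2; the negative result above says that for q > 2 the Kronecker square does not drop below border rank (q + 2)^2, ruling out one hoped-for route to faster matrix multiplication via this tensor.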

Computational Complexity

LaserTank is NP-complete

We show that the classical game LaserTank is NP-complete, even when the tank movement is restricted to a single column and the only blocks appearing on the board are mirrors and solid blocks. We prove this by reducing 3-SAT instances to LaserTank puzzles.

