Featured Research

Computational Complexity

Barrington Plays Cards: The Complexity of Card-based Protocols

In this paper we study the computational complexity of functions that have efficient card-based protocols. Card-based protocols were proposed by den Boer [EUROCRYPT '89] as a means for secure two-party computation. Our contribution is two-fold: We classify a large class of protocols with respect to the computational complexity of functions they compute, and we propose other encodings of inputs which require fewer cards than the usual 2-card representation.
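The "usual 2-card representation" mentioned above is easy to make concrete. As a minimal sketch (the helper names are illustrative, not from the paper): a bit is committed as an ordered pair of face-down cards, with ♣♥ encoding 0 and ♥♣ encoding 1, and negation is simply swapping the pair without revealing it.

```python
# Minimal sketch of the standard 2-card encoding used in card-based
# protocols (the paper proposes encodings that need *fewer* cards).
CLUB, HEART = "♣", "♥"

def encode_bit(b):
    """Commit a bit as an ordered pair of cards: (♣,♥) = 0, (♥,♣) = 1."""
    return (CLUB, HEART) if b == 0 else (HEART, CLUB)

def decode_pair(pair):
    """Reveal a committed pair and read off the bit."""
    return 0 if pair == (CLUB, HEART) else 1

def negate(pair):
    """NOT without revealing the bit: swap the two cards."""
    return (pair[1], pair[0])
```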

Between the deterministic and non-deterministic query complexity

We consider problems that can be solved by asking certain queries. The deterministic query complexity D(P) of a problem P is the smallest number of queries that need to be asked in order to find the solution (in the worst case), while the non-deterministic query complexity D_0(P) is the smallest number of queries that need to be asked, when the solution is known, to prove that it is indeed the solution (in the worst case). Equivalently, D(P) is the largest number of queries needed to find the solution when an Adversary answers the queries, while D_0(P) is the largest number of queries needed to find the solution when an Adversary chooses the input. We define a series of quantities between these two values: D_k(P) is the largest number of queries needed to find the solution when an Adversary both chooses the input and answers the queries, but may change the input at most k times. We give bounds on D_k(P) for various problems P.
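For Boolean-valued problems on a fixed number of input bits, the adversary view and the decision-tree view coincide, and D can be computed directly by brute force. A minimal sketch (exponential in the number of bits, for illustration only):

```python
from itertools import product

def D(f, n, known=None):
    """Deterministic query (decision-tree) complexity of f: {0,1}^n -> {0,1}.
    Brute force: 0 if f is constant on all inputs consistent with the
    answers so far, else 1 + the best worst case over the next query."""
    known = known or {}
    vals = {f(x) for x in product([0, 1], repeat=n)
            if all(x[i] == v for i, v in known.items())}
    if len(vals) <= 1:
        return 0
    return min(
        1 + max(D(f, n, {**known, i: a}) for a in (0, 1))
        for i in range(n) if i not in known
    )
```

For example, projection to the first bit needs one query, while OR of two bits is evasive and needs both.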

Block Rigidity: Strong Multiplayer Parallel Repetition implies Super-Linear Lower Bounds for Turing Machines

We prove that a sufficiently strong parallel repetition theorem for a special case of multiplayer (multiprover) games implies super-linear lower bounds for multi-tape Turing machines with advice. To the best of our knowledge, this is the first connection between parallel repetition and lower bounds for time complexity, and the first major potential implication of a parallel repetition theorem with more than two players. Along the way to proving this result, we define and initiate a study of block rigidity, a weakening of Valiant's notion of rigidity. While rigidity was originally defined for matrices, or, equivalently, for (multi-output) linear functions, we extend and study both rigidity and block rigidity for general (multi-output) functions. Using techniques of Paul, Pippenger, Szemerédi and Trotter, we show that a block-rigid function cannot be computed by multi-tape Turing machines that run in linear (or slightly super-linear) time, even in the non-uniform setting, where the machine gets an arbitrary advice tape. We then describe a class of multiplayer games such that a sufficiently strong parallel repetition theorem for that class of games implies an explicit block-rigid function. The games in that class have the following property, which may be of independent interest: for every random string for the verifier (which, in particular, determines the vector of queries to the players), there is a unique correct answer for each of the players, and the verifier accepts if and only if all answers are correct. We refer to such games as independent games. The theorem that we need is that parallel repetition reduces the value of games in this class from v to v^{Ω(n)}, where n is the number of repetitions. As another application of block rigidity, we show conditional size-depth tradeoffs for boolean circuits, where the gates compute arbitrary functions over large sets.

Block-Structured Integer and Linear Programming in Strongly Polynomial and Near Linear Time

We consider integer and linear programming problems for which the linear constraints exhibit a (recursive) block structure: the problem decomposes into independent and efficiently solvable sub-problems if a small number of constraints is deleted. Prominent examples are n-fold integer programming problems and their generalizations, which have received considerable attention in the recent literature. The previously known algorithms for these problems are based on the augmentation framework, a tailored integer programming variant of local search. In this paper we propose a different approach. Our algorithm relies on parametric search and a new proximity bound. We show that block-structured linear programming can be solved efficiently via an adaptation of a parametric search framework by Norton, Plotkin, and Tardos in combination with Megiddo's multidimensional search technique. This also forms a subroutine of our algorithm for the integer programming case, by solving a strong relaxation of it. Then we show that, for any given optimal vertex solution of this relaxation, there is an optimal integer solution within ℓ_1-distance independent of the dimension of the problem. This in turn allows us to find an optimal integer solution efficiently. We apply our techniques to integer and linear programming with n-fold structure or bounded dual treedepth, two benchmark problems in this field. We obtain the first algorithms for these cases that are both near-linear in the dimension of the problem and strongly polynomial. Moreover, unlike the augmentation algorithms, our approach is highly parallelizable.
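The block structure in question can be sketched concretely. Assuming the standard n-fold form (a small matrix A repeated horizontally as the few linking rows, above a block-diagonal arrangement of a small matrix B; the function name is illustrative), deleting the top rows leaves independent subproblems, one per B-block:

```python
def n_fold_matrix(A, B, n):
    """Constraint matrix of an n-fold IP: n copies of A side by side
    (the linking constraints whose deletion decouples the problem),
    above a block-diagonal arrangement of n copies of B."""
    c = len(B[0])                       # columns per block (A must match)
    top = [row * n for row in A]        # [A A ... A]
    diag = [
        [0] * (c * i) + row + [0] * (c * (n - 1 - i))
        for i in range(n) for row in B
    ]
    return top + diag
```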

Chaitin's Omega and an Algorithmic Phase Transition

We consider the statistical mechanical ensemble of bit string histories that are computed by a universal Turing machine. The role of the energy is played by the program size. We show that this ensemble has a first-order phase transition at a critical temperature, at which the partition function equals Chaitin's halting probability Ω. This phase transition has curious properties: the free energy is continuous near the critical temperature, but almost jumps: it converges more slowly to its finite critical value than any computable function. At the critical temperature, the average size of the bit strings diverges. We define a non-universal Turing machine that approximates this behavior of the partition function in a computable way by a super-logarithmic singularity, and discuss its thermodynamic properties. We also discuss analogies and differences between Chaitin's Omega and the partition functions of a quantum mechanical particle, a spin model, random surfaces, and quantum Turing machines. For universal Turing machines, we conjecture that the ensemble of bit string histories at the critical temperature has a continuum formulation in terms of a string theory.
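A hedged toy of the ensemble described above (not the paper's construction): with program size as energy, each halting program p contributes Boltzmann weight 2^(-|p|/T) to the partition function, and at the critical temperature (T = 1 in these units, for a prefix-free machine) the sum is Chaitin's Ω. Since the true halting set is uncomputable, the program lengths below are a hypothetical stand-in used only to show the temperature dependence:

```python
def partition_function(halting_lengths, T):
    """Z(T) = sum over halting programs p of 2^(-|p|/T),
    with program size |p| playing the role of energy."""
    return sum(2 ** (-l / T) for l in halting_lengths)

# Hypothetical stand-in for the lengths of some halting programs;
# the real sum over a universal machine is uncomputable.
toy_lengths = [1, 2, 2, 3]
```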

Characterizations and approximability of hard counting classes below #P

An important objective of research in counting complexity is to understand which counting problems are approximable. In this quest, the complexity class TotP, a hard subclass of #P, is of key importance, as it contains self-reducible counting problems with easy decision version, which makes them eligible to be approximable. Indeed, most problems known so far to admit an fpras fall into this class. An open question raised recently by the community of descriptive complexity is to find a logical characterization of TotP and of robust subclasses of TotP. In this work we define two subclasses of TotP in terms of descriptive complexity, both of which are robust in the sense that they have natural complete problems, defined in terms of satisfiability of Boolean formulae. We then explore the relationship between the class of approximable counting problems and TotP. We prove that TotP ⊈ FPRAS if and only if NP ≠ RP, and that FPRAS ⊈ TotP unless RP = P. To this end we introduce two ancillary classes that can both be seen as counting versions of RP. We further show that FPRAS lies between one of these classes and a counting version of BPP. Finally, we provide a complete picture of the inclusions among all the classes defined or discussed in this paper under different conjectures about the relationships among P, RP, and NP.

Circuit Satisfiability Problem for circuits of small complexity

The following problem is considered. We are given a Turing machine M that accepts a string of fixed length t as input, runs for a time not exceeding a fixed value n, and is guaranteed to produce a binary output. It is required to find a string X such that M(X)=1, efficiently in terms of t, n, the size of the alphabet of M, and the number of states of M. The problem is close to the well-known Circuit Satisfiability Problem. The difference is that, when reduced to the Circuit Satisfiability Problem, we obtain circuits with a rich internal structure (in particular, circuits of small Kolmogorov complexity). We provide a proof system operating with potential proofs of the fact that, for a given machine M, the string X does not exist; we prove its completeness and present an algorithm guaranteed to find a proof of the absence of the string X in the case of its actual absence (in the worst case the algorithm is exponential, but in a wide class of interesting cases it works in polynomial time). We also present an algorithm searching for the string X, whose efficiency has been neither tested nor proven and which may require serious improvement in the future, so it can be regarded as an idea. Finally, we discuss first steps towards solving a more complex problem similar to this one: given a Turing machine M that accepts two strings X and Y of fixed length and runs for a time not exceeding a fixed value, it is required to build an algorithm N that, for any string X, builds a string Y=N(X) such that M(X,Y)=1 (details in the introduction).
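The trivial baseline for the search problem above is an exhaustive scan over all 2^t inputs; the point of the paper is to do better in terms of t, n, and the machine's description. A minimal sketch of that baseline, with a hypothetical predicate standing in for the machine M:

```python
from itertools import product

def find_accepting_input(M, t):
    """Exhaustive 2^t baseline: search for X in {0,1}^t with M(X) = 1.
    Returns None when no such X exists (the case the paper's proof
    system is designed to certify)."""
    for bits in product("01", repeat=t):
        X = "".join(bits)
        if M(X) == 1:
            return X
    return None

# Hypothetical stand-in for a time-bounded machine with binary output:
M = lambda X: 1 if X.count("1") == 2 and X.endswith("0") else 0
```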

Circuit equivalence in 2-nilpotent algebras

The circuit equivalence problem of a finite algebra A is the computational problem of deciding whether two circuits over A define the same function or not. This problem not only generalises the equivalence problem for Boolean circuits, but is also of high interest in universal algebra, as it models the problem of checking identities in A. In this paper we discuss the complexity for algebras from congruence modular varieties. A partial classification was already given by Idziak and Krzaczkowski, leaving essentially only a gap for nilpotent but not supernilpotent algebras. We start a systematic study of this open case, proving that the circuit equivalence problem is in P for 2-nilpotent such algebras.
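The naive decision procedure for circuit equivalence is exhaustive evaluation, which is exponential in the number of inputs; the paper's point is that for 2-nilpotent algebras this can be done in polynomial time. A sketch of the exponential baseline over Z_m (the example functions are illustrative):

```python
from itertools import product

def circuits_equivalent(f, g, m, n):
    """Naive circuit equivalence over Z_m: check that the two n-ary
    circuits f and g agree on all m^n inputs (exponential baseline)."""
    return all(f(*x) == g(*x) for x in product(range(m), repeat=n))

# Over Z_2, x + y and x - y define the same function.
f = lambda x, y: (x + y) % 2
g = lambda x, y: (x - y) % 2
h = lambda x, y: (x * y) % 2   # a different function
```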

Clique Is Hard on Average for Regular Resolution

We prove that for k ≪ n^{1/4}, regular resolution requires length n^{Ω(k)} to establish that an Erdős-Rényi graph with appropriately chosen edge density does not contain a k-clique. This lower bound is optimal up to the multiplicative constant in the exponent, and also implies unconditional n^{Ω(k)} lower bounds on the running time of several state-of-the-art algorithms for finding maximum cliques in graphs.
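For context, the brute-force clique search runs in n^{O(k)} time, so the n^{Ω(k)} lower bound says that, for the algorithms covered by the result, this exponent cannot be substantially improved. A minimal sketch of that baseline:

```python
from itertools import combinations

def has_k_clique(adj, k):
    """Naive k-clique search: try all C(n, k) vertex subsets and check
    that every pair inside the subset is an edge -- n^O(k) time."""
    n = len(adj)
    return any(
        all(adj[u][v] for u, v in combinations(S, 2))
        for S in combinations(range(n), k)
    )

# A 4-cycle has edges (2-cliques) but no triangle (3-clique).
cycle4 = [[False, True, False, True],
          [True, False, True, False],
          [False, True, False, True],
          [True, False, True, False]]
```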

Coarse-Grained Complexity for Dynamic Algorithms

To date, the only way to argue polynomial lower bounds for dynamic algorithms is via fine-grained complexity arguments. These arguments rely on strong assumptions about specific problems, such as the Strong Exponential Time Hypothesis (SETH) and the Online Matrix-Vector Multiplication Conjecture (OMv). While they have led to many exciting discoveries, dynamic algorithms still miss out on some benefits and lessons from the traditional ``coarse-grained'' approach that relates classes of problems such as P and NP. In this paper we initiate the study of coarse-grained complexity theory for dynamic algorithms. The following are among the questions this theory can answer. What if dynamic Orthogonal Vectors (OV) is easy in the cell-probe model? A research program for proving polynomial unconditional lower bounds for dynamic OV in the cell-probe model is motivated by the fact that many conditional lower bounds can be shown via reductions from the dynamic OV problem. Since the cell-probe model is more powerful than word RAM and has historically allowed smaller upper bounds, it might turn out that dynamic OV is easy in the cell-probe model, making this research direction infeasible. Our theory implies that if this is the case, there will be very interesting algorithmic consequences: if dynamic OV can be maintained in polylogarithmic worst-case update time in the cell-probe model, then so can several important dynamic problems, such as k-edge connectivity, (1+ϵ)-approximate mincut, (1+ϵ)-approximate matching, planar nearest neighbors, Chan's subset union and 3-vs-4 diameter. The same conclusion can be made when we replace dynamic OV by, e.g., subgraph connectivity, single source reachability, Chan's subset union, or 3-vs-4 diameter. Lower bounds for k-edge connectivity via dynamic OV? (See the full abstract in the pdf file.)
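The dynamic OV problem that anchors these reductions can be sketched in its naive form (illustrative code, not from the paper): maintain a set of Boolean vectors under insertions and deletions, and answer queries asking whether some stored vector is orthogonal to a given one.

```python
class DynamicOV:
    """Naive dynamic Orthogonal Vectors: O(1) updates, and queries that
    scan every stored vector -- the trivial baseline against which
    polynomial lower bounds are conjectured."""

    def __init__(self):
        self.vectors = set()

    def insert(self, v):
        self.vectors.add(tuple(v))

    def delete(self, v):
        self.vectors.discard(tuple(v))

    def query(self, q):
        """Is some stored vector orthogonal to q (no shared 1-coordinate)?"""
        return any(not any(a and b for a, b in zip(v, q))
                   for v in self.vectors)
```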
