Featured Researches

Computational Complexity

Computer-Simulation Model Theory (P= NP is not provable)

The simulation hypothesis says that all materials and events in reality (including the universe, our bodies, our thinking, our walking, etc.) are computations, and that reality is a computer simulation program like a video game. Everything we do (talking, reasoning, seeing, etc.) is a computation performed by the universe-computer that runs the simulation program. Inspired by the view of the simulation hypothesis (but independent of the hypothesis itself), we propose a new method of logical reasoning named "Computer-Simulation Model Theory" (CSMT). Computer-Simulation Model Theory is an extension of mathematical model theory in which mathematical structures are replaced by computer simulations, and the reasoning and computing activity of the reasoner is itself simulated in the model. CSMT argues the following: for a formula ϕ, construct a computer simulation model S such that (1) ϕ does not hold in S, and (2) the reasoner I (a human being, one who lives inside reality) cannot distinguish S from reality R; then I cannot prove ϕ in reality. Although CSMT is inspired by the simulation hypothesis, the reasoning method is independent of accepting that hypothesis: as we argue in this part, one may reject the simulation hypothesis and still regard CSMT as a valid reasoning method. As an application of Computer-Simulation Model Theory, we study the famous problem P vs NP. We let ϕ ≡ [P = NP] and construct a computer simulation model E such that P = NP does not hold in E.

Read more
Computational Complexity

Conditional Dichotomy of Boolean Ordered Promise CSPs

Promise Constraint Satisfaction Problems (PCSPs) are a generalization of Constraint Satisfaction Problems (CSPs) where each predicate has a strong and a weak form and, given a CSP instance, the objective is to distinguish whether the strong form can be satisfied vs. even the weak form cannot be satisfied. Since their formal introduction by Austrin, Guruswami, and Håstad, there has been a flurry of works on PCSPs [BBKO19, KO19, WZ20]. The key tool in studying PCSPs is the algebraic framework developed in the context of CSPs, in which one analyzes the closure properties of the satisfying solutions, known as polymorphisms. The polymorphisms of PCSPs are much richer than those of CSPs. In the Boolean case, we still do not know whether a dichotomy for PCSPs exists analogous to Schaefer's dichotomy result for CSPs. In this paper, we study a special case of Boolean PCSPs, namely Boolean Ordered PCSPs, in which the Boolean PCSPs contain the predicate x ≤ y. In the algebraic framework, this is the special case of Boolean PCSPs whose polymorphisms are monotone functions. We prove that Boolean Ordered PCSPs exhibit a computational dichotomy assuming the Rich 2-to-1 Conjecture [BKM21], which is a perfect-completeness surrogate of the Unique Games Conjecture. Assuming the Rich 2-to-1 Conjecture, we prove that a Boolean Ordered PCSP can be solved in polynomial time if for every ϵ > 0 it has polymorphisms where each coordinate has Shapley value at most ϵ, and that it is NP-hard otherwise. The algorithmic part of our dichotomy is based on a structural lemma showing that Boolean monotone functions in which every coordinate has low Shapley value have arbitrarily large threshold functions as minors. The hardness part proceeds by showing that the Shapley value is consistent under a uniformly random 2-to-1 minor. Of independent interest, we show that the Shapley value can be inconsistent under an adversarial 2-to-1 minor.
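To make the dichotomy criterion concrete, here is a small Python sketch (not from the paper) that computes the Shapley value of each coordinate of a Boolean function by direct enumeration; the example functions maj3 and dict3 are illustrative monotone functions, not constructions from the paper.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, n):
    """Exact Shapley values of the n coordinates of a Boolean function f
    (f maps an n-bit tuple to 0/1). For monotone f this is the
    Shapley-Shubik power index: phi_i is the weighted sum, over sets S
    not containing i, of the marginal contribution f(S u {i}) - f(S)."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                x = [0] * n
                for j in S:
                    x[j] = 1
                before = f(tuple(x))  # f(S)
                x[i] = 1
                after = f(tuple(x))   # f(S u {i})
                phi[i] += weight * (after - before)
    return phi

maj3 = lambda x: int(sum(x) >= 2)  # monotone: 3-bit majority
dict3 = lambda x: x[0]             # monotone: first coordinate decides

print(shapley_values(maj3, 3))   # -> [0.333.., 0.333.., 0.333..]
print(shapley_values(dict3, 3))  # -> [1.0, 0.0, 0.0]
```

The tractability condition above asks for polymorphisms in which every coordinate's Shapley value is at most ϵ: majority-style functions of growing arity satisfy this, while dictator-style functions concentrate all the Shapley value on one coordinate.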

Read more
Computational Complexity

Conditional Hardness of Earth Mover Distance

The Earth Mover Distance (EMD) between two sets of points A, B ⊆ R^d with |A| = |B| is the minimum total Euclidean distance of any perfect matching between A and B. One of its generalizations is asymmetric EMD, which is the minimum total Euclidean distance of any matching of size |A| between sets of points A, B ⊆ R^d with |A| ≤ |B|. The problems of computing EMD and asymmetric EMD are well-studied and have many applications in computer science, some of which also ask for the EMD-optimal matching itself. Unfortunately, all known algorithms require at least quadratic time to compute EMD exactly. Approximation algorithms with nearly linear time complexity in n are known (even for finding approximately optimal matchings), but suffer from exponential dependence on the dimension. In this paper we show that significant improvements in exact and approximate algorithms for EMD would contradict conjectures in fine-grained complexity. In particular, we prove the following results: (1) Under the Orthogonal Vectors Conjecture, there is some c > 0 such that EMD in Ω(c^(log* n)) dimensions cannot be computed in truly subquadratic time. (2) Under the Hitting Set Conjecture, for every δ > 0, no truly subquadratic time algorithm can find a (1 + 1/n^δ)-approximate EMD matching in ω(log n) dimensions. (3) Under the Hitting Set Conjecture, for every η = 1/ω(log n), no truly subquadratic time algorithm can find a (1 + η)-approximate asymmetric EMD matching in ω(log n) dimensions.
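For intuition about the quadratic barrier, the following Python sketch computes EMD exactly as a minimum-cost perfect matching via SciPy's Hungarian-algorithm solver; the random point sets and the dimension are arbitrary illustrative choices, and this is a textbook baseline rather than an algorithm from the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def exact_emd(A, B):
    """Exact EMD between equal-size point sets A, B in R^d: the minimum
    total Euclidean distance over perfect matchings. Building the cost
    matrix already takes Theta(n^2 d) time and the Hungarian solve is
    O(n^3), which is the kind of exact cost the hardness results above
    suggest cannot be made truly subquadratic."""
    assert len(A) == len(B)
    cost = cdist(A, B)                        # n x n Euclidean distances
    rows, cols = linear_sum_assignment(cost)  # optimal perfect matching
    return cost[rows, cols].sum(), list(zip(rows, cols))

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 8))  # 100 points in 8 dimensions
B = rng.standard_normal((100, 8))
value, matching = exact_emd(A, B)
print(value)
```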

Read more
Computational Complexity

Cons-free Programs and Complexity Classes between LOGSPACE and PTIME

Programming language concepts are used to give new perspectives on a long-standing open problem: is LOGSPACE = PTIME?
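As background (our illustration, not the paper's): a cons-free program may inspect and destruct its input but never allocates new data structures, and first-order cons-free programs are known to decide exactly the LOGSPACE languages. A minimal Python sketch in that style:

```python
# A program in cons-free style: it walks the input using only "head"
# and "tail" observations (rendered as index arithmetic over an
# immutable string) and never allocates a new data structure. The only
# working state is a pointer into the input, i.e. O(log n) bits, which
# is the intuition behind the cons-free/LOGSPACE correspondence.

def member(x: str, xs: str) -> bool:
    """Does character x occur in xs? Tail recursion rendered as a loop."""
    i = 0
    while i < len(xs):     # while the current suffix is non-empty
        if xs[i] == x:     # inspect the head of the suffix
            return True
        i += 1             # move to the tail
    return False

print(member("b", "aab"))  # True
print(member("c", "aab"))  # False
```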

Read more
Computational Complexity

Consensus-Halving: Does It Ever Get Easier?

In the ε-Consensus-Halving problem, a fundamental problem in fair division, there are n agents with valuations over the interval [0,1], and the goal is to divide the interval into pieces and assign a label "+" or "−" to each piece, such that every agent values the total amount of "+" and the total amount of "−" almost equally. The problem was recently proven by Filos-Ratsikas and Goldberg [2019] to be the first "natural" complete problem for the computational class PPA, answering a decade-old open question. In this paper, we examine the extent to which the problem becomes easier to solve if one restricts the class of valuation functions. To this end, we provide the following contributions. First, we strengthen the PPA-hardness result of [Filos-Ratsikas and Goldberg, 2019] to the case when agents have piecewise uniform valuations with only two blocks. We obtain this result via a new reduction, which is in fact conceptually much simpler than the corresponding one in [Filos-Ratsikas and Goldberg, 2019]. Then, we consider the case of single-block (uniform) valuations and provide a parameterized polynomial-time algorithm for solving ε-Consensus-Halving for any ε, as well as a polynomial-time algorithm for ε = 1/2; these are the first algorithmic results for the problem. Finally, an important application of our new techniques is the first hardness result for a generalization of Consensus-Halving, the Consensus-1/k-Division problem. In particular, we prove that ε-Consensus-1/3-Division is PPAD-hard.
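As a concrete illustration of the objects involved, here is a hypothetical Python checker (not from the paper) that verifies a candidate ε-Consensus-Halving solution for agents with piecewise-uniform valuations; the two-block agents in the example mirror the restricted class for which the paper proves hardness.

```python
def agent_value(blocks, a, b):
    """Value that an agent with a piecewise-uniform valuation assigns to
    the interval [a, b]; blocks is a list of (left, right, density)."""
    return sum(d * max(0.0, min(b, r) - max(a, l)) for l, r, d in blocks)

def is_consensus_halving(agents, pieces, eps):
    """Verify an eps-Consensus-Halving candidate: pieces is a list of
    (a, b, sign) intervals partitioning [0, 1] with sign in {+1, -1};
    every agent must value the '+' and '-' parts within eps."""
    for blocks in agents:
        plus = sum(agent_value(blocks, a, b) for a, b, s in pieces if s > 0)
        minus = sum(agent_value(blocks, a, b) for a, b, s in pieces if s < 0)
        if abs(plus - minus) > eps:
            return False
    return True

# Two agents with two-block piecewise-uniform valuations (total mass 1
# each); a single cut at 0.5 happens to satisfy both agents here.
agents = [
    [(0.00, 0.25, 2.0), (0.75, 1.00, 2.0)],
    [(0.25, 0.50, 2.0), (0.50, 0.75, 2.0)],
]
pieces = [(0.0, 0.5, +1), (0.5, 1.0, -1)]
print(is_consensus_halving(agents, pieces, eps=1e-9))  # True
```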

Read more
Computational Complexity

Consequences of APSP, triangle detection, and 3SUM hardness for separation between determinism and non-determinism

We present implications of known conjectures such as APSP, 3SUM, and ETH in the form of a negated containment of a linear-time class with a nondeterministic logarithmic-bit oracle in a respective deterministic bounded-time class. The implications differ between the conjectures and, in particular, exhibit a dependency on the input range parameters.
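For context on one of the conjectures invoked above: 3SUM asks whether three input numbers sum to zero, and the 3SUM conjecture asserts that no truly subquadratic algorithm exists. A standard O(n^2) baseline (our illustration, unrelated to the paper's reductions):

```python
def three_sum(nums):
    """Decide whether distinct indices i, j, k satisfy
    nums[i] + nums[j] + nums[k] == 0. Sorting plus two pointers gives
    O(n^2) time; the 3SUM conjecture asserts that no O(n^(2-eps))
    algorithm exists for any constant eps > 0."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1   # total too small: advance the left pointer
            else:
                hi -= 1   # total too large: retreat the right pointer
    return False

print(three_sum([-5, 1, 4, 2, -1]))  # True: -5 + 1 + 4 == 0
print(three_sum([1, 2, 3]))          # False
```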

Read more
Computational Complexity

Consistency of circuit lower bounds with bounded theories

Proving that there are problems in P^NP that require Boolean circuits of super-linear size is a major frontier in complexity theory. While such lower bounds are known for larger complexity classes, existing results only show that the corresponding problems are hard on infinitely many input lengths. For instance, proving almost-everywhere circuit lower bounds is open even for problems in MAEXP. Given the notorious difficulty of proving lower bounds that hold for all large input lengths, we ask the following question: can we show that a large set of techniques cannot prove that NP is easy infinitely often? Motivated by this and related questions about the interaction between mathematical proofs and computations, we investigate circuit complexity from the perspective of logic. Among other results, we prove that for any parameter k ≥ 1 it is consistent with the theory T that the computational class C ⊈ i.o.SIZE(n^k), where (T, C) is one of the pairs: T = T^1_2 and C = P^NP; T = S^1_2 and C = NP; T = PV and C = P. In other words, these theories cannot establish infinitely-often circuit upper bounds for the corresponding problems. This is of interest because the weaker theory PV already formalizes sophisticated arguments, such as a proof of the PCP Theorem. These consistency statements are unconditional and improve on earlier theorems of [KO17] and [BM18] on the consistency of lower bounds with PV.

Read more
Computational Complexity

Constant-Space, Constant-Randomness Verifiers with Arbitrarily Small Error

We study the capabilities of probabilistic finite-state machines that act as verifiers for certificates of language membership for input strings, in the regime where the verifiers are restricted to toss some fixed nonzero number of coins regardless of the input size. Say and Yakaryılmaz showed that the class of languages that can be verified by these machines within an error bound strictly less than 1/2 is precisely NL, but their construction yields verifiers with error bounds very close to 1/2 for most languages in that class when the definition of "error" is strengthened to include looping forever without giving a response. We characterize a subset of NL for which verification with arbitrarily low error is possible by these extremely weak machines. It turns out that, for any ε > 0, one can construct a constant-coin, constant-space verifier operating within error ε for every language that is recognizable by a linear-time multi-head nondeterministic finite automaton (2nfa(k)). We discuss why it is difficult to generalize this method to all of NL, and give a reasonably tight way to relate the power of linear-time 2nfa(k)'s to simultaneous time-space complexity classes defined in terms of Turing machines.

Read more
Computational Complexity

Constructing Segmented Differentiable Quadratics to Determine Algorithmic Run Times and Model Non-Polynomial Functions

We propose an approach to determine the continual progression of algorithmic efficiency, as an alternative to standard calculations of time complexity, likely, but not exclusively, when dealing with data structures with unknown maximum indexes and with algorithms that depend on multiple variables apart from just input size. The proposed method can effectively determine the run-time behavior F at any given index x, as well as ∂F/∂x, as a function of one or multiple arguments, by combining n/2 quadratic segments, based upon the principles of Lagrangian polynomials and their respective secant lines. Although the approach is designed for analyzing the efficacy of computational algorithms, the proposed method can be used within the pure mathematical field as a novel way to construct non-polynomial functions, such as log_2(n) or (n+1)/(n−2), as a series of segmented differentiable quadratics to model functional behavior and recurring natural patterns. After testing, our method had an average accuracy above 99% with regard to functional resemblance.
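The following Python sketch shows one plausible reading of the construction (our assumptions: each quadratic segment interpolates a consecutive triple of samples via the Lagrange formula, and adjacent segments share an endpoint); it is illustrative rather than the authors' exact method.

```python
import math

def lagrange_quadratic(p0, p1, p2):
    """The unique quadratic through three points, in Lagrange form."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    def q(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return q

def segmented_fit(xs, ys):
    """Piecewise model: one Lagrange quadratic per consecutive triple of
    samples, with adjacent segments sharing an endpoint."""
    segments = []
    for i in range(0, len(xs) - 2, 2):
        q = lagrange_quadratic((xs[i], ys[i]), (xs[i + 1], ys[i + 1]),
                               (xs[i + 2], ys[i + 2]))
        segments.append((xs[i], xs[i + 2], q))
    def model(x):
        for lo, hi, q in segments:
            if lo <= x <= hi:
                return q(x)
        raise ValueError("x outside the fitted range")
    return model

# Model the non-polynomial function f(n) = log2(n) from sampled values,
# as in the abstract's example, then query between the sample points.
xs = list(range(2, 22, 2))
ys = [math.log2(x) for x in xs]
f = segmented_fit(xs, ys)
print(f(7), math.log2(7))  # close agreement off the sample grid
```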

Read more
Computational Complexity

Continuous LWE

We introduce a continuous analogue of the Learning with Errors (LWE) problem, which we name CLWE. We give a polynomial-time quantum reduction from worst-case lattice problems to CLWE, showing that CLWE enjoys similar hardness guarantees to those of LWE. Alternatively, our result can also be seen as opening new avenues of (quantum) attacks on lattice problems. Our work resolves an open problem regarding the computational complexity of learning mixtures of Gaussians without separability assumptions (Diakonikolas 2016, Moitra 2018). As an additional motivation, (a slight variant of) CLWE was considered in the context of robust machine learning (Diakonikolas et al., FOCS 2017), where hardness in the statistical query (SQ) model was shown; our work addresses the open question regarding its computational hardness (Bubeck et al., ICML 2019).
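To fix ideas, here is a Python sketch of sampling from (our reading of) the CLWE distribution: samples (y, z) with y drawn from a standard Gaussian and z = γ⟨y, w⟩ + e mod 1 for a secret unit vector w and Gaussian noise e; the parameter values and dimensions below are arbitrary illustrative choices.

```python
import numpy as np

def clwe_samples(m, n, gamma, beta, rng):
    """Draw m CLWE-style samples (y, z): y ~ N(0, I_n) and
    z = gamma * <y, w> + e (mod 1) with noise e ~ N(0, beta^2),
    for a random secret unit vector w. The decision problem is to
    distinguish such pairs from ones where z is uniform on [0, 1)."""
    w = rng.standard_normal(n)
    w /= np.linalg.norm(w)           # secret direction on the unit sphere
    y = rng.standard_normal((m, n))
    e = beta * rng.standard_normal(m)
    z = (gamma * (y @ w) + e) % 1.0  # labels live on the torus R/Z
    return y, z, w

rng = np.random.default_rng(1)
y, z, w = clwe_samples(m=1000, n=32, gamma=4.0, beta=0.01, rng=rng)

# With the secret w in hand the planted structure is visible: the
# residues below concentrate near 0 at width ~ beta, whereas without w
# the labels z look nearly uniform. This hidden periodic direction in
# Gaussian data is what connects CLWE to learning mixtures of Gaussians.
resid = (z - 4.0 * (y @ w)) % 1.0
print(np.minimum(resid, 1.0 - resid).mean())  # small, on the order of beta
```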

Read more
