Featured Research

Computational Complexity

Necessary and Sufficient Condition for Satisfiability of a Boolean Formula in CNF and its Implications on P versus NP problem

The Boolean satisfiability problem has applications in various fields. An efficient algorithm to solve the satisfiability problem can be used to solve many other problems efficiently. The input to the satisfiability problem is a finite set of clauses. In this paper, properties of clauses have been studied. A type of clause has been defined, called a fully populated clause, which contains each variable exactly once. A relationship between two unequal fully populated clauses has been defined, called sibling clauses. It has been found that if one fully populated clause is false for a truth assignment, then all its sibling clauses will be true for the same truth assignment. This leads to the necessary and sufficient condition for satisfiability of a Boolean formula in CNF. The necessary and sufficient condition has been used to develop a novel algorithm to solve the Boolean satisfiability problem in polynomial time, which implies P equals NP. Further, some optimisations have been provided that can be integrated with the algorithm for better performance.
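The sibling-clause observation can be checked by brute force for a small number of variables. The sketch below is our own illustration (not the paper's algorithm): it enumerates all 2^n fully populated clauses for n = 3 and verifies that every truth assignment falsifies exactly one of them, so all of that clause's siblings are true.

```python
from itertools import product

def clause_value(clause, assignment):
    # A clause is a set of literals: literal i means x_i, literal -i means NOT x_i.
    return any((lit > 0) == assignment[abs(lit)] for lit in clause)

n = 3
variables = range(1, n + 1)
# All 2^n fully populated clauses: each variable appears exactly once, with a sign.
fully_populated = [frozenset(s * v for s, v in zip(signs, variables))
                   for signs in product((1, -1), repeat=n)]

for values in product((False, True), repeat=n):
    assignment = dict(zip(variables, values))
    falsified = [c for c in fully_populated if not clause_value(c, assignment)]
    # Exactly one fully populated clause is falsified by each assignment;
    # all of its sibling clauses are satisfied.
    assert len(falsified) == 1
```

Since each assignment falsifies a distinct fully populated clause, a CNF formula built only from fully populated clauses is unsatisfiable exactly when all 2^n such clauses are present.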

Computational Complexity

New Exponential Size Lower Bounds against Depth Four Circuits of Bounded Individual Degree

Kayal, Saha and Tavenas [Theory of Computing, 2018] showed that for all large enough integers n and d such that d ≥ ω(log n), any syntactic depth four circuit of bounded individual degree δ = o(d) that computes the Iterated Matrix Multiplication polynomial (IMM_{n,d}) must have size n^{Ω(√(d/δ))}. Unfortunately, this bound deteriorates as the value of δ increases. Further, the bound is superpolynomial only when δ is o(d). It is natural to ask if the dependence on δ in the bound could be weakened. Towards this, in an earlier result [STACS, 2020], we showed that for all large enough integers n and d such that d = Θ(log² n), any syntactic depth four circuit of bounded individual degree δ ≤ n^{0.2} that computes IMM_{n,d} must have size n^{Ω(log n)}. In this paper, we make further progress by proving that for all large enough integers n and d, and absolute constants a and b such that ω(log² n) ≤ d ≤ n^a, any syntactic depth four circuit of bounded individual degree δ ≤ n^b that computes IMM_{n,d} must have size n^{Ω(√d)}. Our bound is obtained by carefully adapting the proof of Kumar and Saraf [SIAM J. Computing, 2017] to the complexity measure introduced in our earlier work [STACS, 2020].

Computational Complexity

New Techniques for Proving Fine-Grained Average-Case Hardness

The recent emergence of fine-grained cryptography strongly motivates developing an average-case analogue of Fine-Grained Complexity (FGC). This paper defines new versions of OV, k-SUM and zero-k-clique that are both worst-case and average-case fine-grained hard assuming the core hypotheses of FGC. We then use these as a basis for fine-grained hardness and average-case hardness of other problems. The new problems represent their inputs in a certain "factored" form. We call them "factored"-OV, "factored"-zero-k-clique and "factored"-3-SUM. We show that factored-k-OV and factored-k-SUM are equivalent and are complete for a class of problems defined over Boolean functions. Factored zero-k-clique is also complete, for a different class of problems. Our hard factored problems are also simple enough that we can reduce them to many other problems, e.g. to edit distance, k-LCS and versions of Max-Flow. We further consider counting variants of the factored problems and give WCtoACFG reductions for them for a natural distribution. Through FGC reductions we then get average-case hardness for well-studied problems like regular expression matching from standard worst-case FGC assumptions. To obtain our WCtoACFG reductions, we formalize the framework of [Boix-Adsera et al. 2019] that was used to give a WCtoACFG reduction for counting k-cliques. We define an explicit property of problems such that, if a problem has that property, one can use the framework on the problem to get a WCtoACFG self-reduction. We then use the framework to slightly extend Boix-Adsera et al.'s average-case counting k-cliques result to average-case hardness for counting arbitrary subgraph patterns of constant size in k-partite graphs...

Computational Complexity

No-Rainbow Problem and the Surjective Constraint Satisfaction Problem

The Surjective Constraint Satisfaction Problem (SCSP) is the problem of deciding whether there exists a surjective assignment to a set of variables subject to some specified constraints, where a surjective assignment is one that uses every element of the domain. In this paper we show that the most famous SCSP, called the No-Rainbow Problem, is NP-hard. Additionally, we disprove the conjecture that the SCSP over a constraint language Γ and the CSP over the same language with constants have the same computational complexity up to poly-time reductions. Our counter-example also shows that the complexity of the SCSP cannot be described in terms of polymorphisms of the constraint language.
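The definition of the SCSP can be made concrete with an exhaustive-search decider. The sketch below uses our own toy encoding (constraints as sets of allowed tuples), not anything from the paper:

```python
from itertools import product

def scsp_brute_force(num_vars, domain, constraints):
    """Decide the SCSP by exhaustive search: is there an assignment that
    satisfies every constraint and uses every domain element at least once?
    Each constraint is (scope, allowed): scope is a tuple of variable
    indices, allowed is a set of permitted value tuples for that scope."""
    for assignment in product(domain, repeat=num_vars):
        if set(assignment) != set(domain):   # surjectivity check
            continue
        if all(tuple(assignment[i] for i in scope) in allowed
               for scope, allowed in constraints):
            return True
    return False

# Toy instance over domain {0,1,2}: a single "not-all-equal" constraint.
nae = {(a, b, c) for a in range(3) for b in range(3) for c in range(3)
       if not (a == b == c)}
print(scsp_brute_force(3, (0, 1, 2), [((0, 1, 2), nae)]))  # True
```

The search is exponential, of course; the point of the paper is precisely that for some constraint languages no essentially better algorithm is expected.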

Computational Complexity

Notes on Hazard-Free Circuits

The problem of constructing hazard-free Boolean circuits (those avoiding electronic glitches) dates back to the 1940s and is an important problem in circuit design and even in cybersecurity. We show that a DeMorgan circuit is hazard-free if and only if the circuit produces (purely syntactically) all prime implicants as well as all prime implicates of the Boolean function it computes. This extends to arbitrary DeMorgan circuits a classical result of Eichelberger [IBM J. Res. Develop., 9 (1965)] showing this property for special depth-two circuits. Via an amazingly simple proof, we also strengthen a recent result of Ikenmeyer et al. [J. ACM, 66:4 (2019)]: not only do the complexities of hazard-free and monotone circuits for monotone Boolean functions coincide, but every optimal hazard-free circuit for a monotone Boolean function must be monotone. Then we show that the hazard-free circuit complexity of a very simple (non-monotone) Boolean function is super-polynomially larger than its unrestricted circuit complexity. This function accepts a Boolean n × n matrix iff every row and every column has exactly one 1-entry. Finally, we show that every Boolean function of n variables can be computed by a hazard-free circuit of size O(2^n/n).

Computational Complexity

Nullstellensatz Size-Degree Trade-offs from Reversible Pebbling

We establish an exactly tight relation between reversible pebblings of graphs and Nullstellensatz refutations of pebbling formulas, showing that a graph G can be reversibly pebbled in time t and space s if and only if there is a Nullstellensatz refutation of the pebbling formula over G in size t+1 and degree s (independently of the field in which the Nullstellensatz refutation is made). We use this correspondence to prove a number of strong size-degree trade-offs for Nullstellensatz, which to the best of our knowledge are the first such results for this proof system.

Computational Complexity

On d-distance m-tuple (ℓ,r)-domination in graphs

In this article, we study the d-distance m-tuple (ℓ,r)-domination problem. Given a simple undirected graph G = (V,E) and positive integers d, m, ℓ and r, a subset V′ ⊆ V is said to be a d-distance m-tuple (ℓ,r)-dominating set if it satisfies the following conditions: (i) each vertex v ∈ V is d-distance dominated by at least m vertices in V′, and (ii) each size-r subset U of V is d-distance dominated by at least ℓ vertices in V′. Here, a vertex v is d-distance dominated by another vertex u if the shortest path distance between u and v is at most d in G, and a set U is d-distance dominated by at least ℓ vertices if the union of the d-distance neighbourhoods of the vertices of U contains at least ℓ vertices of V′. The objective of the d-distance m-tuple (ℓ,r)-domination problem is to find a minimum size subset V′ ⊆ V satisfying the above two conditions. We prove that the problems of deciding whether a graph G has (i) a 1-distance m-tuple (ℓ,r)-dominating set for each fixed value of m, ℓ and r, and (ii) a d-distance m-tuple (ℓ,2)-dominating set for each fixed value of d (> 1), m and ℓ, of cardinality at most k (here k is a positive integer) are NP-complete. We also prove that for any ε > 0, the 1-distance m-tuple (ℓ,r)-domination problem and the d-distance m-tuple (ℓ,2)-domination problem cannot be approximated within a factor of (1/2 − ε) ln|V| and (1/4 − ε) ln|V|, respectively, unless P = NP.
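The two conditions in the definition can be verified directly for a candidate set V′. The routine below is our own sketch of that check (names are ours, and we assume the d-distance neighbourhood of a vertex includes the vertex itself), not the paper's algorithm:

```python
from itertools import combinations

def is_dm_lr_dominating(adj, d, m, ell, r, dset):
    """Check whether dset is a d-distance m-tuple (ell, r)-dominating set of
    the graph given by adjacency lists adj (dict: vertex -> list of vertices)."""
    def ball(v):
        # BFS out to distance d; the ball includes v itself (assumption).
        dist, frontier = {v: 0}, [v]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in dist and dist[u] + 1 <= d:
                        dist[w] = dist[u] + 1
                        nxt.append(w)
            frontier = nxt
        return set(dist)

    balls = {v: ball(v) for v in adj}
    # (i) every vertex is d-distance dominated by at least m vertices of dset
    if any(len(balls[v] & dset) < m for v in adj):
        return False
    # (ii) for every size-r subset U, the union of the balls of U must
    # contain at least ell vertices of dset
    return all(len(set().union(*(balls[u] for u in U)) & dset) >= ell
               for U in combinations(adj, r))
```

For example, on the path 0–1–2 with d = m = ℓ = r = 1, the single middle vertex {1} is dominating, while {0} is not (vertex 2 has no dominator within distance 1).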

Computational Complexity

On Affine Reachability Problems

We analyze affine reachability problems in dimensions 1 and 2. We show that the reachability problem for 1-register machines over the integers with affine updates is PSPACE-hard, hence PSPACE-complete, strengthening a result by Finkel et al. that required polynomial updates. Building on recent results on two-dimensional integer matrices, we prove NP-completeness of the mortality problem for 2-dimensional integer matrices with determinants +1 and 0. Motivated by tight connections with 1-dimensional affine reachability problems without control states, we also study the complexity of a number of reachability problems in finitely generated semigroups of 2-dimensional upper-triangular integer matrices.

Computational Complexity

On Approximability of Clustering Problems Without Candidate Centers

The k-means objective is arguably the most widely-used cost function for modeling clustering tasks in a metric space. In practice and historically, k-means is thought of in a continuous setting, namely where the centers can be located anywhere in the metric space. For example, the popular Lloyd's heuristic locates a center at the mean of each cluster. Despite persistent efforts on understanding the approximability of k-means, and other classic clustering problems such as k-median and k-minsum, our knowledge of the hardness of approximation factors of these problems remains quite poor. In this paper, we significantly improve upon the hardness of approximation factors known in the literature for these objectives. We show that if the input lies in a general metric space, it is NP-hard to approximate:

∙ Continuous k-median to a factor of 2 − o(1); this improves upon the previous inapproximability factor of 1.36 shown by Guha and Khuller (J. Algorithms '99).
∙ Continuous k-means to a factor of 4 − o(1); this improves upon the previous inapproximability factor of 2.10 shown by Guha and Khuller (J. Algorithms '99).
∙ k-minsum to a factor of 1.415; this improves upon the APX-hardness shown by Guruswami and Indyk (SODA '03).

Our results shed new and perhaps counter-intuitive light on the differences between clustering problems in the continuous setting versus the discrete setting (where the candidate centers are given as part of the input).
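For reference, the Lloyd's heuristic mentioned above alternates two steps: assign each point to its nearest center, then move each center to the mean of its cluster. A minimal sketch for points in the plane (our illustration; initializing centers from random input points is an assumption, and the heuristic carries no approximation guarantee):

```python
import random

def lloyd(points, k, iters=50, seed=0):
    """Lloyd's heuristic for continuous k-means on 2D points
    (list of (x, y) tuples). Returns the k centers found."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initialize at k input points
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: (p[0] - centers[j][0]) ** 2
                                + (p[1] - centers[j][1]) ** 2)
            clusters[i].append(p)
        # Update step: move each center to the mean of its cluster
        # (continuous setting: the mean need not be an input point).
        centers = [(sum(p[0] for p in c) / len(c),
                    sum(p[1] for p in c) / len(c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers
```

On two well-separated point groups this converges to the two group means, illustrating why continuous centers (means) can beat any center restricted to the input points, which is exactly the continuous-versus-discrete gap the paper studies.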

Computational Complexity

On Asymmetric Unification for the Theory of XOR with a Homomorphism

Asymmetric unification, or unification with irreducibility constraints, is a newly developed paradigm that arose out of the automated analysis of cryptographic protocols. However, there are still relatively few asymmetric unification algorithms. In this paper we address this lack by exploring the application of automata-based unification methods. We examine the theory of xor with a homomorphism, ACUNh, from the point of view of asymmetric unification, and develop a new automata-based decision procedure. Then, we adapt a recently developed asymmetric combination procedure to produce a general asymmetric-ACUNh decision procedure. Finally, we present a new approach for obtaining a solution-generating asymmetric-ACUNh unification automaton. We also compare our approach to the most commonly used form of asymmetric unification available today, variant unification.

