Featured Research

Computational Complexity

Digraph homomorphism problem and weak near unanimity polymorphism

We consider the problem of finding a homomorphism from an input digraph G to a fixed digraph H. We show that if H admits a weak near unanimity polymorphism ϕ, then deciding whether G admits a homomorphism to H (the problem HOM(H)) is polynomial-time solvable. This yields a proof of the dichotomy conjecture (now the dichotomy theorem) of Feder and Vardi. Our approach is combinatorial and simpler than the two algorithms found by Bulatov and Zhuk. We have implemented our algorithm and present some experimental results. Combining our algorithm with the recent result [38] on recognition of Maltsev polymorphisms, we also decide in polynomial time whether a given relational structure R admits a weak near unanimity polymorphism.
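
As a concrete illustration of the kind of local consistency checking that homomorphism algorithms build on, here is a minimal arc-consistency sketch in Python. It prunes, for each vertex of G, its set of candidate images in H. This is a standard preprocessing step and only a necessary condition, not the paper's full WNU-based algorithm; the representation and names are our own.

```python
def arc_consistency(G, H):
    """Prune candidate images L[v] within V(H) for each v in V(G).

    G, H: digraphs given as (vertices, set of arcs (u, v)).
    Returns the pruned candidate sets, or None if some set empties,
    in which case no homomorphism from G to H exists.  Surviving
    the test is necessary but in general not sufficient.
    """
    VG, EG = G
    VH, EH = H
    L = {v: set(VH) for v in VG}
    changed = True
    while changed:
        changed = False
        for (u, v) in EG:
            # keep images of u that have an H-arc into some image of v,
            # and images of v reachable by an H-arc from some image of u
            ok_u = {a for a in L[u] if any((a, b) in EH for b in L[v])}
            ok_v = {b for b in L[v] if any((a, b) in EH for a in L[u])}
            if ok_u != L[u] or ok_v != L[v]:
                L[u], L[v] = ok_u, ok_v
                changed = True
                if not L[u] or not L[v]:
                    return None
    return L

# Directed 3-path into a directed 3-cycle: all candidates survive,
# and indeed a homomorphism exists (wrap the path around the cycle).
P = ({0, 1, 2, 3}, {(0, 1), (1, 2), (2, 3)})
C = ({0, 1, 2}, {(0, 1), (1, 2), (2, 0)})
print(arc_consistency(P, C))
```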

Read more
Computational Complexity

Direct Product Primality Testing of Graphs is GI-hard

We investigate the computational complexity of the graph primality testing problem with respect to the direct product (also known as the Kronecker, cardinal, or tensor product). In [1], Imrich proves that, for (finite) connected nonbipartite graphs, primality can be tested and a unique prime factorization computed in polynomial time. He poses as an open problem how these results extend to connected bipartite graphs and to disconnected graphs. In this paper we partially answer this question by proving that the graph isomorphism problem is polynomial-time many-one reducible to the graph compositeness testing problem (the complement of the graph primality testing problem). As a consequence, we prove that the graph isomorphism problem is polynomial-time Turing reducible to the primality testing problem. Our results show that connectedness plays a crucial role in the computational complexity of graph primality testing.
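
For readers unfamiliar with the product in question, the following small Python sketch (representation ours) builds the direct product of two graphs given as adjacency dicts. The K2 × K2 example shows how the product of two connected bipartite graphs falls apart into two components, which is exactly why the bipartite and disconnected cases resist the connected nonbipartite theory.

```python
def direct_product(G, H):
    """Direct (Kronecker/tensor/cardinal) product of two graphs.

    G, H: adjacency dicts {vertex: set of neighbors}.
    (g1, h1) is adjacent to (g2, h2) iff g1 ~ g2 in G and h1 ~ h2 in H.
    """
    P = {(g, h): set() for g in G for h in H}
    for g in G:
        for g2 in G[g]:
            for h in H:
                for h2 in H[h]:
                    P[(g, h)].add((g2, h2))
    return P

K2 = {0: {1}, 1: {0}}
print(direct_product(K2, K2))
# The product splits into two disjoint copies of K2:
#   (0,0)-(1,1)  and  (0,1)-(1,0)
# i.e. the direct product of two connected bipartite graphs
# is disconnected.
```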

Read more
Computational Complexity

Disjointness through the Lens of Vapnik-Chervonenkis Dimension: Sparsity and Beyond

The disjointness problem, where Alice and Bob are given two subsets of {1,…,n} and must decide whether their sets intersect, is a central problem in communication complexity. While both the deterministic and randomized communication complexities of this problem are known to be Θ(n), it is also known that if the sets are drawn from certain restricted set systems, the communication complexity can be much lower. In this work, we explore how communication complexity measures change with the complexity of the underlying set system, which we measure by the Vapnik-Chervonenkis (VC) dimension. More precisely, for set systems of VC dimension bounded by d, we analyze how large the deterministic and randomized communication complexities can be as a function of d and n. We construct two natural set systems of VC dimension d, motivated by geometry, and use them to show that the deterministic and randomized communication complexity can be Θ̃(d log(n/d)) for set systems of VC dimension d; this matches the deterministic upper bound for all set systems of VC dimension d. We also study the deterministic and randomized communication complexities of the set intersection problem when the sets belong to a set system of bounded VC dimension. We show that there exist set systems of VC dimension d for which both the deterministic and the randomized (one-way and multi-round) complexity of the set intersection problem can be as high as Θ(d log(n/d)), and this is tight among all set systems of VC dimension d.
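
To make the central parameter concrete, here is a brute-force VC-dimension computation for a finite set system. This is our own illustrative sketch: it runs in exponential time and is meant only to exhibit the definition of shattering on toy systems.

```python
from itertools import combinations

def vc_dimension(universe, sets):
    """Largest d such that some d-subset S of the universe is
    shattered: every subset of S arises as S ∩ T for a set T in
    the system.  Exponential time; only for tiny systems."""
    sets = [frozenset(s) for s in sets]
    best = 0
    for d in range(1, len(universe) + 1):
        shattered = False
        for S in combinations(universe, d):
            S = frozenset(S)
            traces = {S & T for T in sets}
            if len(traces) == 2 ** d:   # all 2^d subsets realized
                shattered = True
                break
        if not shattered:
            break
        best = d
    return best

# Prefixes ("half-lines") {1..i} on {1,..,n} have VC dimension 1:
# a pair {a, b} with a < b can never trace to {b} alone.
n = 8
halflines = [set(range(1, i + 1)) for i in range(n + 1)]
print(vc_dimension(list(range(1, n + 1)), halflines))  # -> 1
```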

Read more
Computational Complexity

Distributed Verifiers in PCP

Traditional proof systems involve a resource-bounded verifier communicating with a powerful (but untrusted) prover. Distributed verifier proof systems are a newer family of proof models in which a network of verifier nodes communicates with a single independent prover that has access to the complete network structure of the verifiers. The prover is tasked with convincing all verifiers of some global property of the network graph. In addition, each individual verifier may be given an input string it is required to verify during the computation. Verifier nodes may exchange messages with nodes a constant distance away, and accept or reject the input after some computation. Because individual nodes are limited to a local view, communication with the prover is potentially necessary to prove global properties of the network graph, which only the prover can see in full. In these models, the system accepts the input if and only if every individual node accepts. The family contains three models: LCP, dIP, and our proposed dPCP, the fundamental difference between them being the type of communication established between the verifiers and the prover. In this paper, we first survey prior work on LCP and dIP, and then establish properties and proofs in our dPCP system.
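
To give a flavor of how a locally checkable proof (LCP) works, here is a hedged Python sketch of a textbook warm-up (our illustration, not a construction from this paper): the prover hands every node a claimed distance to a root, and each node runs a purely local test. If all nodes accept, the labels are forced to be the true BFS distances, so a global property is certified by local checks.

```python
def locally_verify_bfs_labels(adj, root, dist):
    """Each node locally checks a prover-supplied distance label.

    adj: {v: set of neighbors}; dist: {v: claimed distance to root}.
    Local tests: the root claims 0; neighbor labels differ by at
    most 1; every non-root has a neighbor claiming one less.
    If every node accepts, an induction on labels shows each node
    reaches the root and dist is the exact BFS distance.
    """
    for v in adj:
        if v == root:
            if dist[v] != 0:
                return False
            continue
        if dist[v] <= 0:
            return False
        if not any(dist[u] == dist[v] - 1 for u in adj[v]):
            return False
        if any(abs(dist[u] - dist[v]) > 1 for u in adj[v]):
            return False
    return True

# A path 0-1-2 rooted at 0: honest labels pass, cheating fails.
path = {0: {1}, 1: {0, 2}, 2: {1}}
print(locally_verify_bfs_labels(path, 0, {0: 0, 1: 1, 2: 2}))  # True
print(locally_verify_bfs_labels(path, 0, {0: 0, 1: 1, 2: 5}))  # False
```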

Read more
Computational Complexity

Dynamic Complexity of Expansion

Dynamic Complexity was introduced by Immerman and Patnaik [PatnaikImmerman97] (see also [DongST95]). It has seen a resurgence of interest in the recent past; see [DattaHK14, ZeumeS15, MunozVZ16, BouyerJ17, Zeume17, DKMSZ18, DMVZ18, BarceloRZ18, DMSVZ19, SchmidtSVZK20, DKMTVZ20] for some representative examples. The use of linear algebra has been a notable feature of several of these papers. We extend this theme to show that the gap version of spectral expansion in bounded-degree graphs can be maintained in the class DynAC^0 (also known as DynFO, for domain-independent queries) under batch changes (insertions and deletions) of O(log n / log log n) many edges. The spectral graph-theoretic material of this work is based on the paper by Kale-Seshadri [KaleS11]. Our primary technical contribution is to maintain up to logarithmic powers of the transition matrix of a bounded-degree undirected graph in DynAC^0.
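
For intuition about the quantity being maintained, the following numpy sketch (our own, computed offline rather than dynamically) evaluates the spectral gap of the degree-normalized transition matrix of a bounded-degree graph. The paper's contribution is maintaining powers of such a matrix under edge updates inside DynAC^0, which this snippet does not attempt.

```python
import numpy as np

def spectral_gap(adj, d):
    """Spectral gap 1 - |lambda_2| of the transition matrix of a
    graph with maximum degree at most d (self-loops are added at
    vertices of degree < d to make the matrix stochastic)."""
    n = len(adj)
    A = np.zeros((n, n))
    for v, nbrs in adj.items():
        for u in nbrs:
            A[v, u] = 1.0
    M = A / d + np.diag(1.0 - A.sum(axis=1) / d)
    eig = np.sort(np.abs(np.linalg.eigvalsh(M)))[::-1]
    return 1.0 - eig[1]   # bounded away from 0 for expanders

K4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
print(spectral_gap(K4, 3))   # ~0.667: K4 is a (tiny) good expander
```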

Read more
Computational Complexity

Edge Matching with Inequalities, Triangles, Unknown Shape, and Two Players

We analyze the computational complexity of several new variants of edge-matching puzzles. First we analyze inequality (instead of equality) constraints between adjacent tiles, proving the problem NP-complete for strict inequalities but polynomial for nonstrict inequalities. Second we analyze three types of triangular edge matching, of which one is polynomial and the other two are NP-complete; all three are #P-complete. Third we analyze the case where no target shape is specified, and we merely want to place the (square) tiles so that edges match (exactly); this problem is NP-complete. Fourth we consider four 2-player games based on 1×n edge matching, all four of which are PSPACE-complete. Most of our NP-hardness reductions are parsimonious, newly proving #P and ASP-completeness for, e.g., 1×n edge matching.
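
To fix ideas on the simplest variant, here is a tiny brute-force solver for 1×n edge matching with flips, written by us purely for illustration; since the decision problem is NP-complete, exponential search like this is only sensible for small instances.

```python
from itertools import permutations

def solve_1xn(tiles):
    """Brute-force 1×n edge matching: place every tile in a row,
    optionally flipped, so that adjacent edge colors are equal.

    tiles: list of (left_color, right_color) pairs.
    Returns one solved row (as oriented tiles) or None.
    """
    for perm in permutations(tiles):
        rows = [[]]
        for (l, r) in perm:
            rows = [row + [o] for row in rows
                    for o in ((l, r), (r, l))
                    if not row or row[-1][1] == o[0]]
            if not rows:
                break
        if rows:
            return rows[0]
    return None

print(solve_1xn([(1, 2), (3, 1), (2, 3)]))  # -> [(2, 1), (1, 3), (3, 2)]
```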

Read more
Computational Complexity

Efficient Circuit Simulation in MapReduce

The MapReduce framework has firmly established itself as one of the most widely used parallel computing platforms for processing big data on the tera- and petabyte scale. Approaching it from a theoretical standpoint, however, has proved notoriously difficult. Continuing Goodrich et al.'s early efforts, with the explicit goal of putting the MapReduce framework on a footing equal to that of long-established models such as the PRAM, we investigate the natural complexity question of how the computational power of MapReduce algorithms compares to that of the combinational Boolean circuits commonly used to model parallel computation. Relying on the standard MapReduce model introduced by Karloff et al. a decade ago, we develop an intricate simulation technique to show that any problem in NC (i.e., any problem solved by a logspace-uniform family of Boolean circuits of polynomial size and depth polylogarithmic in the input size) can be solved by a MapReduce computation in O(T(n)/log n) rounds, where n is the input size and T(n) is the depth of the witnessing circuit family. Thus, we closely relate the standard uniform NC hierarchy, which models parallel computation, to the deterministic MapReduce hierarchy DMRC by proving that NC^(i+1) is contained in DMRC^i for all natural numbers i, including 0. Besides its theoretical significance, this result has important applied aspects as well. In particular, we show how to solve every problem in NC^1 (a class containing many practically relevant problems, such as integer multiplication and division, the parity function, and recognizing balanced strings of parentheses) in a constant number of deterministic MapReduce rounds.
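
The following toy Python sketch (entirely our own) evaluates a strictly layered Boolean circuit using one simulated MapReduce round per layer: mappers route wire values to the gates that read them, and reducers compute the gates. The paper's simulation is far more intricate, packing Θ(log n) circuit layers into each round to achieve O(T(n)/log n) rounds.

```python
def mapreduce_round(kv_pairs, mapper, reducer):
    """Simulate one MapReduce round on (key, value) pairs."""
    shuffled = {}
    for k, v in kv_pairs:
        for k2, v2 in mapper(k, v):
            shuffled.setdefault(k2, []).append(v2)
    out = []
    for k, vs in shuffled.items():
        out.extend(reducer(k, vs))
    return out

def eval_circuit(layers, inputs):
    """Strictly layered circuit: layers[t] = {gate: (op, input_ids)},
    where every gate reads only wires from the previous layer."""
    vals = list(inputs.items())
    for gates in layers:
        wanted = {}                       # wire -> gates reading it
        for g, (op, srcs) in gates.items():
            for s in srcs:
                wanted.setdefault(s, []).append(g)
        def mapper(k, v):
            return [(g, (k, v)) for g in wanted.get(k, [])]
        def reducer(g, vs):
            op, srcs = gates[g]
            bits = dict(vs)
            ordered = [bits[s] for s in srcs]
            if op == "AND":
                return [(g, int(all(ordered)))]
            if op == "OR":
                return [(g, int(any(ordered)))]
            return [(g, 1 - ordered[0])]  # NOT
        vals = mapreduce_round(vals, mapper, reducer)
    return dict(vals)

layers = [{"g1": ("AND", ["x1", "x2"]), "g2": ("OR", ["x2", "x3"])},
          {"g3": ("AND", ["g1", "g2"])}]
print(eval_circuit(layers, {"x1": 1, "x2": 1, "x3": 0}))  # {'g3': 1}
```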

Read more
Computational Complexity

Efficient Document Exchange and Error Correcting Codes with Asymmetric Information

We study two fundamental problems in communication: Document Exchange (DE) and Error Correcting Codes (ECC). In the first problem, two parties hold two strings, and one party tries to learn the other party's string through communication. In the second problem, one party tries to send a message to another party through a noisy channel by adding redundant information to protect the message. Two important goals in both problems are to minimize the communication complexity or redundancy, and to design efficient protocols or codes. Both problems have been studied extensively. In this paper we study whether asymmetric partial information can help in these two problems. We focus on the case of Hamming distance/errors, where the asymmetric partial information is modeled by one party holding a vector of disjoint subsets of indices S⃗ = (S_1, ⋯, S_t) and a vector of integers k⃗ = (k_1, ⋯, k_t), such that within each S_i the Hamming distance (number of errors) is at most k_i. We establish both lower bounds and upper bounds in this model, and provide efficient randomized constructions that achieve a min{O(t^2), O((log log n)^2)} factor within the optimum, with almost linear running time. We further show a connection between the above document exchange problem and document exchange under edit distance, and use our techniques to give an efficient randomized protocol with optimal communication complexity and exponentially small error for the latter. This improves the previous results by Haeupler [haeupler2018optimal] (FOCS'19) and by Belazzougui and Zhang [BelazzouguiZ16] (FOCS'16). Our techniques are based on a generalization of the celebrated expander codes of Sipser and Spielman [sipser1996expander], which may be of independent interest.
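
As background, the classic symmetric approach to document exchange under Hamming distance is hash-and-bisect: compare hashes of blocks and recurse into blocks that disagree, at a cost of O(k log n) hash exchanges for distance k. The toy sketch below is our own (both strings are held locally, and a short blake2b hash stands in for the protocol's hash function, ignoring collisions); it illustrates that classic idea only, not the paper's asymmetric-information protocol.

```python
import hashlib

def h(block, salt=b"demo"):
    """Short hash of a block of bits (collision chance ignored here)."""
    return hashlib.blake2b(bytes(block) + salt, digest_size=4).digest()

def mismatched_positions(a, b):
    """Hash-and-bisect core of document exchange for Hamming distance:
    compare block hashes, descend only into blocks whose hashes differ.
    In a real protocol only the hashes cross the channel."""
    def rec(lo, hi):
        if h(a[lo:hi]) == h(b[lo:hi]):
            return []
        if hi - lo == 1:
            return [lo]
        mid = (lo + hi) // 2
        return rec(lo, mid) + rec(mid, hi)
    return rec(0, len(a))

a = [0, 1, 1, 0, 1, 0, 0, 1]
b = [0, 1, 0, 0, 1, 0, 1, 1]
print(mismatched_positions(a, b))  # -> [2, 6]
```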

Read more
Computational Complexity

Efficient List-Decoding with Constant Alphabet and List Sizes

We present an explicit and efficient algebraic construction of capacity-achieving list-decodable codes with both constant alphabet size and constant list size. More specifically, for any R ∈ (0,1) and ϵ > 0, we give an algebraic construction of an infinite family of error-correcting codes of rate R, over an alphabet of size (1/ϵ)^O(1/ϵ^2), that can be list decoded from a (1−R−ϵ)-fraction of errors with list size at most exp(poly(1/ϵ)). Moreover, the codes can be encoded in time poly(1/ϵ, n), the output list is contained in a linear subspace of dimension at most poly(1/ϵ), and a basis for this subspace can be found in time poly(1/ϵ, n). Thus, both encoding and list decoding can be performed in fully polynomial time poly(1/ϵ, n), except for pruning the subspace and outputting the final list, which takes time exp(poly(1/ϵ)) · poly(n). Our codes are quite natural and structured. Specifically, we use algebraic-geometric (AG) codes with evaluation points restricted to a subfield, and with the message space restricted to a (carefully chosen) linear subspace. Our main observation is that the output list of AG codes with subfield evaluation points is contained in an affine shift of the image of a block-triangular-Toeplitz (BTT) matrix, and that the list size can potentially be reduced to a constant by restricting the message space to a BTT evasive subspace: a large subspace that intersects the image of any BTT matrix in a constant number of points. We further show how to explicitly construct such BTT evasive subspaces, based on the explicit subspace designs of Guruswami and Kopparty (Combinatorica, 2016), and on composition.
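
To visualize the matrix structure the construction revolves around, here is a small numpy sketch (our own, using one common convention) that assembles a block-triangular-Toeplitz matrix: Toeplitz at the block level, with block (i, j) equal to T_{i−j} on and below the block diagonal and zero above it.

```python
import numpy as np

def btt_matrix(blocks):
    """Assemble a block-triangular-Toeplitz matrix from square blocks
    T_0, ..., T_{m-1}: block (i, j) equals T_{i-j} when i >= j and is
    zero above the block diagonal (one common BTT convention)."""
    m = len(blocks)
    b = blocks[0].shape[0]
    M = np.zeros((m * b, m * b), dtype=blocks[0].dtype)
    for i in range(m):
        for j in range(i + 1):
            M[i*b:(i+1)*b, j*b:(j+1)*b] = blocks[i - j]
    return M

rng = np.random.default_rng(0)
T = [rng.integers(0, 2, size=(2, 2)) for _ in range(3)]  # over GF(2)
print(btt_matrix(T) % 2)
```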

Read more
Computational Complexity

Efficient methods to determine the reversibility of general 1D linear cellular automata in polynomial complexity

In this paper, we study the reversibility of one-dimensional (1D) linear cellular automata (LCA) under the null boundary condition. The core task divides into two parts: computing the period of reversibility and verifying reversibility within one period. With existing methods, the time and space costs of both parts are too high to be practical, so the computation quickly becomes infeasible even at moderately large sizes, which greatly limits applicability. We solve these two problems with two efficient algorithms, making it possible to handle reversible LCA of very large size. Furthermore, we provide an interesting converse perspective: generating a 1D LCA from a given period of reversibility. Thanks to our methods' efficiency, we can analyze reversible LCA of large size, which has much potential to enhance security in cryptographic systems.
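
For concreteness, here is a naive baseline in Python (our own sketch, assuming a prime state alphabet): build the n×n transition matrix of a 1D LCA under the null boundary condition and test invertibility over GF(p) by Gaussian elimination. A linear CA is reversible exactly when this matrix is invertible; checking it size by size like this is the kind of expensive computation the paper's algorithms are designed to avoid.

```python
def lca_matrix(rule, n, p):
    """Transition matrix (mod p) of a 1D linear CA whose local rule is
    sum_j rule[j] * cell[i + j - r], with window length 2r+1 and cells
    outside [0, n) fixed to 0 (null boundary condition)."""
    r = len(rule) // 2
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        for j, c in enumerate(rule):
            k = i + j - r
            if 0 <= k < n:
                M[i][k] = c % p
    return M

def invertible_mod_p(M, p):
    """Gaussian elimination over GF(p): True iff M has full rank."""
    n = len(M)
    A = [row[:] for row in M]
    for col in range(n):
        piv = next((r for r in range(col, n) if A[r][col] % p), None)
        if piv is None:
            return False
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], -1, p)
        A[col] = [x * inv % p for x in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                A[r] = [(x - A[r][col] * y) % p
                        for x, y in zip(A[r], A[col])]
    return True

# Rule-90-style CA (left + right neighbor) over GF(2), width 4.
M = lca_matrix([1, 0, 1], 4, 2)
print(invertible_mod_p(M, 2))   # True: reversible at this width
```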

Read more
