Featured Research

Computational Complexity

Imperfect Gaps in Gap-ETH and PCPs

We study the role of perfect completeness in probabilistically checkable proof systems (PCPs) and give a new way to transform a PCP with imperfect completeness into a PCP with perfect completeness when the initial gap is a constant. In particular, we show that $\mathrm{PCP}_{c,s}[r, q] \subseteq \mathrm{PCP}_{1,\,1-\Omega(1)}[r + O(1),\, q + O(r)]$ for $c - s = \Omega(1)$. This implies that one can convert imperfect completeness to perfect completeness in linear-sized PCPs for $\mathrm{NTIME}[O(n)]$ with an $O(\log n)$ additive loss in the query complexity $q$. We prove our result by constructing a "robust circuit" using threshold gates. These results give a gap amplification procedure for PCPs (when completeness is imperfect), analogous to questions studied in parallel repetition and pseudorandomness. We also investigate the time complexity of approximating perfectly satisfiable instances of 3SAT versus those with imperfect completeness. We show that the Gap-ETH conjecture without perfect completeness is equivalent to Gap-ETH with perfect completeness; that is, Gap-3SAT where the gap is not around 1 has a subexponential algorithm if and only if Gap-3SAT with perfect completeness does. We also relate the time complexities of these two problems in a more fine-grained way, showing that $T_2(n) \le T_1(n (\log\log n)^{O(1)})$, where $T_1(n)$ and $T_2(n)$ denote the randomized time complexity of approximating MAX 3SAT with perfect and imperfect completeness, respectively.
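
To spell out the parameter bookkeeping behind the linear-sized-PCP consequence (a transcription of the abstract's claim, not a new result): a linear-size PCP for $\mathrm{NTIME}[O(n)]$ uses $r = O(\log n)$ random bits, so the transformation's $O(r)$ query overhead is exactly the stated additive loss,
\[
\mathrm{PCP}_{c,s}[O(\log n),\, q] \;\subseteq\; \mathrm{PCP}_{1,\,1-\Omega(1)}[O(\log n),\, q + O(\log n)] \qquad \text{whenever } c - s = \Omega(1).
\]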

Read more
Computational Complexity

Impossibility Results for Grammar-Compressed Linear Algebra

To handle vast amounts of data, it is natural and popular to compress vectors and matrices. Compressing a vector from size $N$ down to size $n \ll N$ certainly makes it easier to store and transmit, but does it also make it easier to process? In this paper we consider lossless compression schemes and ask whether we can run our computations on the compressed data as efficiently as if the original data were that small. That is, if an operation has time complexity $T(\text{input size})$, can we perform it on the compressed representation in time $T(n)$ rather than $T(N)$? We consider the most basic linear algebra operations: inner product, matrix-vector multiplication, and matrix multiplication. In particular, given two compressed vectors, can we compute their inner product in time $O(n)$? Or must we decompress first and then multiply, spending $\Omega(N)$ time? The answer depends on the compression scheme. While for simple schemes such as Run-Length Encoding (RLE) the inner product can be computed in $O(n)$ time, we prove that this is impossible for compressions from a richer class: essentially $n^2$ or even larger runtimes are needed in the worst case (under complexity assumptions). This is the class of grammar compressions, which contains most popular methods such as the Lempel-Ziv family. These schemes compress better than simple RLE, but, alas, we prove that performing computations on them is much harder.
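
To make the RLE upper bound concrete, here is a minimal sketch (ours, not the paper's) of an $O(n)$-time inner product over two run-length-encoded vectors, where $n$ is the total number of runs; it consumes the two run lists in lockstep, like the merge step of mergesort.

```python
def rle_inner_product(a, b):
    """Inner product of two RLE-compressed vectors.

    Each vector is a list of (value, run_length) pairs; both must
    decompress to the same length N. Runs are consumed in lockstep,
    so the total work is O(#runs of a + #runs of b), not O(N).
    """
    i = j = 0
    rem_a, rem_b = a[0][1], b[0][1]  # length left in the current runs
    total = 0
    while i < len(a) and j < len(b):
        overlap = min(rem_a, rem_b)           # positions covered by both runs
        total += a[i][0] * b[j][0] * overlap
        rem_a -= overlap
        rem_b -= overlap
        if rem_a == 0:                        # advance to the next run of a
            i += 1
            if i < len(a):
                rem_a = a[i][1]
        if rem_b == 0:                        # advance to the next run of b
            j += 1
            if j < len(b):
                rem_b = b[j][1]
    return total

# (3,3,3,7) . (2,2,5,5) = 6 + 6 + 15 + 35 = 62
assert rle_inner_product([(3, 3), (7, 1)], [(2, 2), (5, 2)]) == 62
```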

Read more
Computational Complexity

Improved hardness for H-colourings of G-colourable graphs

We present new results on approximate colourings of graphs and, more generally, approximate H-colourings and promise constraint satisfaction problems. First, we show NP-hardness of colouring $k$-colourable graphs with $\binom{k}{\lfloor k/2 \rfloor} - 1$ colours for every $k \ge 4$. This improves the result of Bulín, Krokhin, and Opršal [STOC'19], who gave NP-hardness of colouring $k$-colourable graphs with $2k - 1$ colours for $k \ge 3$, and the result of Huang [APPROX-RANDOM'13], who gave NP-hardness of colouring $k$-colourable graphs with $2^{k^{1/3}}$ colours for sufficiently large $k$. Thus, for $k \ge 4$, we improve from known linear/sub-exponential gaps to exponential gaps. Second, we show that the topology of the box complex of H alone determines whether H-colouring of G-colourable graphs is NP-hard for all (non-bipartite, H-colourable) G. This formalises the topological intuition behind the result of Krokhin and Opršal [FOCS'19] that 3-colouring of G-colourable graphs is NP-hard for all (3-colourable, non-bipartite) G. We use this technique to establish NP-hardness of H-colouring of G-colourable graphs for H that include but go beyond $K_3$, including square-free graphs and circular cliques (leaving $K_4$ and larger cliques open). Underlying all of our proofs is a very general observation that adjoint functors give reductions between promise constraint satisfaction problems.
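
A quick check of the claimed growth (our arithmetic, not the paper's): at $k = 6$ the new bound already overtakes the linear one, since $\binom{6}{3} - 1 = 19 > 11 = 2 \cdot 6 - 1$, and at $k = 10$ we get $\binom{10}{5} - 1 = 251$ versus $19$. In general $\binom{k}{\lfloor k/2 \rfloor} = \Theta(2^k/\sqrt{k})$, an exponential gap compared with the linear $2k - 1$ and the sub-exponential $2^{k^{1/3}}$.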

Read more
Computational Complexity

In-memory eigenvector computation in time O(1)

In-memory computing with crosspoint resistive memory arrays has attracted enormous attention as a way to accelerate the matrix-vector multiplications at the heart of data-centric applications. By combining a crosspoint array with feedback amplifiers, it is possible to compute matrix eigenvectors in one step, without algorithmic iterations. In this work, the time complexity of this eigenvector computation is investigated via a feedback analysis of the crosspoint circuit. The results show that the computing time is determined by the degree of mismatch among the eigenvalues implemented in the circuit, which controls how fast the output voltages rise. For a dataset of random matrices, the time for computing the dominant eigenvector in the circuit is constant across matrix sizes, i.e., the time complexity is $O(1)$. The $O(1)$ time complexity is also supported by simulations of PageRank on real-world datasets. This work paves the way for fast, energy-efficient accelerators for eigenvector computation in a wide range of practical applications.
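
For contrast with the one-step analog circuit, a conventional digital approach needs algorithmic iterations. A minimal power-iteration baseline (our illustration, not the paper's circuit model) shows the loop the crosspoint array eliminates; note that its convergence rate is governed by the eigenvalue gap, mirroring the paper's observation that eigenvalue mismatch controls the circuit's speed.

```python
import numpy as np

def dominant_eigenvector(A, tol=1e-10, max_iter=10_000):
    """Digital baseline: power iteration for the dominant eigenvector.

    Convergence is governed by the ratio |lambda_2 / lambda_1|: the
    smaller the mismatch between the top eigenvalues, the more
    iterations are needed.
    """
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(max_iter):
        w = A @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:  # stop once the direction is stable
            break
        v = w
    return v

# Example: dominant eigenvector of a small symmetric matrix.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(dominant_eigenvector(A))
```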

Read more
Computational Complexity

Inapproximability Results for Scheduling with Interval and Resource Restrictions

In the restricted assignment problem, the input consists of a set of machines and a set of jobs, each with a processing time and a subset of eligible machines. The goal is to find an assignment of the jobs to the machines minimizing the makespan, that is, the maximum total processing time any machine receives; jobs may only be assigned to machines on which they are eligible. It is well known that there is no polynomial-time approximation algorithm with an approximation guarantee of less than 1.5 for the restricted assignment problem unless P=NP. In this work, we show hardness results for variants of the restricted assignment problem with particular types of restrictions. In the case of interval restrictions, the machines can be totally ordered such that each job is eligible on a set of consecutive machines. We resolve in the negative the open question of whether this problem admits a polynomial-time approximation scheme (PTAS), unless P=NP; several special cases of the problem are known to admit a PTAS. Furthermore, we consider a variant with resource restrictions, where each machine has a capacity for each of a fixed number of resources and each job has a corresponding demand. A job is eligible on a machine if, for every resource, its demand is at most the machine's capacity; a small sketch of these definitions follows below. For one resource this problem is known to admit a PTAS; for two, it contains the case of interval restrictions; and in general, the problem is closely related to unrelated scheduling with a low-rank processing time matrix. We show that there is no polynomial-time approximation algorithm with a ratio smaller than 48/47 for scheduling with resource restrictions with 2 resources, or smaller than 1.5 with 4 resources, unless P=NP. All our results extend to the so-called Santa Claus variants of the problems, where the goal is to maximize the minimum processing time any machine receives.
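
To fix the definitions, here is a small sketch (hypothetical instance and names, ours) of eligibility under resource restrictions and the makespan objective:

```python
def eligible(demand, capacity):
    """A job is eligible on a machine iff, for every resource,
    its demand is at most the machine's capacity."""
    return all(d <= c for d, c in zip(demand, capacity))

def makespan(assignment, processing_time, num_machines):
    """Makespan = maximum total processing time on any machine."""
    load = [0] * num_machines
    for job, machine in assignment.items():
        load[machine] += processing_time[job]
    return max(load)

# Two resources, as in the 48/47-hardness result.
capacities = [(3, 5), (6, 2)]            # per-machine capacities
demands = {"j1": (2, 4), "j2": (5, 1)}   # per-job demands
times = {"j1": 4, "j2": 7}

# j1 fits only on machine 0; j2 fits only on machine 1.
assert eligible(demands["j1"], capacities[0]) and not eligible(demands["j1"], capacities[1])
print(makespan({"j1": 0, "j2": 1}, times, num_machines=2))  # -> 7
```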

Read more
Computational Complexity

Incorporating Weisfeiler-Leman into algorithms for group isomorphism

In this paper we combine many of the standard and more recent algebraic techniques for testing isomorphism of finite groups (GpI) with combinatorial techniques that have typically been applied to Graph Isomorphism. In particular, we show how to combine several state-of-the-art GpI algorithms for specific group classes into an algorithm for general GpI, namely: composition series isomorphism (Rosenbaum-Wagner, Theoret. Comp. Sci., 2015; Luks, 2015), recursively-refineable filters (Wilson, J. Group Theory, 2013), and low-genus GpI (Brooksbank-Maglione-Wilson, J. Algebra, 2017). Recursively-refineable filters -- a generalization of subgroup series -- form the skeleton of this framework, and we refine our filter by building a hypergraph encoding low-genus quotients, to which we then apply a hypergraph variant of the k-dimensional Weisfeiler-Leman technique. Our technique is flexible enough to readily incorporate additional hypergraph invariants or additional characteristic subgroups.
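
For readers unfamiliar with Weisfeiler-Leman, here is a minimal sketch of the classical 1-dimensional colour refinement on a plain graph; the paper applies a k-dimensional variant to hypergraphs built from low-genus quotients, which is substantially more involved, but the refine-until-stable loop is the same.

```python
def colour_refinement(adj, initial=None):
    """1-dimensional Weisfeiler-Leman (colour refinement).

    adj: dict mapping each vertex to its set of neighbours.
    Repeatedly recolour every vertex by (its colour, sorted multiset
    of neighbour colours) until the partition stabilises.
    """
    colour = initial or {v: 0 for v in adj}
    while True:
        signature = {
            v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
            for v in adj
        }
        # Canonically compress signatures back to small integer colours.
        palette = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_colour = {v: palette[signature[v]] for v in adj}
        if new_colour == colour:
            return colour  # stable partition reached
        colour = new_colour

# Path on 4 vertices: endpoints get one colour, middle vertices another.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(colour_refinement(path))  # -> {0: 0, 1: 1, 2: 1, 3: 0}
```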

Read more
Computational Complexity

Inner Product Oracle can Estimate and Sample

The edge estimation problem in unweighted graphs, using local and sometimes global queries, is a fundamental problem in sublinear algorithms. Goldreich and Ron (Random Structures & Algorithms, 2008) observed that edge estimation in weighted graphs requires $\Omega(n)$ local queries, where $n$ denotes the number of vertices in the graph. To handle this problem, we introduce a new inner product query on matrices. The inner product query generalizes and unifies all local graph queries previously used for estimating edges. With this new query, we show that weighted edge estimation in graphs with particular kinds of weights can be solved using a number of queries sublinear in the number of vertices. We also show that this query allows us to solve the bilinear form estimation problem and the problem of weighted sampling of entries of matrices induced by bilinear forms. This work is a first step towards the weighted edge estimation problem raised by Goldreich and Ron (Random Structures & Algorithms, 2008).
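
A minimal sketch (our rendering of the query model, with hypothetical names) of how an inner product oracle on an adjacency matrix subsumes the usual local queries; an actual sublinear algorithm would, of course, make far fewer than $n$ oracle calls.

```python
import numpy as np

class InnerProductOracle:
    """Query model: given row indices i, j of a matrix M, return <M_i, M_j>.

    The algorithm sees only these inner products, never M itself.
    """
    def __init__(self, M):
        self._M = np.asarray(M, dtype=float)

    def query(self, i, j):
        return float(self._M[i] @ self._M[j])

# For a 0/1 adjacency matrix A:
#   <A_i, A_i> = degree of vertex i, and
#   <A_i, A_j> = number of common neighbours of i and j,
# so the oracle generalises degree and neighbour queries.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])
oracle = InnerProductOracle(A)
degrees = [oracle.query(i, i) for i in range(3)]
print(degrees, sum(degrees) / 2)  # [2.0, 2.0, 2.0] and 3.0 edges
```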

Read more
Computational Complexity

Insignificant Choice Polynomial Time

In the late 1980s, Gurevich conjectured that there is no logic capturing PTIME, where "logic" has to be understood in a very general way comprising computation models over structures. In this article we first refute Gurevich's conjecture. For this we extend the seminal research of Blass, Gurevich, and Shelah on choiceless polynomial time (CPT), which exploits deterministic Abstract State Machines (ASMs) supporting unbounded parallelism to capture the choiceless fragment of PTIME. CPT is strictly included in PTIME. We observe that choice is unavoidable, but that a restricted version suffices, one which guarantees that the final result is independent of the choices made. Such a version of polynomially bounded ASMs, which we call insignificant choice polynomial time (ICPT), will indeed capture PTIME. Even more, insignificant choice can be captured by ASMs with choice restricted to atoms such that a local insignificance condition is satisfied. As this condition can be expressed in the logic of non-deterministic ASMs, we obtain a logic capturing PTIME. Furthermore, using inflationary fixed-points we can capture problems in PTIME by fixed-point formulae in a fragment of the logic of non-deterministic ASMs plus inflationary fixed-points. We use this result for our second contribution: showing that PTIME differs from NP. For the proof we build again on the research on CPT, first establishing a limitation on the permutation classes of sets that can be activated by an ICPT computation. We then prove an inseparability theorem, which characterises classes of structures that cannot be separated by the logic. In particular, this implies that SAT cannot be decided by an ICPT computation.

Read more
Computational Complexity

Integer Programming and Incidence Treedepth

Recently, a strong connection has been shown between the tractability of integer programming (IP) with bounded coefficients on the one hand and the structure of its constraint matrix on the other. In particular, integer linear programming is fixed-parameter tractable with respect to the primal (or dual) treedepth of the Gaifman graph of its constraint matrix and the largest coefficient (in absolute value). Motivated by this, Koutecký, Levin, and Onn [ICALP 2018] asked whether it is possible to extend these results to a broader class of integer linear programs. More formally: is integer linear programming fixed-parameter tractable with respect to the incidence treedepth of its constraint matrix and the largest coefficient (in absolute value)? We answer this question in the negative. In particular, we prove that deciding the feasibility of a system in the standard form, $Ax = b$, $l \le x \le u$, is NP-hard even when the absolute value of every coefficient in $A$ is at most 1 and the incidence treedepth of $A$ is 5. Consequently, it is not possible to decide feasibility in polynomial time even if both of the assumed parameters are constant, unless P=NP. Moreover, we complement this intractability result by showing tractability for natural and only slightly more restrictive settings, namely: (1) treedepth with an additional bound on either the maximum arity of constraints or the maximum number of occurrences of variables, and (2) the vertex cover number.
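
To pin down the parameter in question, a small sketch (our code, not the paper's) that builds the incidence graph of a constraint matrix: one vertex per row (constraint) and per column (variable), with an edge wherever the entry is non-zero. Incidence treedepth is then the treedepth of this graph.

```python
def incidence_graph(A):
    """Incidence graph of a constraint matrix A.

    Vertices: ('row', i) for each constraint and ('col', j) for each
    variable; edge whenever A[i][j] != 0. The paper's hardness holds
    even when every |A[i][j]| <= 1 and this graph has treedepth 5.
    """
    edges = set()
    for i, row in enumerate(A):
        for j, entry in enumerate(row):
            if entry != 0:
                edges.add((("row", i), ("col", j)))
    return edges

# x1 + x2 = b1;  x2 - x3 = b2  (all coefficients in {-1, 0, 1})
A = [[1, 1, 0],
     [0, 1, -1]]
for e in sorted(incidence_graph(A)):
    print(e)
```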

Read more
Computational Complexity

Integer factorization and Riemann's hypothesis: Why two-item joint replenishment is hard

Distribution networks with periodically repeating events often hold great promise to exploit economies of scale. Joint replenishment problems are a fundamental model in inventory management, manufacturing, and logistics that capture these effects. However, finding an efficient algorithm that optimally solves these models, or showing that none may exist, has long been open, regardless of whether empty joint orders are possible or not. In either case, we show that finding optimal solutions to joint replenishment instances with just two products is at least as difficult as integer factorization. To the best of the authors' knowledge, this is the first time that integer factorization is used to explain the computational hardness of any kind of optimization problem. Under the assumption that Riemann's Hypothesis is correct, we can actually prove that the two-item joint replenishment problem with possibly empty joint ordering points is NP-complete under randomized reductions, which implies that not even quantum computers may be able to solve it efficiently. By relating the computational complexity of joint replenishment to cryptography, prime decomposition, and other aspects of prime numbers, a similar approach may help to establish (integer factorization) hardness of additional open periodic problems in supply chain management and beyond, whose solution has eluded standard methods.

Read more
