
Publication


Featured research published by Arturs Backurs.


symposium on the theory of computing | 2015

Edit Distance Cannot Be Computed in Strongly Subquadratic Time (unless SETH is false)

Arturs Backurs; Piotr Indyk

The edit distance (a.k.a. the Levenshtein distance) between two strings is defined as the minimum number of insertions, deletions or substitutions of symbols needed to transform one string into another. The problem of computing the edit distance between two strings is a classical computational task, with a well-known algorithm based on dynamic programming. Unfortunately, all known algorithms for this problem run in nearly quadratic time. In this paper we provide evidence that the near-quadratic running time bounds known for the problem of computing edit distance might be tight. Specifically, we show that, if the edit distance can be computed in time \(O(n^{2-\delta})\) for some constant \(\delta > 0\), then the satisfiability of conjunctive normal form formulas with N variables and M clauses can be solved in time \(M^{O(1)} \, 2^{(1-\varepsilon)N}\) for a constant \(\varepsilon > 0\). The latter result would violate the Strong Exponential Time Hypothesis, which postulates that such algorithms do not exist.
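
For reference, the classical dynamic program the abstract alludes to fills a table of prefix distances in quadratic time. A minimal Python sketch (illustrative, not code from the paper; the function name is hypothetical):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic O(len(a) * len(b)) dynamic program for Levenshtein distance."""
    n, m = len(a), len(b)
    # dp[j] holds the distance between the current prefix of a and b[:j].
    dp = list(range(m + 1))
    for i in range(1, n + 1):
        prev_diag, dp[0] = dp[0], i
        for j in range(1, m + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                           # delete a[i-1]
                dp[j - 1] + 1,                       # insert b[j-1]
                prev_diag + (a[i - 1] != b[j - 1]),  # substitute (or match)
            )
            prev_diag = cur
    return dp[m]
```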


foundations of computer science | 2015

Tight Hardness Results for LCS and Other Sequence Similarity Measures

Amir Abboud; Arturs Backurs; Virginia Vassilevska Williams

Two important similarity measures between sequences are the longest common subsequence (LCS) and the dynamic time warping distance (DTWD). Computing these measures for two given sequences is a central task in a variety of applications. Simple dynamic programming algorithms solve these tasks in \(O(n^2)\) time, and despite an extensive amount of research, no algorithms with significantly better worst-case upper bounds are known. In this paper, we show that for any constant \(\varepsilon > 0\), an \(O(n^{2-\varepsilon})\) time algorithm for computing the LCS or the DTWD of two sequences of length n over a constant-size alphabet refutes the popular Strong Exponential Time Hypothesis (SETH).
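
The textbook quadratic LCS dynamic program referenced above, as a minimal sketch (illustrative, not from the paper):

```python
def lcs_length(a: str, b: str) -> int:
    """Classic O(len(a) * len(b)) dynamic program for longest common subsequence."""
    m = len(b)
    dp = [0] * (m + 1)  # dp[j] = LCS of the current prefix of a with b[:j]
    for ch in a:
        prev_diag = 0
        for j in range(1, m + 1):
            cur = dp[j]
            dp[j] = prev_diag + 1 if ch == b[j - 1] else max(dp[j], dp[j - 1])
            prev_diag = cur
    return dp[m]
```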


arXiv: Quantum Physics | 2012

Search by Quantum Walks on Two-Dimensional Grid without Amplitude Amplification

Andris Ambainis; Arturs Backurs; Nikolajs Nahimovs; Raitis Ozols; Alexander Rivosh

We study search by quantum walk on a finite two-dimensional grid. The algorithm of Ambainis, Kempe, Rivosh [AKR05] uses \(O(\sqrt{N \log{N}})\) steps and finds a marked location with probability \(O(1/\log N)\) for a grid of size \(\sqrt{N} \times \sqrt{N}\). This probability is small, thus [AKR05] needs amplitude amplification to get \(\Theta(1)\) probability. The amplitude amplification adds an additional \(O(\sqrt{\log{N}})\) factor to the number of steps, making it \(O(\sqrt{N} \log{N})\).
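
The factor bookkeeping behind the last sentence, spelled out:

\[
\sqrt{N \log N} \cdot \sqrt{\log N} \;=\; \sqrt{N} \cdot \sqrt{\log N} \cdot \sqrt{\log N} \;=\; \sqrt{N} \, \log N .
\]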


symposium on discrete algorithms | 2016

Subtree isomorphism revisited

Amir Abboud; Arturs Backurs; Thomas Dueholm Hansen; Virginia Vassilevska Williams; Or Zamir

The Subtree Isomorphism problem asks whether a given tree is contained in another given tree. The problem is of fundamental importance and has been studied since the 1960s. For some variants, e.g., ordered trees, near-linear time algorithms are known, but for the general case truly subquadratic algorithms remain elusive. Our first result is a reduction from the Orthogonal Vectors problem to Subtree Isomorphism, showing that a truly subquadratic algorithm for the latter refutes the Strong Exponential Time Hypothesis (SETH). In light of this conditional lower bound, we focus on natural special cases for which no truly subquadratic algorithms are known. We classify these cases against the quadratic barrier, showing in particular that:

• Even for binary, rooted trees, a truly subquadratic algorithm refutes SETH.
• Even for rooted trees of depth \(O(\log\log n)\), where n is the total number of vertices, a truly subquadratic algorithm refutes SETH.
• For every constant d, there is a constant \(\varepsilon_d > 0\) and a randomized, truly subquadratic algorithm for degree-d rooted trees of depth at most \((1+\varepsilon_d) \log_d n\). In particular, there is an \(O(\min\{2.85^h, n^2\})\) algorithm for binary trees of depth h.

Our reductions utilize new “tree gadgets” that are likely useful for future SETH-based lower bounds for problems on trees. Our upper bounds apply a folklore result from randomized decision tree complexity.
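
For intuition about the quadratic baseline in the binary, rooted case (the regime the first bullet shows is SETH-hard to beat), here is a memoized recursion for one natural rooted-containment variant, where children map injectively to children. This is a sketch under those assumptions, not the paper's algorithm; Node, embeds, and contains are hypothetical names. With memoization there are \(O(|s| \cdot |t|)\) states:

```python
from functools import lru_cache

class Node:
    __slots__ = ("left", "right")
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

@lru_cache(maxsize=None)
def embeds(s, t):
    """Can s embed at the root of t, mapping s's children to distinct children of t?"""
    if s is None:
        return True   # an absent child imposes no constraint
    if t is None:
        return False
    return (embeds(s.left, t.left) and embeds(s.right, t.right)) or \
           (embeds(s.left, t.right) and embeds(s.right, t.left))

@lru_cache(maxsize=None)
def contains(t, s):
    """Does s embed at the root of some subtree of t?"""
    if t is None:
        return s is None
    return embeds(s, t) or contains(t.left, s) or contains(t.right, s)
```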


symposium on discrete algorithms | 2017

Better approximations for tree sparsity in nearly-linear time

Arturs Backurs; Piotr Indyk; Ludwig Schmidt

The Tree Sparsity problem is defined as follows: given a node-weighted tree of size n and an integer k, output a rooted subtree of size k with maximum weight. The best known algorithm solves this problem in time \(O(kn)\), i.e., quadratic in the size of the input tree for \(k = \Theta(n)\). In this work, we design \((1+\varepsilon)\)-approximation algorithms for the Tree Sparsity problem that run in nearly-linear time. Unlike prior algorithms for this problem, our results offer single-criterion approximations, i.e., they do not increase the sparsity of the output solution, and work for arbitrary trees (not only balanced trees). We also provide further algorithms for this problem with different runtime vs. approximation trade-offs. Finally, we show that if the exact version of the Tree Sparsity problem can be solved in strongly subquadratic time, then the (min, +) convolution problem can be solved in strongly subquadratic time as well. The latter is a well-studied problem for which no strongly subquadratic time algorithm is known.
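
For context, the (min, +) convolution problem targeted by the reduction asks, given sequences a and b, for \(c_k = \min_{i+j=k} (a_i + b_j)\); the obvious algorithm is quadratic and nothing strongly subquadratic is known. A minimal sketch of that obvious algorithm (illustrative only):

```python
def min_plus_convolution(a, b):
    """Naive O(len(a) * len(b)) (min, +) convolution: c[k] = min over i+j=k of a[i] + b[j]."""
    n, m = len(a), len(b)
    c = [float("inf")] * (n + m - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = min(c[i + j], ai + bj)
    return c
```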


symposium on theoretical aspects of computer science | 2013

Optimal quantum query bounds for almost all Boolean functions

Andris Ambainis; Arturs Backurs; Juris Smotrovs; Ronald de Wolf

We show that almost all n-bit Boolean functions have bounded-error quantum query complexity at least n/2, up to lower-order terms. This improves over an earlier n/4 lower bound of Ambainis (A. Ambainis, 1999), and shows that van Dam's oracle interrogation (W. van Dam, 1998) is essentially optimal for almost all functions. Our proof uses the fact that the acceptance probability of a T-query algorithm can be written as the sum of squares of degree-T polynomials.
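
The polynomial-method fact used here, stated explicitly (standard since Beals et al.): after T queries, each amplitude is a polynomial of degree at most T in the input bits, so the acceptance probability has the form

\[
P_{\mathrm{acc}}(x) \;=\; \sum_{z \,\in\, \text{accepting}} |\alpha_z(x)|^2, \qquad \deg \alpha_z \le T ,
\]

a sum of squares of (real and imaginary parts of) degree-T polynomials.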


conference on computational complexity | 2014

On the Sum of L1 Influences

Arturs Backurs; Mohammad Bavarian

For a function f over the discrete cube, the total \(L_1\) influence of f is defined as the sum of the \(L_1\) norms of the discrete derivatives of f in all n directions. In this work, we show that in the case of bounded functions this quantity can be upper bounded by a polynomial in the degree of f (independently of the dimension n), resolving affirmatively an open problem of Aaronson and Ambainis (ITCS 2011). We also give an application of our theorem to graph theory, and discuss the connection between the study of bounded functions over the cube and the quantum query complexity of partial functions, which is where Aaronson and Ambainis encountered this question.
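
Spelling out the quantity in standard notation (not verbatim from the paper): for \(f : \{-1,1\}^n \to [-1,1]\),

\[
I_1(f) \;=\; \sum_{i=1}^{n} \|D_i f\|_1, \qquad
D_i f(x) \;=\; \frac{f(x^{i \mapsto 1}) - f(x^{i \mapsto -1})}{2},
\]

where \(x^{i \mapsto b}\) denotes x with its i-th coordinate set to b, and the norm is taken with respect to the uniform measure on the cube.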


symposium on principles of database systems | 2016

Fast Algorithms for Parsing Sequences of Parentheses with Few Errors

Arturs Backurs; Krzysztof Onak

We consider the problem of fixing sequences of unbalanced parentheses. A classic algorithm based on dynamic programming computes the optimum sequence of edits required to solve the problem in cubic time. We give the first algorithm that runs in linear time when the number of necessary edits is small. More precisely, our algorithm runs in \(O(n) + d^{O(1)}\) time, where n is the length of the sequence to be fixed and d is the minimum number of edits. The problem of fixing parentheses sequences is related to the task of repairing semi-structured documents such as XML and JSON.
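
For intuition, the simplest special case, minimum insertions to balance a sequence over a single parenthesis type, is solvable by a linear scan; the general problem studied above is what makes dynamic programming necessary. A minimal sketch of that special case (illustrative, not from the paper):

```python
def min_insertions_to_balance(s: str) -> int:
    """Minimum '(' / ')' insertions that balance a single-type sequence: O(n) scan."""
    unmatched_close = 0  # ')' with no earlier '(' -> each needs an inserted '('
    open_depth = 0       # '(' still awaiting a ')' -> each needs an inserted ')'
    for ch in s:
        if ch == '(':
            open_depth += 1
        elif open_depth > 0:
            open_depth -= 1
        else:
            unmatched_close += 1
    return unmatched_close + open_depth
```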


symposium on the theory of computing | 2018

Towards tight approximation bounds for graph diameter and eccentricities

Arturs Backurs; Liam Roditty; Gilad Segal; Virginia Vassilevska Williams; Nicole Wein

Among the most important graph parameters is the Diameter, the largest distance between any two vertices. There are no known very efficient algorithms for computing the Diameter exactly. Thus, much research has been devoted to how fast this parameter can be approximated. Chechik et al. [SODA 2014] showed that the diameter can be approximated within a multiplicative factor of 3/2 in \(\tilde{O}(m^{3/2})\) time. Furthermore, Roditty and Vassilevska W. [STOC 13] showed that unless the Strong Exponential Time Hypothesis (SETH) fails, no \(O(n^{2-\varepsilon})\) time algorithm can achieve an approximation factor better than 3/2 in sparse graphs. Thus the above algorithm is essentially optimal for sparse graphs for approximation factors less than 3/2. It was, however, completely plausible that a 3/2-approximation is possible in linear time. In this work we conditionally rule out such a possibility by showing that unless SETH fails no \(O(m^{3/2-\varepsilon})\) time algorithm can achieve an approximation factor better than 5/3.

Another fundamental set of graph parameters are the Eccentricities. The Eccentricity of a vertex v is the distance between v and the farthest vertex from v. Chechik et al. [SODA 2014] showed that the Eccentricities of all vertices can be approximated within a factor of 5/3 in \(\tilde{O}(m^{3/2})\) time and Abboud et al. [SODA 2016] showed that no \(O(n^{2-\varepsilon})\) algorithm can achieve a better than 5/3 approximation in sparse graphs. We show that the runtime of the 5/3-approximation algorithm is also optimal by proving that under SETH, there is no \(O(m^{3/2-\varepsilon})\) algorithm that achieves a better than 9/5 approximation. We also show that no near-linear time algorithm can achieve a better than 2 approximation for the Eccentricities. This is the first lower bound in fine-grained complexity that addresses near-linear time computation. We show that our lower bound for near-linear time algorithms is essentially tight by giving an algorithm that approximates Eccentricities within a \(2+\delta\) factor in \(\tilde{O}(m/\delta)\) time for any \(0 < \delta < 1\). This beats all Eccentricity algorithms in Cairo et al. [SODA 2016] and is the first constant factor approximation for Eccentricities in directed graphs.

To establish the above lower bounds we study the S-T Diameter problem: given a graph and two subsets S and T of vertices, output the largest distance between a vertex in S and a vertex in T. We give new algorithms and show tight lower bounds that serve as a starting point for all other hardness results.

Our lower bounds apply only to sparse graphs. We show that for dense graphs, there are near-linear time algorithms for S-T Diameter, Diameter and Eccentricities, with almost the same approximation guarantees as their \(\tilde{O}(m^{3/2})\) counterparts, improving upon the best known algorithms for dense graphs.
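
For orientation, the folklore linear-time baseline (not this paper's algorithm): in a connected undirected graph, a single BFS from any vertex v returns its eccentricity e(v), and the diameter D satisfies \(e(v) \le D \le 2e(v)\) by the triangle inequality through v, giving a 2-approximation in \(O(m+n)\) time. A minimal sketch:

```python
from collections import deque

def eccentricity_by_bfs(adj, v):
    """BFS from v in an unweighted, connected, undirected graph.
    Returns e(v); the diameter D satisfies e(v) <= D <= 2 * e(v)."""
    dist = {v: 0}
    queue = deque([v])
    ecc = 0
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                ecc = max(ecc, dist[w])
                queue.append(w)
    return ecc
```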


foundations of computer science | 2017

Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-and-Solve

Amir Abboud; Arturs Backurs; Karl Bringmann; Marvin Künnemann

Can we analyze data without decompressing it? As our data keeps growing, understanding the time complexity of problems on compressed inputs, rather than in convenient uncompressed forms, becomes more and more relevant. Suppose we are given a compression of size n of data that originally has size N, and we want to solve a problem with time complexity \(T(\cdot)\). The naïve strategy of decompress-and-solve gives time \(T(N)\), whereas the gold standard is time \(T(n)\): to analyze the compression as efficiently as if the original data was small.

We restrict our attention to data in the form of a string (text, files, genomes, etc.) and study the most ubiquitous tasks. While the challenge might seem to depend heavily on the specific compression scheme, most methods of practical relevance (the Lempel-Ziv family, dictionary methods, and others) can be unified under the elegant notion of Grammar-Compressions. A vast literature, across many disciplines, has established this as an influential notion for algorithm design.

We introduce a direly needed framework for proving (conditional) lower bounds in this field, allowing us to assess whether decompress-and-solve can be improved, and by how much. Our main results are:

• The \(O(nN\sqrt{\log(N/n)})\) bound for LCS and the \(O(\min(N \log N, nM))\) bound for Pattern Matching with Wildcards are optimal up to \(N^{o(1)}\) factors, under the Strong Exponential Time Hypothesis. (Here, M denotes the uncompressed length of the compressed pattern.)
• Decompress-and-solve is essentially optimal for Context-Free Grammar Parsing and RNA Folding, under the k-Clique conjecture.
• We give an algorithm showing that decompress-and-solve is not optimal for Disjointness.
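
To make the unifying notion concrete: a grammar-compression (straight-line program) represents a string by a grammar in which every nonterminal derives either a single character or the concatenation of two earlier nonterminals, so a grammar with n rules can derive a string of length exponential in n. A minimal sketch with a hypothetical rule encoding (not from the paper):

```python
def expand(rules, sym, memo=None):
    """Expand nonterminal `sym` of a straight-line program.
    rules maps each nonterminal either to a literal character (a length-1 str)
    or to a pair of nonterminals whose expansions are concatenated."""
    if memo is None:
        memo = {}
    if sym in memo:
        return memo[sym]
    rhs = rules[sym]
    out = rhs if isinstance(rhs, str) else expand(rules, rhs[0], memo) + expand(rules, rhs[1], memo)
    memo[sym] = out
    return out

# Example: 5 rules deriving a string of length 8 by repeated doubling.
rules = {"X1": "a", "X2": "b", "X3": ("X1", "X2"), "X4": ("X3", "X3"), "X5": ("X4", "X4")}
assert expand(rules, "X5") == "abababab"
```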

Collaboration


Dive into Arturs Backurs's collaborations.

Top Co-Authors

Piotr Indyk
Massachusetts Institute of Technology

Christos Tzamos
Massachusetts Institute of Technology

Ludwig Schmidt
Massachusetts Institute of Technology

Mohammad Bavarian
Massachusetts Institute of Technology

David P. Woodruff
Carnegie Mellon University