Publication


Featured research published by Rasmus Kyng.


symposium on the theory of computing | 2014

Solving SDD linear systems in nearly m log^{1/2} n time

Michael B. Cohen; Rasmus Kyng; Gary L. Miller; Jakub W. Pachocki; Richard Peng; Anup B. Rao; Shen Chen Xu

We show an algorithm for solving symmetric diagonally dominant (SDD) linear systems with m non-zero entries to a relative error of ε in O(m log^{1/2} n (log log n)^c log(1/ε)) time. Our approach follows the recursive preconditioning framework, which aims to reduce graphs to trees using iterative methods. We improve two key components of this framework: random sampling and tree embeddings. Both of these components are used in a variety of other algorithms, and our approach also extends to the dual problem of computing electrical flows. We show that preconditioners constructed by random sampling can perform well without meeting the standard requirements of iterative methods. In the graph setting, this leads to ultra-sparsifiers that have optimal behavior in expectation. The improved running time makes previous low-stretch embedding algorithms the running-time bottleneck in this framework. In our analysis, we relax the requirement on these embeddings to embeddings into snowflake spaces. We then obtain a two-pass algorithm for constructing optimal embeddings in snowflake spaces that runs in O(m log log n) time. This algorithm is also readily parallelizable.
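As a small, self-contained illustration of the preconditioned-iteration idea this framework builds on, the Python sketch below solves a toy SDD system with SciPy's conjugate gradient, using a Jacobi (diagonal) preconditioner as a crude stand-in for the paper's recursively constructed preconditioners. The matrix, tolerance, and preconditioner are illustrative choices, not the paper's construction (the rtol keyword assumes SciPy >= 1.12).

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

# A small SDD system: the Laplacian of a path graph plus a tiny
# diagonal shift so the matrix is nonsingular.
n = 1000
main = 2.0 * np.ones(n)
off = -np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csr") + 0.01 * sp.eye(n)
b = np.random.default_rng(0).standard_normal(n)

# Jacobi (diagonal) preconditioner: a crude stand-in for the
# recursively constructed preconditioners in the paper.
d_inv = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: d_inv * x)

# Solve to relative error eps, mirroring the log(1/eps) factor
# in the stated running time.
eps = 1e-8
x, info = cg(A, b, rtol=eps, M=M)
print("converged:", info == 0)
print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))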


symposium on the theory of computing | 2016

Sparsified Cholesky and multigrid solvers for connection Laplacians

Rasmus Kyng; Yin Tat Lee; Richard Peng; Sushant Sachdeva; Daniel A. Spielman

We introduce the sparsified Cholesky and sparsified multigrid algorithms for solving systems of linear equations. These algorithms accelerate Gaussian elimination by sparsifying the nonzero matrix entries created by the elimination process. We use these new algorithms to derive the first nearly linear time algorithms for solving systems of equations in connection Laplacians---a generalization of Laplacian matrices that arise in many problems in image and signal processing. We also prove that every connection Laplacian has a linear-sized approximate inverse. This is an LU factorization with a linear number of nonzero entries that is a strong approximation of the original matrix. Using such a factorization, one can solve systems of equations in a connection Laplacian in linear time. Such a factorization was unknown even for ordinary graph Laplacians.
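To make the fill-in phenomenon these algorithms attack concrete, here is a minimal numerical sketch in Python (the cycle graph and index split are illustrative, not taken from the paper): eliminating a block of variables from a sparse Laplacian yields a Schur complement that is denser than the block it replaces.

import numpy as np

# Laplacian of a cycle on n vertices: only three nonzeros per row.
n = 8
A = 2.0 * np.eye(n)
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = -1.0

# Eliminate the first half of the vertices (F), keep the rest (C).
F, C = np.arange(n // 2), np.arange(n // 2, n)
AFF, AFC, ACC = A[np.ix_(F, F)], A[np.ix_(F, C)], A[np.ix_(C, C)]

# Schur complement after eliminating F: elimination creates fill-in,
# so S is denser than the block A[C, C] it replaces.
S = ACC - AFC.T @ np.linalg.inv(AFF) @ AFC
print("nonzeros in A[C, C]:", np.count_nonzero(ACC))
print("nonzeros in Schur complement:", np.count_nonzero(np.round(S, 12)))
# Sparsified Cholesky would now replace S by a spectrally close sparse
# matrix before recursing, so every level of the factorization stays sparse.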


foundations of computer science | 2016

Approximate Gaussian Elimination for Laplacians - Fast, Sparse, and Simple

Rasmus Kyng; Sushant Sachdeva

We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. We present a simple, nearly linear time algorithm that approximates a Laplacian by the product of a sparse lower triangular matrix with its transpose. This gives the first nearly linear time solver for Laplacian systems that is based purely on random sampling, and does not use any graph theoretic constructions such as low-stretch trees, sparsifiers, or expanders. Our algorithm performs a subsampled Cholesky factorization, which we analyze using matrix martingales. As part of the analysis, we give a proof of a concentration inequality for matrix martingales where the differences are sums of conditionally independent variables.
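Below is a simplified Python sketch of the core step. Eliminating a vertex of a graph Laplacian exactly would add a weighted clique on its neighbors; the algorithm instead adds a small random sample of clique edges that matches the clique in expectation. The uniform-proposal sampler and the function name here are illustrative assumptions, deliberately simpler than the paper's weighted sampling scheme, and carry no spectral guarantee.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

def eliminate_and_sample(L, v):
    """One step of subsampled Cholesky on a (dense) graph Laplacian L.

    Exact elimination of vertex v adds a weighted clique on its
    neighbors; this sketch adds an unbiased sample of deg(v) clique
    edges instead. The row/column of v is zeroed rather than removed,
    to keep the indexing simple.
    """
    n = L.shape[0]
    nbrs = [u for u in range(n) if u != v and L[v, u] != 0.0]
    w = {u: -L[v, u] for u in nbrs}   # weights of edges incident to v
    W = sum(w.values())

    # Remove v and its star from the graph.
    S = L.copy()
    for u in nbrs:
        S[u, u] -= w[u]
    S[v, :] = 0.0
    S[:, v] = 0.0

    # The exact clique has edge (u, z) with weight w[u] * w[z] / W for
    # every pair of neighbors. Sample deg(v) pairs uniformly and
    # reweight so the result matches the exact clique in expectation.
    pairs = list(combinations(nbrs, 2))
    k = len(nbrs)
    for _ in range(k):
        if not pairs:
            break
        u, z = pairs[rng.integers(len(pairs))]
        wt = (len(pairs) / k) * w[u] * w[z] / W
        S[u, u] += wt
        S[z, z] += wt
        S[u, z] -= wt
        S[z, u] -= wt
    return S

# Example: eliminate vertex 0 of the complete graph K5's Laplacian.
n = 5
L = n * np.eye(n) - np.ones((n, n))
print(eliminate_and_sample(L, 0))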


symposium on discrete algorithms | 2017

A framework for analyzing resparsification algorithms

Rasmus Kyng; Jakub W. Pachocki; Richard Peng; Sushant Sachdeva

A spectral sparsifier of a graph G is a sparser graph H that approximately preserves the quadratic form of G, i.e., for all vectors x, x^T L_G x ≈ x^T L_H x, where L_G and L_H denote the respective graph Laplacians. Spectral sparsifiers generalize cut sparsifiers, and have found many applications in designing graph algorithms. In recent years, there has been interest in computing spectral sparsifiers in semi-streaming and dynamic settings. Natural algorithms in these settings often involve repeated sparsification of a graph, and in turn accumulation of errors across these steps. We present a framework for analyzing algorithms that perform repeated sparsifications and only incur error corresponding to a single sparsification step, leading to better results for many of these resparsification-based algorithms. As an application, we show how to maintain a spectral sparsifier in the semi-streaming setting: we present a simple algorithm that, for a graph G on n vertices and m edges, computes a spectral sparsifier of G with O(n log n) edges in a single pass over G, using only O(n log n) space and O(m log^2 n) total time. This improves on the previous best semi-streaming algorithms for both spectral and cut sparsifiers by a factor of log n in both space and runtime. The algorithm also extends to semi-streaming row sampling for general PSD matrices. As another application, we use this framework to combine a spectral sparsification algorithm by Koutis with improved spanner constructions to give a parallel algorithm for constructing O(n log^2 n log log n)-sized spectral sparsifiers in O(m log^2 n log log n) time. This is the best combinatorial graph sparsification algorithm to date, and the size of the sparsifiers produced is only a factor of log n log log n more than that of ones produced by numerical routines.
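Here is a structural Python sketch of the semi-streaming pattern the framework analyzes: edges arrive in one pass, and the maintained graph is resparsified whenever it exceeds its space budget. The sparsify stub below is a placeholder for any one-shot spectral sparsifier (all names and parameters are illustrative); the framework's point is that the error of this loop matches a single sparsification step rather than compounding across resparsifications.

import numpy as np

def sparsify(edges, eps, rng):
    """Placeholder one-shot sparsifier: keep each edge with probability
    1/2 and double its weight. This is unbiased but carries no spectral
    guarantee; a real implementation would sample edges by (approximate)
    effective resistances with eps-dependent sample counts."""
    return [(u, v, 2.0 * w) for (u, v, w) in edges if rng.random() < 0.5]

def streaming_sparsifier(stream, budget, eps, rng):
    """One pass over an edge stream using O(budget) working space:
    resparsify the maintained graph H whenever it outgrows the budget."""
    H = []
    for edge in stream:
        H.append(edge)
        if len(H) > budget:
            H = sparsify(H, eps, rng)
    return H

def random_stream(m, n, rng):
    for _ in range(m):
        u, v = rng.integers(n), rng.integers(n)
        if u != v:  # skip self-loops in this toy stream
            yield (u, v, 1.0)

rng = np.random.default_rng(2)
n = 100
H = streaming_sparsifier(random_stream(10_000, n, rng),
                         budget=5 * n, eps=0.1, rng=rng)
print("edges kept:", len(H))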


foundations of computer science | 2017

Hardness Results for Structured Linear Systems

Rasmus Kyng; Peng Zhang

We show that if the nearly-linear time solvers for Laplacian matrices and their generalizations can be extended to solve just slightly larger families of linear systems, then they can be used to quickly solve all systems of linear equations over the reals. This result can be viewed either positively or negatively: either we will develop nearly-linear time algorithms for solving all systems of linear equations over the reals, or progress on the families we can solve in nearly-linear time will soon halt.


symposium on the theory of computing | 2018

Incomplete nested dissection

Rasmus Kyng; Richard Peng; Robert Schwieterman; Peng Zhang

We present an asymptotically faster algorithm for solving linear systems in well-structured 3-dimensional truss stiffness matrices. These linear systems arise from linear elasticity problems, and can be viewed as extensions of graph Laplacians into higher dimensions. Faster solvers for the 2-D variants of such systems have been studied using generalizations of tools for solving graph Laplacians [Daitch-Spielman CSC’07, Shklarski-Toledo SIMAX’08]. Given a 3-dimensional truss over n vertices which is formed from a union of k convex structures (tetrahedral meshes) with bounded aspect ratios, whose individual tetrahedrons are also in some sense well-conditioned, our algorithm solves a linear system in the associated stiffness matrix up to accuracy ε in time O(k^{1/3} n^{5/3} log(1/ε)). This asymptotically improves the O(n^2) running time of Nested Dissection for all k ≪ n. We also give a result that improves on Nested Dissection even when we allow any aspect ratio for each of the k convex structures (but we still require well-conditioned individual tetrahedrons). In this regime, we improve on Nested Dissection for k ≪ n^{1/44}. The key idea of our algorithm is to combine nested dissection and support theory. Both of these techniques for solving linear systems are well studied, but usually separately. Our algorithm decomposes a 3-dimensional truss into separate and balanced regions with small boundaries. We then bound the spectrum of each such region separately, and utilize such bounds to obtain improved algorithms by preconditioning with partial states of separator-based Gaussian elimination.
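For context, here is a minimal Python sketch of the classical nested dissection ordering the algorithm builds on, for the simplest possible case of a path graph; the paper itself works with 3-dimensional trusses and combines such separator orderings with support-theory preconditioning. The function below is an illustrative toy, not the paper's algorithm.

def nested_dissection_order(lo, hi):
    """Elimination order for the path graph on vertices lo..hi-1:
    order each half recursively, then the single middle separator
    vertex last, so elimination never couples the two halves and
    fill-in stays confined to small blocks."""
    if hi - lo <= 2:
        return list(range(lo, hi))
    mid = (lo + hi) // 2  # one vertex separates a path into two halves
    return (nested_dissection_order(lo, mid)
            + nested_dissection_order(mid + 1, hi)
            + [mid])

print(nested_dissection_order(0, 15))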


conference on learning theory | 2015

Algorithms for Lipschitz Learning on Graphs

Rasmus Kyng; Anup Rao; Sushant Sachdeva; Daniel A. Spielman


neural information processing systems | 2015

Fast, provable algorithms for isotonic regression in all ℓ_p-norms

Rasmus Kyng; Anup Rao; Sushant Sachdeva


symposium on the theory of computing | 2017

Sampling random spanning trees faster than matrix multiplication

David Durfee; Rasmus Kyng; John Peebles; Anup Rao; Sushant Sachdeva


arXiv: Data Structures and Algorithms | 2014

Preconditioning in Expectation

Michael B. Cohen; Rasmus Kyng; Jakub W. Pachocki; Richard Peng; Anup Rao

Collaboration


Dive into the details of Rasmus Kyng's collaborations.

Top Co-Authors

Richard Peng
Massachusetts Institute of Technology

Jakub W. Pachocki
Carnegie Mellon University

Michael B. Cohen
Massachusetts Institute of Technology

John Peebles
Massachusetts Institute of Technology

Peng Zhang
Georgia Institute of Technology