Publications


Featured research published by Erin Carson.


Acta Numerica | 2014

Communication lower bounds and optimal algorithms for numerical linear algebra

Grey Ballard; Erin Carson; James Demmel; Mark Hoemmen; Nicholas Knight; Oded Schwartz

The traditional metric for the efficiency of a numerical algorithm has been the number of arithmetic operations it performs. Technological trends have long been reducing the time to perform an arithmetic operation, so it is no longer the bottleneck in many algorithms; rather, communication, or moving data, is the bottleneck. This motivates us to seek algorithms that move as little data as possible, either between levels of a memory hierarchy or between parallel processors over a network. In this paper we summarize recent progress in three aspects of this problem. First we describe lower bounds on communication. Some of these generalize known lower bounds for dense classical (O(n³)) matrix multiplication to all direct methods of linear algebra, to sequential and parallel algorithms, and to dense and sparse matrices. We also present lower bounds for Strassen-like algorithms, and for iterative methods, in particular Krylov subspace methods applied to sparse matrices. Second, we compare these lower bounds to widely used versions of these algorithms, and note that these widely used algorithms usually communicate asymptotically more than is necessary. Third, we identify or invent new algorithms for most linear algebra problems that do attain these lower bounds, and demonstrate large speed-ups in theory and practice.
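The blocked algorithms the survey describes win by reusing each fetched tile many times. A toy sketch of that idea (plain Python; the tile-load counter is an illustrative stand-in for slow-memory traffic, not the paper's model, and all names here are invented for the example):

```python
# Toy model of communication in matrix multiplication: count how many
# b-by-b tiles are fetched from "slow memory" while forming C = A * B
# for n x n matrices stored as lists of lists. Illustrative only.

def tiled_matmul(A, B, n, b):
    """Tiled multiply with tile size b; returns (C, tile_loads)."""
    C = [[0.0] * n for _ in range(n)]
    loads = 0
    for ii in range(0, n, b):
        for jj in range(0, n, b):
            for kk in range(0, n, b):
                loads += 2  # one tile of A and one tile of B fetched
                for i in range(ii, min(ii + b, n)):
                    for j in range(jj, min(jj + b, n)):
                        s = 0.0
                        for k in range(kk, min(kk + b, n)):
                            s += A[i][k] * B[k][j]
                        C[i][j] += s
    return C, loads

n = 8
A = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
B = [[float(i * n + j) for j in range(n)] for i in range(n)]
C1, loads1 = tiled_matmul(A, B, n, b=1)  # unblocked: many tile fetches
C4, loads4 = tiled_matmul(A, B, n, b=4)  # blocked: far fewer fetches
```

Since each fetch moves b² words, the words moved scale as O(n³/b); choosing b so that three tiles fit in a fast memory of size M (b ≈ √(M/3)) recovers the classical O(n³/√M) communication bound the paper generalizes.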


SIAM Journal on Matrix Analysis and Applications | 2014

A Residual Replacement Strategy for Improving the Maximum Attainable Accuracy of s-step Krylov Subspace Methods

Erin Carson; James Demmel

Krylov subspace methods are a popular class of iterative methods for solving linear systems with large, sparse matrices. On modern computer architectures, both sequential and parallel performance of classical Krylov methods is limited by costly data movement, or communication, required to update the approximate solution in each iteration. This has motivated communication-avoiding Krylov methods, based on s-step formulations, which reduce data movement by a factor of O(s) by reordering the computations in classical Krylov methods to exploit locality. Studies on the finite precision behavior of communication-avoiding Krylov methods in the literature have thus far been empirical in nature; in this work, we provide the first quantitative analysis of the maximum attainable accuracy of communication-avoiding Krylov subspace methods in finite precision. Following the analysis for classical Krylov methods, we derive a bound on the deviation of the true and updated residuals in communication-avoiding conjugate gradient...


acm symposium on parallel algorithms and architectures | 2014

Tradeoffs between synchronization, communication, and computation in parallel linear algebra computations

Edgar Solomonik; Erin Carson; Nicholas Knight; James Demmel


SIAM Journal on Scientific Computing | 2013

Avoiding Communication in Nonsymmetric Lanczos-Based Krylov Subspace Methods

Erin Carson; Nicholas Knight; James Demmel


international parallel and distributed processing symposium | 2014

s-Step Krylov Subspace Methods as Bottom Solvers for Geometric Multigrid

Samuel Williams; Michael J. Lijewski; Ann S. Almgren; Brian Van Straalen; Erin Carson; Nicholas Knight; James Demmel


international conference on parallel processing | 2013

Exploiting Data Sparsity in Parallel Matrix Powers Computations

Nicholas Knight; Erin Carson; James Demmel
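A matrix powers kernel is the computation that lets s-step methods fetch data once per s iterations instead of every iteration. A minimal sketch (dense and sequential for clarity; real kernels exploit sparsity and distribute the work, and every name here is illustrative rather than from the papers):

```python
# Sketch of a "matrix powers" kernel: compute the monomial Krylov basis
# [v, A v, A^2 v, ..., A^s v] in one sweep, so a distributed version can
# gather remote data once per s steps. Dense and sequential; illustrative.

def matrix_powers(A, v, s):
    n = len(v)
    basis = [v[:]]
    for _ in range(s):
        w = basis[-1]
        basis.append([sum(A[i][j] * w[j] for j in range(n))
                      for i in range(n)])
    return basis  # s + 1 vectors spanning the Krylov subspace K_{s+1}(A, v)

A = [[2.0, 1.0],
     [0.0, 2.0]]
v = [1.0, 1.0]
V = matrix_powers(A, v, s=3)  # [v, Av, A^2 v, A^3 v]
```

For sparse A, the remote entries each processor needs for all s products can be identified from the sparsity pattern up front, which is the data-movement saving the s-step formulations rely on.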


SIAM Journal on Scientific Computing | 2017

A New Analysis of Iterative Refinement and its Application to Accurate Solution of Ill-Conditioned Sparse Linear Systems

Erin Carson; Nicholas J. Higham
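A minimal sketch of the classical iterative refinement loop that this paper analyzes in a general mixed-precision setting. Here the low-precision factorization is simulated by a deliberately perturbed inverse; the function names, matrix, and values are invented for the example:

```python
# Sketch of iterative refinement: start from an approximate (cheap) solve
# of A x = b, then repeatedly correct x using residuals. The paper studies
# this scheme with up to three precisions; here everything is in doubles
# and the "low-precision" solver is a slightly wrong inverse of A.

def refine(A, b, approx_solve, steps):
    n = len(b)
    x = approx_solve(b)                      # initial sloppy solve
    for _ in range(n * 0 + steps):
        # residual (the paper's analysis allows extra precision here)
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        d = approx_solve(r)                  # correction from the cheap solver
        x = [x[i] + d[i] for i in range(n)]
    return x

def approx_solve(v):
    # stand-in for a low-precision LU solve: a perturbed inverse of A
    Minv = [[0.28, -0.09], [-0.09, 0.37]]
    return [Minv[0][0] * v[0] + Minv[0][1] * v[1],
            Minv[1][0] * v[0] + Minv[1][1] * v[1]]

A = [[4.0, 1.0], [1.0, 3.0]]   # SPD; exact solution of A x = b is [1/11, 7/11]
b = [1.0, 2.0]
x = refine(A, b, approx_solve, steps=20)
```

Each pass multiplies the error by roughly (I - Minv A), so as long as that iteration matrix has norm below one, the sloppy solver is repaired to working accuracy, which is the convergence condition the paper's analysis makes precise.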


parallel computing | 2016

Trade-Offs Between Synchronization, Communication, and Computation in Parallel Linear Algebra Computations

Edgar Solomonik; Erin Carson; Nicholas Knight; James Demmel

This paper derives tradeoffs between three basic costs of a parallel algorithm: synchronization, data movement, and computational cost. These tradeoffs are lower bounds on the execution time of the algorithm that are independent of the number of processors but dependent on the problem size. Therefore, they provide lower bounds on the parallel execution time of any algorithm computed by a system composed of any number of homogeneous components, each with associated computational, communication, and synchronization payloads. We employ a theoretical model that counts the amount of work and data movement as a maximum over any execution path during the parallel computation. By considering this metric, rather than the total communication volume over the whole machine, we obtain new insights into the characteristics of parallel schedules for algorithms with non-trivial dependency structures. We also present reductions from BSP and LogP algorithms to our execution model, extending our lower bounds to these two models of parallel computation. We first develop our results for general dependency graphs and hypergraphs based on their expansion properties, then we apply these results to a number of specific algorithms in numerical linear algebra, namely triangular substitution, Gaussian elimination, and Krylov subspace methods. Our lower bound for LU factorization demonstrates the optimality of Tiskin's LU algorithm, answering an open question posed in his paper, as well as of the 2.5D LU algorithm, which has analogous costs. We treat the computations in a general manner by noting that they share a similar dependency hypergraph structure and by analyzing the communication requirements of lattice hypergraph structures.


winter simulation conference | 2007

Using flexible points in a developing simulation of selective dissolution in alloys

Joseph C. Carnahan; Erin Carson; Paul F. Reynolds; S.A. Policastro; Robert G. Kelly


SIAM Journal on Scientific Computing | 2018

The Numerical Stability Analysis of Pipelined Conjugate Gradient Methods: Historical Context and Methodology

Erin Carson; Miroslav Rozložník; Zdeněk Strakoš; Petr Tichý; Miroslav Tůma

Krylov subspace methods are iterative methods for solving large, sparse linear systems and eigenvalue problems in a variety of scientific domains. On modern computer architectures, communication, or movement of data, takes much longer than the equivalent amount of computation. Classical formulations of Krylov subspace methods require data movement in each iteration, creating a performance bottleneck and thus increasing runtime. This motivated s-step, or communication-avoiding, Krylov subspace methods, which only perform data movement every...
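The stability issue studied in these papers, the drift between the recursively updated residual and the true residual, already shows up in textbook conjugate gradient. A plain classical CG sketch that measures this deviation (illustrative only; the papers analyze pipelined and s-step variants, not this version, and all names here are invented):

```python
# Sketch: in CG the residual r_k is updated recursively, so in finite
# precision it can deviate from the true residual b - A x_k. Run classical
# CG on a tiny SPD system and measure that deviation. Illustrative only.

def cg_with_drift(A, b, iters):
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # recursively updated residual
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]   # recursive update
        rs_new = sum(ri * ri for ri in r)
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    true_r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    dev = max(abs(true_r[i] - r[i]) for i in range(n))  # the drift
    return x, dev

A = [[4.0, 1.0], [1.0, 3.0]]    # 2 x 2 SPD: CG converges in 2 steps
b = [1.0, 2.0]
x, dev = cg_with_drift(A, b, iters=2)
```

For this well-conditioned toy system the drift stays at rounding level; the cited analyses bound how reorderings in pipelined and s-step formulations can amplify it, limiting the maximum attainable accuracy.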

Collaboration


Dive into Erin Carson's collaborations.

Top Co-Authors

James Demmel (University of California)
Oded Schwartz (University of California)
Ann S. Almgren (Lawrence Berkeley National Laboratory)
Brian Van Straalen (Lawrence Berkeley National Laboratory)
Grey Ballard (Sandia National Laboratories)