Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Grey Ballard is active.

Publication


Featured research published by Grey Ballard.


SIAM Journal on Matrix Analysis and Applications | 2011

Minimizing Communication in Linear Algebra

Grey Ballard; James Demmel; Olga Holtz; Oded Schwartz

In 1981 Hong and Kung proved a lower bound on the amount of communication (amount of data moved between a small, fast memory and a large, slow memory) needed to perform dense, n-by-n matrix multiplication using the conventional O(n³) algorithm, where the input matrices are too large to fit in the small, fast memory. In 2004 Irony, Toledo, and Tiskin gave a new proof of this result and extended it to the parallel case (where communication means the amount of data moved between processors). In both cases the lower bound may be expressed as Ω(#arithmetic_operations/√M), where M is the size of the fast memory (or local memory in the parallel case). Here we generalize these results to a much wider variety of algorithms, including LU factorization, Cholesky factorization, LDL^T factorization, QR factorization, the Gram–Schmidt algorithm, and algorithms for eigenvalues and singular values, i.e., essentially all direct methods of linear algebra. The proof works for dense or sparse matrices and for sequential or parallel algorithms...
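As an illustration of the bound described above, the following sketch compares the Ω(n³/√M) communication lower bound with the data traffic of a standard blocked (tiled) matrix multiplication; the problem and cache sizes are hypothetical, and the traffic formula is a textbook estimate rather than anything taken from the paper.

```python
import math

def comm_lower_bound(n, M):
    """Hong-Kung-style lower bound on words moved between fast and slow
    memory for classical n-by-n matrix multiplication with a fast memory
    of size M words: Omega(n^3 / sqrt(M))."""
    return n**3 / math.sqrt(M)

def blocked_matmul_traffic(n, M):
    """Rough words-moved estimate for a standard blocked algorithm with
    block size b = sqrt(M/3), so that three b-by-b blocks fit in fast
    memory at once: about 2*n^3/b block reads plus n^2 words each of
    input and output traffic. This is within a constant factor of the
    lower bound, i.e., blocking is asymptotically communication-optimal."""
    b = math.sqrt(M / 3)
    return 2 * n**3 / b + 2 * n**2

n, M = 4096, 1 << 20          # hypothetical matrix and cache sizes
lb = comm_lower_bound(n, M)
actual = blocked_matmul_traffic(n, M)
print(actual / lb)            # a small constant, independent of n
```

The ratio stays bounded as n grows, which is exactly what "attaining the lower bound" means here.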


International Parallel and Distributed Processing Symposium | 2011

Communication-Avoiding QR Decomposition for GPUs

Michael J. Anderson; Grey Ballard; James Demmel; Kurt Keutzer

We describe an implementation of the Communication-Avoiding QR (CAQR) factorization that runs entirely on a single graphics processor (GPU). We show that the reduction in memory traffic provided by CAQR allows us to outperform existing parallel GPU implementations of QR for a large class of tall-skinny matrices. Other GPU implementations of QR handle panel factorizations by either sending the work to a general-purpose processor or using entirely bandwidth-bound operations, incurring data transfer overheads. In contrast, our QR is done entirely on the GPU using compute-bound kernels, meaning performance is good regardless of the width of the matrix. As a result, we outperform CULA, a parallel linear algebra library for GPUs, by up to 17x for tall-skinny matrices and Intel's Math Kernel Library (MKL) by up to 12x. We also discuss stationary video background subtraction as a motivating application. We apply a recent statistical approach, which requires many iterations of computing the singular value decomposition of a tall-skinny matrix. Using CAQR as a first step to getting the singular value decomposition, we are able to get the answer 3x faster than if we use a traditional bandwidth-bound GPU QR factorization tuned specifically for that matrix size, and 30x faster than if we use Intel's Math Kernel Library (MKL) singular value decomposition routine on a multicore CPU.
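The tall-skinny QR (TSQR) reduction at the heart of CAQR can be sketched in a few lines of NumPy; this one-level reduction is an illustration of the idea, not the paper's GPU implementation, and the block size is an arbitrary choice.

```python
import numpy as np

def tsqr_r(A, block_rows):
    """One-level TSQR reduction for a tall-skinny matrix A:
    factor independent row blocks (each a local, compute-bound QR),
    then factor the stacked small R factors once. Returns an
    upper-triangular R that agrees with the R of a direct QR of A
    up to the signs of its rows."""
    m, n = A.shape
    Rs = []
    for i in range(0, m, block_rows):
        # local QR on each row block; these are independent,
        # which is what lets CAQR avoid communication
        Rs.append(np.linalg.qr(A[i:i + block_rows], mode='r'))
    # combine step: QR of the stacked n-by-n R factors
    return np.linalg.qr(np.vstack(Rs), mode='r')

rng = np.random.default_rng(0)
A = rng.standard_normal((10_000, 8))      # tall-skinny input
R_tsqr = tsqr_r(A, 2_500)                 # 4 independent row blocks
R_ref = np.linalg.qr(A, mode='r')
# the two R factors agree up to row signs
print(np.allclose(np.abs(R_tsqr), np.abs(R_ref)))
```

In the full algorithm the combine step is itself a tree of small QRs, which is what makes the whole factorization compute-bound on the GPU.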


ACM Symposium on Parallel Algorithms and Architectures | 2013

Communication optimal parallel multiplication of sparse random matrices

Grey Ballard; Aydin Buluç; James Demmel; Laura Grigori; Benjamin Lipshitz; Oded Schwartz; Sivan Toledo

Parallel algorithms for sparse matrix-matrix multiplication typically spend most of their time on inter-processor communication rather than on computation, and hardware trends predict the relative cost of communication will only increase. Thus, sparse matrix multiplication algorithms must minimize communication costs in order to scale to large processor counts. In this paper, we consider multiplying sparse matrices corresponding to Erdős-Rényi random graphs on distributed-memory parallel machines. We prove a new lower bound on the expected communication cost for a wide class of algorithms. Our analysis of existing algorithms shows that, while some are optimal for a limited range of matrix density and number of processors, none is optimal in general. We obtain two new parallel algorithms and prove that they match the expected communication cost lower bound, and hence they are optimal.
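To make the setting concrete, here is a small sketch of sparse matrix-matrix multiplication on Erdős–Rényi random matrices; the dictionary storage, Gustavson-style row algorithm, and parameter values are illustrative assumptions and have nothing to do with the distributed-memory algorithms analyzed in the paper.

```python
import random
from collections import defaultdict

def er_sparse(n, d, rng):
    """Random sparse n-by-n matrix in the Erdos-Renyi model ER(n, d):
    each entry is nonzero independently with probability d/n, so every
    row holds d nonzeros in expectation. Stored as {row: {col: value}}."""
    p = d / n
    A = defaultdict(dict)
    for i in range(n):
        for j in range(n):
            if rng.random() < p:
                A[i][j] = 1.0
    return A

def spgemm(A, B):
    """Row-by-row sparse product (Gustavson's algorithm). Each nonzero
    a_ik is multiplied against the nonzeros of row k of B, so the flop
    count is about d^2 * n in expectation for two ER(n, d) inputs,
    far less than the n^3 of the dense algorithm."""
    C = defaultdict(dict)
    flops = 0
    for i, row in A.items():
        for k, a in row.items():
            for j, b in B.get(k, {}).items():
                C[i][j] = C[i].get(j, 0.0) + a * b
                flops += 1
    return C, flops

rng = random.Random(0)
n, d = 400, 8
A = er_sparse(n, d, rng)
B = er_sparse(n, d, rng)
C, flops = spgemm(A, B)
print(flops)   # concentrates around d*d*n = 25600 in expectation
```

With so little arithmetic per nonzero, moving the data between processors dominates, which is why the paper's communication lower bound, rather than the flop count, governs scalability.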


Acta Numerica | 2014

Communication lower bounds and optimal algorithms for numerical linear algebra

Grey Ballard; Erin Carson; James Demmel; Mark Hoemmen; Nicholas Knight; Oded Schwartz

The traditional metric for the efficiency of a numerical algorithm has been the number of arithmetic operations it performs. Technological trends have long been reducing the time to perform an arithmetic operation, so it is no longer the bottleneck in many algorithms; rather, communication, or moving data, is the bottleneck. This motivates us to seek algorithms that move as little data as possible, either between levels of a memory hierarchy or between parallel processors over a network. In this paper we summarize recent progress in three aspects of this problem. First we describe lower bounds on communication. Some of these generalize known lower bounds for dense classical (O(n³)) matrix multiplication to all direct methods of linear algebra, to sequential and parallel algorithms, and to dense and sparse matrices. We also present lower bounds for Strassen-like algorithms, and for iterative methods, in particular Krylov subspace methods applied to sparse matrices. Second, we compare these lower bounds to widely used versions of these algorithms, and note that these widely used algorithms usually communicate asymptotically more than is necessary. Third, we identify or invent new algorithms for most linear algebra problems that do attain these lower bounds, and demonstrate large speed-ups in theory and practice.
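For classical matrix multiplication, the two headline bounds surveyed above can be written out explicitly; this is a restatement of results described in the abstract, using W for the number of words moved.

```latex
% Sequential case: fast memory of size M words.
W \;=\; \Omega\!\left(\frac{\#\text{arithmetic operations}}{\sqrt{M}}\right)
  \;=\; \Omega\!\left(\frac{n^3}{\sqrt{M}}\right).
% Parallel case: P processors, each doing n^3/P of the work
% with local memory M = O(n^2/P); substituting gives
W \;=\; \Omega\!\left(\frac{n^3/P}{\sqrt{n^2/P}}\right)
  \;=\; \Omega\!\left(\frac{n^2}{\sqrt{P}}\right)
  \quad \text{words moved per processor.}
```

The parallel bound follows from the sequential one by treating each processor's local memory as the "fast memory" of the sequential model.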


ACM Symposium on Parallel Algorithms and Architectures | 2012

Brief announcement: strong scaling of matrix multiplication algorithms and memory-independent communication lower bounds

Grey Ballard; James Demmel; Olga Holtz; Benjamin Lipshitz; Oded Schwartz

A parallel algorithm has perfect strong scaling if its running time on P processors is linear in 1/P, including all communication costs. Distributed-memory parallel algorithms for matrix multiplication with perfect strong scaling have only recently been found. One is based on classical matrix multiplication (Solomonik and Demmel, 2011), and one is based on Strassen's fast matrix multiplication (Ballard, Demmel, Holtz, Lipshitz, and Schwartz, 2012). Both algorithms scale perfectly, but only up to some number of processors where the inter-processor communication no longer scales. We obtain a memory-independent communication cost lower bound on classical and Strassen-based distributed-memory matrix multiplication algorithms. These bounds imply that no classical or Strassen-based parallel matrix multiplication algorithm can strongly scale perfectly beyond the ranges already attained by the two parallel algorithms mentioned above. The memory-independent bounds and the strong scaling bounds generalize to other algorithms.


SIAM Journal on Scientific Computing | 2010

Communication-optimal Parallel and Sequential Cholesky Decomposition

Grey Ballard; James Demmel; Olga Holtz; Oded Schwartz


International Parallel and Distributed Processing Symposium | 2016

Parallel Tensor Compression for Large-Scale Scientific Data

Woody Austin; Grey Ballard; Tamara G. Kolda


SIAM Journal on Scientific Computing | 2016

Exploiting Multiple Levels of Parallelism in Sparse Matrix-Matrix Multiplication

Ariful Azad; Grey Ballard; Aydin Buluç; James Demmel; Laura Grigori; Oded Schwartz; Sivan Toledo; Samuel Williams


ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming | 2015

A framework for practical parallel fast matrix multiplication

Austin R. Benson; Grey Ballard
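The notion of perfect strong scaling from the 2012 brief announcement above can be illustrated with a toy runtime model; the constants and the 1/√P communication term below are illustrative assumptions for a 2D-style algorithm, not figures from the paper.

```python
import math

def time_2d(n, P):
    """Toy runtime model for n-by-n matrix multiplication on P
    processors: compute time scales as 1/P, but for 2D algorithms
    (limited local memory) the per-processor communication volume
    scales only as n^2/sqrt(P). Units and constants are arbitrary."""
    return n**3 / P + n**2 / math.sqrt(P)

def speedup(n, P):
    return time_2d(n, 1) / time_2d(n, P)

n = 1000
for P in (1, 16, 256, 4096):
    # speedup tracks P at first, then falls increasingly short of it
    # as the sqrt(P)-scaling communication term starts to dominate
    print(P, round(speedup(n, P), 1))
```

Algorithms that exploit extra memory (so-called 2.5D or 3D algorithms) make the communication term scale as 1/P too, restoring perfect strong scaling, but the memory-independent bounds in the paper show this only works up to a limited range of P.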


ACM Symposium on Parallel Algorithms and Architectures | 2011

Graph expansion and communication costs of fast matrix multiplication: regular submission

Grey Ballard; James Demmel; Olga Holtz; Oded Schwartz

Numerical algorithms have two kinds of costs: arithmetic and communication, by which we mean either moving data between levels of a memory hierarchy (in the sequential case) or over a network connecting processors (in the parallel case). Communication costs often dominate arithmetic costs, so it is of interest to design algorithms minimizing communication. In this paper we first extend known lower bounds on the communication cost (both for bandwidth and for latency) of conventional (O(n³)) matrix multiplication...
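For reference, the fast matrix multiplication analyzed in this paper can be sketched directly; this is a plain textbook implementation of Strassen's recursion with NumPy (the cutoff value is an arbitrary choice), not the communication-optimized variant the paper studies.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's fast matrix multiplication for n-by-n matrices with
    n a power of two: 7 recursive half-size products instead of 8,
    giving O(n^log2(7)) ~ O(n^2.81) arithmetic. Falls back to the
    classical product below the cutoff."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # the seven Strassen products
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    # recombine into the four quadrants of C
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((128, 128))
B = rng.standard_normal((128, 128))
print(np.allclose(strassen(A, B), A @ B))
```

The recursion's data reuse pattern differs from the classical algorithm's, which is why its communication lower bound requires the separate graph-expansion analysis developed in the paper.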

Collaboration


Dive into Grey Ballard's collaborations.

Top Co-Authors

James Demmel
University of California

Oded Schwartz
University of California

Olga Holtz
University of California

Alex Druinsky
Lawrence Berkeley National Laboratory

Tamara G. Kolda
Oak Ridge National Laboratory

Mathias Jacquelin
Lawrence Berkeley National Laboratory