Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alex Druinsky is active.

Publication


Featured research published by Alex Druinsky.


Journal of the ACM | 2015

Revisiting Asynchronous Linear Solvers: Provable Convergence Rate through Randomization

Haim Avron; Alex Druinsky; Anshul Gupta

Asynchronous methods for solving systems of linear equations have been researched since Chazan and Miranker's pioneering 1969 paper. The underlying idea of asynchronous methods is to avoid processor idle time by allowing the processors to continue to make progress even if not all progress made by other processors has been communicated to them. Historically, work on asynchronous methods for solving linear equations focused on proving convergence in the limit. Comparison of the asynchronous convergence rate with its synchronous counterpart and its scaling with the number of processors were seldom studied, and are still not well understood. Furthermore, the applicability of these methods was limited to restricted classes of matrices, such as diagonally dominant matrices. We propose a randomized shared-memory asynchronous method for general symmetric positive definite matrices. We rigorously analyze the convergence rate and prove that it is linear, and is close to that of the method's synchronous counterpart if the processor count is not excessive relative to the size and sparsity of the matrix. Our work presents a significant improvement in convergence analysis as well as in the applicability of asynchronous linear solvers, and suggests randomization as a key paradigm to serve as a foundation for asynchronous methods.
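The sequential core of the method described above is a randomized coordinate (Gauss-Seidel) update for a symmetric positive definite system. A minimal sketch, assuming only that sequential building block; the shared-memory asynchrony and the convergence-rate analysis that are the paper's contribution are not modeled here, and the function name is illustrative:

```python
import numpy as np

def randomized_gauss_seidel(A, b, iters=5000, seed=0):
    """Sequential sketch: each step picks a random coordinate i and
    updates x[i] so that the i-th component of the residual b - A x
    becomes zero. For SPD A this converges linearly in expectation."""
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.integers(n)
        x[i] += (b[i] - A[i] @ x) / A[i, i]
    return x

# Tiny SPD example (diagonally dominant, hence positive definite).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = randomized_gauss_seidel(A, b)
```

In the asynchronous setting of the paper, many processors apply such updates concurrently, possibly reading stale entries of x; randomizing the coordinate choice is what makes the convergence rate provable.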


Parallel Computing | 2016

Hypergraph Partitioning for Sparse Matrix-Matrix Multiplication

Grey Ballard; Alex Druinsky; Nicholas Knight; Oded Schwartz

We propose a fine-grained hypergraph model for sparse matrix-matrix multiplication (SpGEMM), a key computational kernel in scientific computing and data analysis whose performance is often communication bound. This model correctly describes both the interprocessor communication volume along a critical path in a parallel computation and also the volume of data moving through the memory hierarchy in a sequential computation. We show that identifying a communication-optimal algorithm for particular input matrices is equivalent to solving a hypergraph partitioning problem. Our approach is nonzero structure dependent, meaning that we seek the best algorithm for the given input matrices. In addition to our three-dimensional fine-grained model, we also propose coarse-grained one-dimensional and two-dimensional models that correspond to simpler SpGEMM algorithms. We explore the relations between our models theoretically, and we study their performance experimentally in the context of three applications that use SpGEMM as a key computation. For each application, we find that at least one coarse-grained model is as communication efficient as the fine-grained model. We also observe that different applications have affinities for different algorithms. Our results demonstrate that hypergraphs are an accurate model for reasoning about the communication costs of SpGEMM as well as a practical tool for exploring the SpGEMM algorithm design space.
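The fine-grained model above can be illustrated by constructing the hypergraph directly from the nonzero structures: one vertex per scalar multiplication A[i,k]*B[k,j], and one net per nonzero of A, B, and C grouping the multiplications that touch it, so a cut net corresponds to data that must be communicated. A minimal sketch with illustrative helper names (not code from the paper):

```python
from collections import defaultdict

def fine_grained_hypergraph(A_nnz, B_nnz):
    """Build the fine-grained SpGEMM hypergraph for C = A @ B.

    A_nnz, B_nnz: sets of (row, col) nonzero positions.
    Returns the vertex list (one (i, k, j) triple per flop) and a dict
    mapping each net, keyed by the nonzero it represents, to the set of
    vertex indices it connects."""
    B_by_row = defaultdict(list)
    for k, j in B_nnz:
        B_by_row[k].append(j)
    vertices = []            # vertex index -> flop (i, k, j)
    nets = defaultdict(set)  # net key -> vertex indices in the net
    for i, k in A_nnz:
        for j in B_by_row[k]:
            v = len(vertices)
            vertices.append((i, k, j))
            nets[('A', i, k)].add(v)  # reads of A[i, k]
            nets[('B', k, j)].add(v)  # reads of B[k, j]
            nets[('C', i, j)].add(v)  # contributions to C[i, j]
    return vertices, nets

# 2x2 example with three nonzeros in each input matrix.
A_nnz = {(0, 0), (0, 1), (1, 1)}
B_nnz = {(0, 0), (1, 0), (1, 1)}
verts, nets = fine_grained_hypergraph(A_nnz, B_nnz)
```

Partitioning the vertices among processors while minimizing the weight of cut nets is then exactly the communication-minimization problem the paper studies; the coarse-grained 1D and 2D models arise by constraining how the flops may be grouped.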


ACM Symposium on Parallel Algorithms and Architectures | 2015

Brief Announcement: Hypergraph Partitioning for Parallel Sparse Matrix-Matrix Multiplication

Grey Ballard; Alex Druinsky; Nicholas Knight; Oded Schwartz

The performance of parallel algorithms for sparse matrix-matrix multiplication is typically determined by the amount of interprocessor communication performed, which in turn depends on the nonzero structure of the input matrices. In this paper, we characterize the communication cost of a sparse matrix-matrix multiplication algorithm in terms of the size of a cut of an associated hypergraph that encodes the computation for a given input nonzero structure. Obtaining an optimal algorithm corresponds to solving a hypergraph partitioning problem. Our hypergraph model generalizes several existing models for sparse matrix-vector multiplication, and we can leverage hypergraph partitioners developed for that computation to improve application-specific algorithms for multiplying sparse matrices.


SIAM Journal on Matrix Analysis and Applications | 2014

Communication-Avoiding Symmetric-Indefinite Factorization

Grey Ballard; Dulceneia Becker; James Demmel; Jack J. Dongarra; Alex Druinsky; Inon Peled; Oded Schwartz; Sivan Toledo; Ichitaro Yamazaki

We describe and analyze a novel symmetric triangular factorization algorithm. The algorithm is essentially a block version of Aasen's triangular tridiagonalization. It factors a dense symmetric matrix A as the product A = PLTL^{T}P^{T}, where P is a permutation matrix.


International Parallel and Distributed Processing Symposium | 2013

Implementing a Blocked Aasen's Algorithm with a Dynamic Scheduler on Multicore Architectures

Grey Ballard; Dulceneia Becker; James Demmel; Jack J. Dongarra; Alex Druinsky; Inon Peled; Oded Schwartz; Sivan Toledo; Ichitaro Yamazaki



SIAM Journal on Matrix Analysis and Applications | 2016

Improving the Numerical Stability of Fast Matrix Multiplication

Grey Ballard; Austin R. Benson; Alex Druinsky; Benjamin Lipshitz; Oded Schwartz



SIAM Journal on Matrix Analysis and Applications | 2011

The Growth-Factor Bound for the Bunch-Kaufman Factorization Is Tight

Alex Druinsky; Sivan Toledo



Numerical Linear Algebra With Applications | 2018

Wilkinson's inertia-revealing factorization and its application to sparse matrices

Alex Druinsky; Eyal Carlebach; Sivan Toledo



International Conference on Conceptual Structures | 2016

Tuning the Coarse Space Construction in a Spectral AMG Solver

Osni Marques; Alex Druinsky; Xiaoye S. Li; Andrew T. Barker; Panayot S. Vassilevski; Delyan Kalchev



International Conference on Parallel Processing | 2015

Comparative Performance Analysis of Coarse Solvers for Algebraic Multigrid on Multicore and Manycore Architectures

Alex Druinsky; Pieter Ghysels; Xiaoye S. Li; Osni Marques; Samuel Williams; Andrew T. Barker; Delyan Kalchev; Panayot S. Vassilevski


Collaboration


Dive into Alex Druinsky's collaborations.

Top Co-Authors

Grey Ballard, Sandia National Laboratories
Oded Schwartz, University of California
Andrew T. Barker, Lawrence Livermore National Laboratory
Delyan Kalchev, Lawrence Livermore National Laboratory