Publications


Featured research published by Aydin Buluç.


IEEE International Conference on High Performance Computing Data and Analytics | 2011

The Combinatorial BLAS: design, implementation, and applications

Aydin Buluç; John R. Gilbert

This paper presents a scalable high-performance software library to be used for graph analysis and data mining. Large combinatorial graphs appear in many applications of high-performance computing, including computational biology, informatics, analytics, web search, dynamical systems, and sparse matrix methods. Graph computations are difficult to parallelize using traditional approaches due to their irregular nature and low operational intensity. Many graph computations, however, contain sufficient coarse-grained parallelism for thousands of processors, which can be uncovered by using the right primitives. We describe the parallel Combinatorial BLAS, which consists of a small but powerful set of linear algebra primitives specifically targeting graph and data mining applications. We provide an extensible library interface and some guiding principles for future development. The library is evaluated using two important graph algorithms, in terms of both performance and ease-of-use. The scalability and raw performance of the example applications, using the Combinatorial BLAS, are unprecedented on distributed memory clusters.
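
To make the "graph algorithms as linear algebra" idea concrete, here is a minimal serial sketch in Python/SciPy of breadth-first search written as repeated sparse matrix-vector products, the style of primitive the library generalizes. This is illustrative only and does not use the Combinatorial BLAS API, whose names and distributed types differ.

    import numpy as np
    import scipy.sparse as sp

    def bfs_levels(A, source):
        """BFS level of every vertex; -1 marks unreachable vertices.
        A is a SciPy CSR adjacency matrix with 0/1 entries."""
        n = A.shape[0]
        levels = np.full(n, -1)
        frontier = np.zeros(n)
        frontier[source] = 1.0
        level = 0
        while frontier.any():
            levels[frontier > 0] = level
            # One traversal step as y = A^T x, masked by the visited set.
            frontier = (A.T @ frontier) * (levels == -1)
            level += 1
        return levels

    # Example: a 4-cycle 0-1-2-3-0
    A = sp.csr_matrix(np.array([[0, 1, 0, 1],
                                [1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [1, 0, 1, 0]], dtype=float))
    print(bfs_levels(A, 0))  # [0 1 2 1]

In the Combinatorial BLAS the multiply runs over a user-chosen semiring and the matrix is distributed across processors, but the algorithmic skeleton is the same.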


ACM Symposium on Parallel Algorithms and Architectures | 2009

Parallel sparse matrix-vector and matrix-transpose-vector multiplication using compressed sparse blocks

Aydin Buluç; Jeremy T. Fineman; Matteo Frigo; John R. Gilbert; Charles E. Leiserson

This paper introduces a storage format for sparse matrices, called compressed sparse blocks (CSB), which allows both Ax and Aᵀx to be computed efficiently in parallel, where A is an n×n sparse matrix with nnz ≥ n nonzeros and x is a dense n-vector. Our algorithms use Θ(nnz) work (serial running time) and Θ(√n lg n) span (critical-path length), yielding a parallelism of Θ(nnz/√n lg n), which is amply high for virtually any large matrix. The storage requirement for CSB is the same as that for the more-standard compressed-sparse-rows (CSR) format, for which computing Ax in parallel is easy but Aᵀx is difficult. Benchmark results indicate that on one processor, the CSB algorithms for Ax and Aᵀx run just as fast as the CSR algorithm for Ax, but the CSB algorithms also scale up linearly with processors until limited by off-chip memory bandwidth.
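
As a toy illustration of the blocked layout, here is a serial Python/SciPy sketch under the assumption of a square grid of β×β blocks; it is not the paper's cache-oblivious implementation, only the structural point that both products can walk the same grid of blocks.

    import numpy as np
    import scipy.sparse as sp

    def to_blocks(A, beta):
        """Split an n-by-n sparse matrix into a ceil(n/beta)-square grid of blocks."""
        A = sp.csr_matrix(A)
        n = A.shape[0]
        nb = -(-n // beta)  # ceil(n / beta)
        return [[A[i*beta:(i+1)*beta, j*beta:(j+1)*beta]
                 for j in range(nb)] for i in range(nb)]

    def blocked_matvec(blocks, x, beta, transpose=False):
        """Compute A @ x (or A.T @ x) from the same block grid."""
        y = np.zeros(len(x))
        for i, row in enumerate(blocks):
            for j, B in enumerate(row):
                if transpose:
                    y[j*beta : j*beta + B.shape[1]] += B.T @ x[i*beta : i*beta + B.shape[0]]
                else:
                    y[i*beta : i*beta + B.shape[0]] += B @ x[j*beta : j*beta + B.shape[1]]
        return y

    A = sp.random(10, 10, density=0.3, format='csr')
    x = np.random.rand(10)
    blocks = to_blocks(A, beta=4)
    assert np.allclose(blocked_matvec(blocks, x, 4), A @ x)
    assert np.allclose(blocked_matvec(blocks, x, 4, transpose=True), A.T @ x)

The point of the format is that neither product needs an explicit transpose: Aᵀx simply traverses the grid by block column instead of block row.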


IEEE International Conference on High Performance Computing Data and Analytics | 2011

Parallel breadth-first search on distributed memory systems

Aydin Buluç; Kamesh Madduri

Data-intensive, graph-based computations are pervasive in several scientific applications, and are known to be quite challenging to implement on distributed memory systems. In this work, we explore the design space of parallel algorithms for Breadth-First Search (BFS), a key subroutine in several graph algorithms. We present two highly tuned parallel approaches for BFS on large parallel systems: a level-synchronous strategy that relies on a simple vertex-based partitioning of the graph, and a two-dimensional sparse matrix partitioning-based approach that mitigates parallel communication overhead. For both approaches, we also present hybrid versions with intra-node multithreading. Our novel hybrid two-dimensional algorithm reduces communication times by up to a factor of 3.5, relative to a common vertex-based approach. Our experimental study identifies execution regimes in which these approaches will be competitive, and we demonstrate extremely high performance on leading distributed-memory parallel systems. For instance, for a 40,000-core parallel execution on Hopper, an AMD Magny-Cours based system, we achieve a BFS performance rate of 17.8 billion edge visits per second on an undirected graph of 4.3 billion vertices and 68.7 billion edges with skewed degree distribution.
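
In the algebraic formulation that underlies the two-dimensional approach (the notation below is mine, not verbatim from the paper), one BFS level is a masked sparse matrix-vector product over the boolean semiring:

    f_{k+1} = \left(\mathbf{A}^{\mathsf{T}} f_k\right) \wedge \neg v_k,
    \qquad
    v_{k+1} = v_k \vee f_{k+1},

where f_k is the current frontier and v_k the visited set. Distributing A on a √p × √p processor grid confines the frontier exchange to processor rows and columns rather than all-to-all communication, which is the source of the reduced communication times reported above.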


Genome Biology | 2015

A whole-genome shotgun approach for assembling and anchoring the hexaploid bread wheat genome

Jarrod Chapman; Martin Mascher; Aydin Buluç; Kerrie Barry; Evangelos Georganas; Adam Session; Veronika Strnadova; Jerry Jenkins; Sunish K. Sehgal; Leonid Oliker; Jeremy Schmutz; Katherine A. Yelick; Uwe Scholz; Robbie Waugh; Jesse Poland; Gary J. Muehlbauer; Nils Stein; Daniel S. Rokhsar

Polyploid species have long been thought to be recalcitrant to whole-genome assembly. By combining high-throughput sequencing, recent developments in parallel computing, and genetic mapping, we derive, de novo, a sequence assembly representing 9.1 Gbp of the highly repetitive 16 Gbp genome of hexaploid wheat, Triticum aestivum, and assign 7.1 Gbp of this assembly to chromosomal locations. The genome representation and accuracy of our assembly are comparable to, and in some respects exceed, those of a chromosome-by-chromosome shotgun assembly. Our assembly and mapping strategy uses only short-read sequencing technology and is applicable to any species in which a mapping population can be constructed.


arXiv: Data Structures and Algorithms | 2016

Recent Advances in Graph Partitioning

Aydin Buluç; Henning Meyerhenke; Ilya Safro; Peter Sanders; Christian Schulz

We survey recent trends in practical algorithms for balanced graph partitioning, point to applications, and discuss future research directions.
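
For reference, the balanced k-way partitioning problem that such algorithms address is typically stated as follows (a standard formulation, with ε the allowed imbalance; the survey itself covers many variants):

    \min_{V_1,\dots,V_k} \;
    \bigl|\{\, \{u,v\} \in E : u \in V_i,\ v \in V_j,\ i \neq j \,\}\bigr|
    \quad \text{subject to} \quad
    |V_i| \le (1+\varepsilon)\,\lceil |V|/k \rceil \ \text{for all } i.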


SIAM Journal on Scientific Computing | 2012

Parallel Sparse Matrix-Matrix Multiplication and Indexing: Implementation and Experiments

Aydin Buluç; John R. Gilbert

Generalized sparse matrix-matrix multiplication (or SpGEMM) is a key primitive for many high performance graph algorithms as well as for some linear solvers, such as algebraic multigrid. Here we show that SpGEMM also yields efficient algorithms for general sparse-matrix indexing in distributed memory, provided that the underlying SpGEMM implementation is sufficiently flexible and scalable. We demonstrate that our parallel SpGEMM methods, which use two-dimensional block data distributions with serial hypersparse kernels, are indeed highly flexible, scalable, and memory-efficient in the general case. This algorithm is the first to yield increasing speedup on an unbounded number of processors; our experiments show scaling up to thousands of processors in a variety of test scenarios.
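
The indexing-by-multiplication idea is easy to state in a serial toy (Python/SciPy below; the paper's version runs the same algebra through distributed SpGEMM): extracting A(I, J) is two products with 0/1 selection matrices.

    import numpy as np
    import scipy.sparse as sp

    def spref(A, I, J):
        """Extract the submatrix A[I, J] using only sparse matrix products."""
        m, n = A.shape
        # R picks out rows I; Q picks out columns J.
        R = sp.csr_matrix((np.ones(len(I)), (np.arange(len(I)), I)),
                          shape=(len(I), m))
        Q = sp.csr_matrix((np.ones(len(J)), (J, np.arange(len(J)))),
                          shape=(n, len(J)))
        return R @ A @ Q

    A = sp.random(6, 6, density=0.4, format='csr')
    assert np.allclose(spref(A, [0, 2, 5], [1, 3]).toarray(),
                       A.toarray()[np.ix_([0, 2, 5], [1, 3])])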


Parallel Computing | 2010

Solving path problems on the GPU

Aydin Buluç; John R. Gilbert; Ceren Budak

We consider the computation of shortest paths on Graphics Processing Units (GPUs). The blocked recursive elimination strategy we use is applicable to a class of algorithms (such as all-pairs shortest-paths, transitive closure, and LU decomposition without pivoting) having similar data access patterns. Using the all-pairs shortest-paths problem as an example, we uncover potential gains for this class of algorithms. The impressive computational power and memory bandwidth of the GPU make it an attractive platform for such computationally intensive algorithms. Although improvements over CPU implementations have previously been achieved for these algorithms in terms of raw speed, the utilization of the underlying computational resources was quite low. We implemented a recursively partitioned all-pairs shortest-paths algorithm that harnesses the power of GPUs better than existing implementations. The alternative schedule of path computations allowed us to cast almost all operations as matrix-matrix multiplications on a semiring. Since matrix-matrix multiplication is highly optimized and has a high ratio of computation to communication, our implementation does not suffer from the premature saturation of bandwidth resources that iterative algorithms do. By increasing temporal locality, our implementation runs more than two orders of magnitude faster on an NVIDIA 8800 GPU than on an Opteron. Our work provides evidence that programmers should rethink algorithms instead of directly porting them to the GPU.
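
A serial sketch of the semiring view, in plain NumPy and ignoring the blocking and the GPU entirely: all-pairs shortest paths as repeated matrix "multiplication" over the (min, +) semiring.

    import numpy as np

    def minplus_matmul(A, B):
        """C[i, j] = min_k (A[i, k] + B[k, j]).
        Builds an (n, n, n) temporary, so only suitable for tiny n;
        a real implementation loops or blocks instead."""
        return (A[:, :, None] + B[None, :, :]).min(axis=1)

    def apsp(D):
        """D: dense weight matrix, np.inf where no edge, 0 on the diagonal."""
        n = D.shape[0]
        dist = D.copy()
        steps = 1
        while steps < n:          # log2(n) squarings: path lengths double
            dist = minplus_matmul(dist, dist)
            steps *= 2
        return dist

The recursive GPU algorithm in the paper reorganizes essentially this computation so that most of the work lands in large, highly regular semiring matrix-matrix multiplications.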


International Conference on Parallel Processing | 2008

Challenges and Advances in Parallel Sparse Matrix-Matrix Multiplication

Aydin Buluç; John R. Gilbert

We identify the challenges that are particular to parallel sparse matrix-matrix multiplication (PSpGEMM). We show that sparse algorithms are not as scalable as their dense counterparts because, in general, there are not enough non-trivial arithmetic operations to hide the communication costs and the sparsity overheads. We analyze the scalability of 1D and 2D algorithms for PSpGEMM. While the 1D algorithm is a variant of existing implementations, the 2D algorithms presented here are novel; most of them build on previous research on parallel dense matrix multiplication. We also provide results from preliminary experiments with the 2D algorithms.
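
The 2D algorithms follow the structure of dense 2D matrix multiplication (in the SUMMA/Cannon style; the sparse-specific details are the paper's contribution): with processors arranged on a √p × √p grid and each owning one block of A, B, and C, processor (i, j) accumulates

    C_{ij} \;=\; \bigoplus_{k=1}^{\sqrt{p}} A_{ik} \otimes B_{kj},

so each stage needs only a broadcast of A_{ik} along processor row i and of B_{kj} along processor column j, rather than all-to-all exchange.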


International Parallel and Distributed Processing Symposium | 2011

Reduced-Bandwidth Multithreaded Algorithms for Sparse Matrix-Vector Multiplication

Aydin Buluç; Samuel Williams; Leonid Oliker; James Demmel

On multicore architectures, the ratio of peak memory bandwidth to peak floating-point performance (the byte:flop ratio) is decreasing as core counts increase, further limiting the performance of bandwidth-limited applications. Multiplying a sparse matrix (as well as its transpose, in the unsymmetric case) with a dense vector is the core of sparse iterative methods. In this paper, we present a new multithreaded algorithm for the symmetric case that potentially cuts the bandwidth requirements in half while exposing ample parallelism in practice. We also give a new data structure transformation, called bit-masked register blocks, which promises significant reductions in bandwidth requirements by reducing the number of indexing elements without introducing additional fill-in zeros. Our work shows how to incorporate this transformation into existing parallel algorithms (both symmetric and unsymmetric) without limiting their parallel scalability. Experimental results indicate that the combined benefits of bit-masked register blocks and the new symmetric algorithm can be as high as a factor of 3.5 in multicore performance over an already scalable parallel approach. We also provide a model that accurately predicts the performance of the new methods, showing that even larger performance gains are expected on future multicore systems as current trends (a decreasing byte:flop ratio and larger sparse matrices) continue.
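
The bandwidth saving in the symmetric case comes from reading each stored nonzero once and using it twice. A minimal serial sketch in Python/SciPy; the paper's contribution is achieving this with multithreaded scalability, which the naive loop below lacks.

    import numpy as np
    import scipy.sparse as sp

    def symmetric_spmv(U, x):
        """y = A @ x where A = U + U.T - diag(U); U holds the upper triangle."""
        y = np.zeros(U.shape[0])
        Ucoo = U.tocoo()
        for i, j, a in zip(Ucoo.row, Ucoo.col, Ucoo.data):
            y[i] += a * x[j]          # contribution of a_ij
            if i != j:
                y[j] += a * x[i]      # reuse the same nonzero for a_ji
        return y

    A = sp.random(8, 8, density=0.3, format='csr')
    A = A + A.T                       # make it symmetric
    U = sp.triu(A).tocsr()            # upper triangle, diagonal included
    x = np.random.rand(8)
    assert np.allclose(symmetric_spmv(U, x), A @ x)

Streaming only the upper triangle halves the matrix traffic; the difficulty, which the paper addresses, is that the y[j] update creates write conflicts between threads.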


International Parallel and Distributed Processing Symposium | 2008

On the representation and multiplication of hypersparse matrices

Aydin Buluç; John R. Gilbert

Multicore processors are marking the beginning of a new era of computing in which massive parallelism is available and necessary. Slightly slower but easy-to-parallelize kernels are becoming more valuable than sequentially faster kernels that do not scale when parallelized. In this paper, we focus on the multiplication of sparse matrices (SpGEMM). We first present the issues with existing sparse matrix representations and multiplication algorithms that make them unscalable to thousands of processors. Then, we develop and analyze two new algorithms that overcome these limitations. We consider our algorithms first as the sequential kernel of a scalable parallel sparse matrix multiplication algorithm, and second as part of a polyalgorithm for SpGEMM that would execute different kernels depending on the sparsity of the input matrices. Such a sequential kernel requires a new data structure that exploits the hypersparsity of the individual submatrices owned by a single processor after 2D partitioning. We experimentally evaluate the performance and characteristics of our algorithms and show that they scale significantly better than existing kernels.
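
The hypersparse regime (nnz far below the dimension n) is where conventional formats break down: plain CSC alone spends O(n) space on column pointers. Below is a small Python/SciPy sketch of a doubly compressed layout in the spirit of the paper's data structure, with illustrative field names rather than the paper's exact definition.

    import numpy as np
    import scipy.sparse as sp

    def to_dcsc(A):
        """Return (JC, CP, IR, NUM): ids of nonempty columns, their pointer
        array, and the row indices / values of the nonzeros."""
        A = A.tocsc()
        nnz_per_col = np.diff(A.indptr)
        jc = np.flatnonzero(nnz_per_col)               # nonempty columns only
        cp = np.concatenate(([0], np.cumsum(nnz_per_col[jc])))
        return jc, cp, A.indices, A.data

    # 4 x 8 matrix in which only columns 2 and 5 are nonempty
    A = sp.csc_matrix(([1.0, 2.0, 3.0], ([0, 3, 1], [2, 2, 5])), shape=(4, 8))
    jc, cp, ir, num = to_dcsc(A)
    # jc = [2 5], cp = [0 2 3]: 2*nzc + 1 pointer entries instead of n + 1

Space now scales with the number of nonempty columns rather than with n, which is what makes the submatrices produced by 2D partitioning affordable to store and multiply.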

Collaboration


Dive into Aydin Buluç's collaborations.

Top Co-Authors

Ariful Azad, Lawrence Berkeley National Laboratory
Leonid Oliker, Lawrence Berkeley National Laboratory
Katherine A. Yelick, Lawrence Berkeley National Laboratory
Evangelos Georganas, Lawrence Berkeley National Laboratory
Samuel Williams, Lawrence Berkeley National Laboratory
Jeremy Kepner, Massachusetts Institute of Technology
Adam Lugowski, University of California
Carl Yang, University of California