
Publication


Featured research published by Robert Heaphy.


Computing in Science and Engineering | 2002

Zoltan data management services for parallel dynamic applications

Karen Dragon Devine; Erik G. Boman; Robert Heaphy; Bruce Hendrickson

The Zoltan library is a collection of data management services for parallel, unstructured, adaptive, and dynamic applications, available as open-source software from www.cs.sandia.gov/zoltan. It simplifies the load-balancing, data-movement, unstructured-communication, and memory-usage difficulties that arise in dynamic applications such as adaptive finite-element methods, particle methods, and crash simulations. Zoltan's data-structure-neutral design also lets a wide range of applications use it without imposing restrictions on application data structures. Its object-based interface provides a simple and inexpensive way for application developers to use the library and for researchers to make new capabilities available under a common interface.
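The "data-structure-neutral, object-based" design described in this abstract can be illustrated with a minimal sketch: the balancer sees the application's objects only through registered callbacks, never its data structures. All names below are hypothetical, chosen for illustration; this is not Zoltan's real C API, and the greedy heuristic stands in for Zoltan's actual algorithms.

```python
# Hypothetical sketch of a data-structure-neutral, callback-based balancer,
# illustrating the design idea behind Zoltan's object-based interface.
# Names and the greedy heuristic are illustrative only.

def balance(num_parts, num_obj_fn, obj_weight_fn):
    """Assign each application object to a part using only callbacks,
    so the balancer never sees the application's data structures."""
    weights = [(obj, obj_weight_fn(obj)) for obj in range(num_obj_fn())]
    weights.sort(key=lambda ow: -ow[1])        # heaviest objects first
    loads = [0.0] * num_parts
    assignment = {}
    for obj, w in weights:                     # greedy: lightest part wins
        part = loads.index(min(loads))
        assignment[obj] = part
        loads[part] += w
    return assignment

# The application exposes its mesh only through the two callbacks:
mesh_weights = [5.0, 1.0, 1.0, 4.0, 2.0, 3.0]
plan = balance(2,
               num_obj_fn=lambda: len(mesh_weights),
               obj_weight_fn=lambda i: mesh_weights[i])
```

Because the library touches the mesh only through `num_obj_fn` and `obj_weight_fn`, the same balancer works unchanged for finite-element meshes, particle sets, or matrix rows.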


international parallel and distributed processing symposium | 2006

Parallel hypergraph partitioning for scientific computing

Karen Dragon Devine; Erik G. Boman; Robert Heaphy; Rob H. Bisseling

Graph partitioning is often used for load balancing in parallel computing, but it is known that hypergraph partitioning has several advantages. First, hypergraphs more accurately model communication volume, and second, they are more expressive and can better represent nonsymmetric problems. Hypergraph partitioning is particularly suited to parallel sparse matrix-vector multiplication, a common kernel in scientific computing. We present a parallel software package for hypergraph (and sparse matrix) partitioning developed at Sandia National Labs. The algorithm is a variation on multilevel partitioning. Our parallel implementation is novel in that it uses a two-dimensional data distribution among processors. We present empirical results that show our parallel implementation achieves good speedup on several large problems (up to 33 million nonzeros) with up to 64 processors on a Linux cluster.
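The claim that hypergraphs model communication volume exactly can be made concrete with the standard connectivity-minus-one cut metric: each hyperedge (net) spanning k parts contributes k-1 units of communication. The toy sketch below computes that metric for a given partition; it is illustrative only, not the parallel Sandia package.

```python
# Connectivity-1 cut metric for a hypergraph partition: each hyperedge
# (net) spanning k parts contributes k-1 units of communication volume.
# Toy data for illustration -- not the parallel Sandia partitioner.

def comm_volume(hyperedges, part):
    vol = 0
    for net in hyperedges:
        parts_touched = {part[v] for v in net}
        vol += len(parts_touched) - 1
    return vol

# Hyperedges over vertices 0..5 (e.g. each net lists the vertices a
# sparse-matrix row touches):
nets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
vol = comm_volume(nets, part)   # nets {2,3} and {0,5} each span 2 parts: vol == 2
```

Unlike the graph edge-cut, which only approximates communication, this metric counts exactly one message per extra part a net touches, which is why it matches the communication volume of parallel sparse matrix-vector multiplication.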


international parallel and distributed processing symposium | 2007

Hypergraph-based Dynamic Load Balancing for Adaptive Scientific Computations

Erik G. Boman; Karen Dragon Devine; Doruk Bozdag; Robert Heaphy; Lee Ann Riesen

Adaptive scientific computations require periodic repartitioning (dynamic load balancing) to maintain load balance. Hypergraph partitioning is a successful model for minimizing communication volume in scientific computations, and partitioning software for the static case is widely available. In this paper, we present a new hypergraph model for the dynamic case, where we minimize the sum of communication in the application plus the migration cost to move data, thereby reducing total execution time. The new model can be solved using hypergraph partitioning with fixed vertices. We describe an implementation of a parallel multilevel repartitioning algorithm within the Zoltan load-balancing toolkit, which to our knowledge is the first code for dynamic load balancing based on hypergraph partitioning. Finally, we present experimental results that demonstrate the effectiveness of our approach on a Linux cluster with up to 64 processors. Our new algorithm compares favorably to the widely used ParMETIS partitioning software in terms of quality, and would have reduced total execution time in most of our test cases.


international parallel and distributed processing symposium | 2009

A repartitioning hypergraph model for dynamic load balancing

Erik G. Boman; Karen Dragon Devine; Doruk Bozdag; Robert Heaphy; Lee Ann Riesen

In parallel adaptive applications, the computational structure of the applications changes over time, leading to load imbalances even though the initial load distributions were balanced. To restore balance and to keep communication volume low in further iterations of the applications, dynamic load balancing (repartitioning) of the changed computational structure is required. Repartitioning differs from static load balancing (partitioning) due to the additional requirement of minimizing migration cost to move data from an existing partition to a new partition. In this paper, we present a novel repartitioning hypergraph model for dynamic load balancing that accounts for both communication volume in the application and migration cost to move data, in order to minimize the overall cost. The use of a hypergraph-based model allows us to accurately model communication costs rather than approximate them with graph-based models. We show that the new model can be realized using hypergraph partitioning with fixed vertices and describe our parallel multilevel implementation within the Zoltan load balancing toolkit. To the best of our knowledge, this is the first implementation for dynamic load balancing based on hypergraph partitioning. To demonstrate the effectiveness of our approach, we conducted experiments on a Linux cluster with 1024 processors. The results show that, in terms of reducing total cost, our new model compares favorably to the graph-based dynamic load balancing approaches, and multilevel approaches improve the repartitioning quality significantly.
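The objective in this repartitioning model, application communication volume plus data-migration cost, can be sketched numerically. Everything below is a toy illustration: the quantities, the `alpha` weighting between the two terms, and the function name are hypothetical, not taken from the Zoltan implementation.

```python
# Sketch of the repartitioning objective described above: total cost is
# application communication volume plus the cost of migrating data from
# the old partition to the new one. alpha weights migration against
# communication (a modeling choice; all values here are illustrative).

def repartition_cost(comm_volume, old_part, new_part, data_size, alpha=1.0):
    migration = sum(data_size[v] for v in old_part
                    if new_part[v] != old_part[v])
    return comm_volume + alpha * migration

old = {0: 0, 1: 0, 2: 1, 3: 1}
new = {0: 0, 1: 1, 2: 1, 3: 1}          # vertex 1 moves to part 1
size = {0: 10, 1: 4, 2: 7, 3: 3}
cost = repartition_cost(comm_volume=5, old_part=old, new_part=new,
                        data_size=size, alpha=0.5)   # 5 + 0.5*4 = 7.0
```

The trade-off the papers describe is visible here: a new partition with lower communication volume may still lose overall if it forces too much data to move, which is why the model minimizes the combined cost rather than communication alone.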


Archive | 2004

LDRD report: Parallel repartitioning for optimal solver performance.

Robert Heaphy; Karen Dragon Devine; Robert Preis; Bruce Hendrickson; Michael A. Heroux; Erik G. Boman

We have developed infrastructure, utilities and partitioning methods to improve data partitioning in linear solvers and preconditioners. Our efforts included incorporation of data repartitioning capabilities from the Zoltan toolkit into the Trilinos solver framework (allowing dynamic repartitioning of Trilinos matrices); implementation of efficient distributed data directories and unstructured communication utilities in Zoltan and Trilinos; development of a new multi-constraint geometric partitioning algorithm (which can generate one decomposition that is good with respect to multiple criteria); and research into hypergraph partitioning algorithms (which provide up to 56% reduction of communication volume compared to graph partitioning for a number of emerging applications). This report includes descriptions of the infrastructure and algorithms developed, along with results demonstrating the effectiveness of our approaches.


ieee international conference on high performance computing data and analytics | 2007

The Trilinos Software Lifecycle Model

James M. Willenbring; Michael A. Heroux; Robert Heaphy

The Trilinos Project is an effort to facilitate the design, development, integration and ongoing support of mathematical solver libraries. Efforts range from research and development of new algorithms to proof-of-concept of new and existing algorithms to eventual production use of solver libraries on a variety of computer systems across a broad set of applications. Software quality assurance and engineering (SQA/SQE) play an integral role in the project. Although many formal software lifecycle models exist, no single model can address all Trilinos developer needs, since our requirements for rigor change as a particular Trilinos package matures. In this report we present a three-phase promotional lifecycle model that closely matches the needs and realities of Trilinos development.


Proceedings of the 2007 workshop on Experimental computer science | 2007

EXACT: the experimental algorithmics computational toolkit

William E. Hart; Jonathan W. Berry; Robert Heaphy; Cynthia A. Phillips

In this paper, we introduce EXACT, the EXperimental Algorithmics Computational Toolkit. EXACT is a software framework for describing, controlling, and analyzing computer experiments. It provides the experimentalist with convenient software tools to ease and organize the entire experimental process, including the description of factors and levels, the design of experiments, the control of experimental runs, the archiving of results, and the analysis of results. As a case study for EXACT, we describe its interaction with FAST, the Sandia Framework for Agile Software Testing. EXACT and FAST now manage the nightly testing of several large software projects at Sandia. We also discuss EXACT's advanced features, which include a driver module that controls complex experiments such as comparisons of parallel algorithms.
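The factors-and-levels experimental design this abstract describes can be illustrated with a minimal full-factorial driver. This is a generic sketch under assumed names (`run_experiments`, the example factors); it is not EXACT's actual interface.

```python
# Minimal full-factorial experiment driver in the spirit of the
# factors/levels design described above. A generic sketch -- not
# EXACT's actual interface.
from itertools import product

def run_experiments(factors, trial_fn):
    """Run trial_fn once per combination of factor levels and
    collect (configuration, result) records."""
    names = sorted(factors)
    results = []
    for levels in product(*(factors[n] for n in names)):
        config = dict(zip(names, levels))
        results.append((config, trial_fn(config)))
    return results

# Hypothetical study: one trial per (method, thread-count) combination.
factors = {"threads": [1, 2, 4], "method": ["cg", "gmres"]}
records = run_experiments(
    factors,
    lambda cfg: f"ran {cfg['method']} on {cfg['threads']} threads")
```

With two factors at three and two levels, the driver produces all six configurations; archiving the `(config, result)` pairs is what makes the runs repeatable and analyzable afterwards.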


Applied Numerical Mathematics | 2005

New challenges in dynamic load balancing

Karen Dragon Devine; Erik G. Boman; Robert Heaphy; Bruce Hendrickson; James D. Teresco; Jamal Faik; Joseph E. Flaherty; Luis Gervasio


Archive | 2006

Parallel hypergraph partitioning for irregular problems.

Robert Heaphy; Karen Dragon Devine; Rob H. Bisseling; Erik G. Boman


Advanced Computational Infrastructures for Parallel and Distributed Adaptive Applications | 2009

Hypergraph-Based Dynamic Partitioning and Load Balancing

Doruk Bozdag; Erik G. Boman; Karen Dragon Devine; Robert Heaphy; Lee Ann Riesen

Collaboration


Dive into Robert Heaphy's collaborations.

Top Co-Authors

Erik G. Boman (Sandia National Laboratories)
Karen Dragon Devine (Sandia National Laboratories)
Bruce Hendrickson (Sandia National Laboratories)
Lee Ann Riesen (Sandia National Laboratories)
Cynthia A. Phillips (Sandia National Laboratories)
Michael A. Heroux (Sandia National Laboratories)