Alan B. Williams
Sandia National Laboratories
Publications
Featured research published by Alan B. Williams.
ACM Transactions on Mathematical Software | 2005
Michael A. Heroux; Roscoe A. Bartlett; Vicki E. Howle; Robert J. Hoekstra; Jonathan Joseph Hu; Tamara G. Kolda; Richard B. Lehoucq; Kevin R. Long; Roger P. Pawlowski; Eric Todd Phipps; Andrew G. Salinger; Heidi K. Thornquist; Ray S. Tuminaro; James M. Willenbring; Alan B. Williams; Kendall S. Stanley
The Trilinos Project is an effort to facilitate the design, development, integration, and ongoing support of mathematical software libraries within an object-oriented framework for the solution of large-scale, complex multiphysics engineering and scientific problems. Trilinos addresses two fundamental issues of developing software for these problems: (i) providing a streamlined process and set of tools for development of new algorithmic implementations and (ii) promoting interoperability of independently developed software. Trilinos uses a two-level software structure designed around collections of packages. A Trilinos package is an integral unit usually developed by a small team of experts in a particular algorithms area such as algebraic preconditioners, nonlinear solvers, etc. Packages exist underneath the Trilinos top level, which provides a common look-and-feel, including configuration, documentation, licensing, and bug-tracking. Here we present the overall Trilinos design, describing our use of abstract interfaces and default concrete implementations. We discuss the services that Trilinos provides to a prospective package and how these services are used by various packages. We also illustrate how packages can be combined to rapidly develop new algorithms. Finally, we discuss how Trilinos facilitates high-quality software engineering practices that are increasingly required from simulation software.
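The abstract-interface/default-implementation design mentioned above can be sketched in a few lines of C++. This is only an illustration of the pattern, not the Trilinos API; the Operator, DenseOperator, and richardson names are invented for this example.

```cpp
#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

// Hypothetical abstract interface (illustrative only, not the Trilinos API):
// a solver package programs against this, never against concrete storage.
struct Operator {
  virtual ~Operator() = default;
  virtual void apply(const std::vector<double>& x, std::vector<double>& y) const = 0;  // y = A*x
  virtual std::size_t dim() const = 0;
};

// Default concrete implementation: a dense row-major matrix.
class DenseOperator : public Operator {
 public:
  DenseOperator(std::size_t n, std::vector<double> a) : n_(n), a_(std::move(a)) {}
  void apply(const std::vector<double>& x, std::vector<double>& y) const override {
    y.assign(n_, 0.0);
    for (std::size_t i = 0; i < n_; ++i)
      for (std::size_t j = 0; j < n_; ++j)
        y[i] += a_[i * n_ + j] * x[j];
  }
  std::size_t dim() const override { return n_; }
 private:
  std::size_t n_;
  std::vector<double> a_;
};

// A "package-level" algorithm written only against the abstract interface:
// damped Richardson iteration for A x = b.
std::vector<double> richardson(const Operator& A, const std::vector<double>& b,
                               double omega, int iters) {
  std::vector<double> x(A.dim(), 0.0), Ax(A.dim());
  for (int k = 0; k < iters; ++k) {
    A.apply(x, Ax);
    for (std::size_t i = 0; i < A.dim(); ++i)
      x[i] += omega * (b[i] - Ax[i]);
  }
  return x;
}

int main() {
  // 2x2 SPD system: A = [[2,1],[1,2]], b = [3,3]; exact solution is (1,1).
  DenseOperator A(2, {2.0, 1.0, 1.0, 2.0});
  std::vector<double> x = richardson(A, {3.0, 3.0}, 0.3, 200);
  std::printf("x = (%f, %f)\n", x[0], x[1]);  // expect approximately (1, 1)
  return 0;
}
```

Written this way, the richardson routine never sees the matrix storage, so a different package could supply a sparse, distributed, or matrix-free operator without the solver changing.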
Archive | 2009
Michael A. Heroux; Douglas W. Doerfler; Paul S. Crozier; James M. Willenbring; H. Carter Edwards; Alan B. Williams; Mahesh Rajan; Eric R. Keiter; Heidi K. Thornquist; Robert W. Numrich
Application performance is determined by a combination of many choices: hardware platform, runtime environment, languages and compilers used, algorithm choice and implementation, and more. In this complicated environment, we find that the use of mini-applications - small self-contained proxies for real applications - is an excellent approach for rapidly exploring the parameter space of all these choices. Furthermore, use of mini-applications enriches the interaction between application, library and computer system developers by providing explicit functioning software and concrete performance results that lead to detailed, focused discussions of design trade-offs, algorithm choices and runtime performance issues. In this paper we discuss a collection of mini-applications and demonstrate how we use them to analyze and improve application performance on new and future computer platforms.
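To make the idea concrete, a mini-application typically isolates one performance-critical kernel in a small, self-contained program that can be rebuilt and re-timed across compilers and platforms. The sketch below is a hedged illustration in that spirit, not one of the report's mini-applications; the matrix, problem size, and timing harness are invented for the example.

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

// Illustrative proxy kernel: CSR sparse matrix-vector product, the kind of
// loop a mini-app isolates so it can be re-timed on new platforms.
struct CSRMatrix {
  std::vector<int> row_ptr, col_idx;
  std::vector<double> vals;
  int nrows = 0;
};

// Build a 1-D Laplacian stencil matrix as a stand-in for a real mesh problem.
CSRMatrix make_laplacian(int n) {
  CSRMatrix A; A.nrows = n; A.row_ptr.push_back(0);
  for (int i = 0; i < n; ++i) {
    if (i > 0)     { A.col_idx.push_back(i - 1); A.vals.push_back(-1.0); }
    A.col_idx.push_back(i); A.vals.push_back(2.0);
    if (i < n - 1) { A.col_idx.push_back(i + 1); A.vals.push_back(-1.0); }
    A.row_ptr.push_back(static_cast<int>(A.col_idx.size()));
  }
  return A;
}

void spmv(const CSRMatrix& A, const std::vector<double>& x, std::vector<double>& y) {
  for (int i = 0; i < A.nrows; ++i) {
    double sum = 0.0;
    for (int k = A.row_ptr[i]; k < A.row_ptr[i + 1]; ++k)
      sum += A.vals[k] * x[A.col_idx[k]];
    y[i] = sum;
  }
}

int main() {
  const int n = 1 << 20, reps = 100;
  CSRMatrix A = make_laplacian(n);
  std::vector<double> x(n, 1.0), y(n, 0.0);
  auto t0 = std::chrono::steady_clock::now();
  for (int r = 0; r < reps; ++r) spmv(A, x, y);
  auto t1 = std::chrono::steady_clock::now();
  std::printf("spmv time: %.3f s\n", std::chrono::duration<double>(t1 - t0).count());
  return 0;
}
```

A real mini-app adds the surrounding pieces that matter for performance fidelity (MPI halo exchanges, realistic sparsity, assembly) while staying small enough that design trade-offs can be discussed around concrete numbers.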
Parallel, Distributed and Network-Based Processing | 2010
Christopher G. Baker; Michael A. Heroux; H. Carter Edwards; Alan B. Williams
Multicore nodes have become ubiquitous in just a few years. At the same time, writing portable parallel software for multicore nodes is extremely challenging. Widely available programming models such as OpenMP and Pthreads are not useful for devices such as graphics cards, and more flexible programming models such as RapidMind are only available commercially. OpenCL represents the first truly portable standard, but its availability is limited. In the presence of such transition, we have developed a minimal application programming interface (API) for multicore nodes that allows us to write portable parallel linear algebra software that can use any of the aforementioned programming models and any future standard models. We utilize C++ template meta-programming to enable users to write parallel kernels that can be executed on a variety of node types, including Cell, GPUs and multicore CPUs. The support for a parallel node is provided by implementing a Node object, according to the requirements specified by the API. This ability to provide custom support for particular node types gives developers a level of control not allowed by the current slate of proprietary parallel programming APIs. We demonstrate implementations of the API for a simple vector dot-product on sequential CPU, multicore CPU and GPU nodes.
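The programming model described above can be sketched as a reduction kernel written once and executed by interchangeable node back-ends. The names below (SerialNode, OpenMPNode, DotKernel, parallel_reduce) are illustrative stand-ins, not the actual API from the paper.

```cpp
#include <cstdio>
#include <vector>

// Sketch of a minimal "Node" abstraction: a kernel is a functor exposing
// identity(), generate(i), and reduce(); each node type decides how to run it.
struct SerialNode {
  template <class Kernel>
  static double parallel_reduce(int n, Kernel k) {
    double result = k.identity();
    for (int i = 0; i < n; ++i) result = k.reduce(result, k.generate(i));
    return result;
  }
};

#ifdef _OPENMP
// An OpenMP-backed node runs the same kernel across threads; this sketch
// assumes the kernel's reduce operation is addition.
struct OpenMPNode {
  template <class Kernel>
  static double parallel_reduce(int n, Kernel k) {
    double result = k.identity();
    #pragma omp parallel for reduction(+ : result)
    for (int i = 0; i < n; ++i) result += k.generate(i);
    return result;
  }
};
#endif

// The user-level kernel: a vector dot product written once, independent of
// the node type that will execute it.
struct DotKernel {
  const double* x; const double* y;
  double identity() const { return 0.0; }
  double generate(int i) const { return x[i] * y[i]; }
  double reduce(double a, double b) const { return a + b; }
};

int main() {
  std::vector<double> x(1000, 2.0), y(1000, 3.0);
  DotKernel k{x.data(), y.data()};
  double dot = SerialNode::parallel_reduce(static_cast<int>(x.size()), k);
  std::printf("dot = %f\n", dot);  // expect 6000
  return 0;
}
```

The point of the pattern is that DotKernel is written once; supplying a GPU- or thread-pool-backed node with the same parallel_reduce entry point requires no change to user kernels.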
IEEE International Conference on High Performance Computing Data and Analytics | 2011
Richard F. Barrett; Michael A. Heroux; Paul Lin; Alan B. Williams
Application performance is determined by a combination of many choices: hardware platform, runtime environment, languages and compilers used, algorithm choice and implementation, and more. In this complicated environment, we find that the use of mini-applications - small self-contained proxies for real applications - is an excellent approach for rapidly exploring the parameter space of all these choices. Furthermore, use of mini-applications enriches the interaction between application, library and computer system developers by providing explicit functioning software and concrete performance results that lead to detailed, focused discussions of design trade-offs, algorithm choices and runtime performance issues. In this poster we discuss a collection of mini-applications and demonstrate how we use them to analyze and improve application performance on new and future computer platforms.
SIAM Journal on Scientific Computing | 1998
Kevin Burrage; Jocelyne Erhel; Bert Pohl; Alan B. Williams
Iterative methods for solving linear systems of equations can be very efficient if the structure of the coefficient matrix can be exploited to accelerate the convergence of the iterative process. However, for classes of problems for which suitable preconditioners cannot be found or for which the iteration scheme does not converge, iterative techniques may be inappropriate. This paper proposes a technique for deflating the eigenvalues and associated eigenvectors of the iteration matrix which either slow down convergence or cause divergence. This process is completely general and works by approximating the eigenspace $\mathbb{P}$ corresponding to the unstable or slowly converging modes and then applying a coupled iteration scheme on $\mathbb{P}$ and its orthogonal complement $\mathbb{Q}$.
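A minimal sketch of the deflation idea, written for the symmetric case where the coupled scheme decouples; this is an illustration under our own notation and assumptions, not the paper's exact algorithm.

```latex
% Illustrative sketch only (requires amsmath, amssymb): the symmetric case,
% where the coupled scheme decouples; the paper treats the general setting.
\begin{align*}
  &\text{Stationary iteration: } x_{k+1} = G x_k + f, \qquad G = G^{T},\\
  &P = [\, p_1 \;\cdots\; p_m \,] \text{ orthonormal with } G P = P \Lambda,
     \quad \Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_m)
     \text{ (slow or divergent modes)},\\
  &\text{write } x = P z + w, \quad P^{T} w = 0. \text{ Then}\\
  &\qquad \text{on } \mathbb{P}: \quad (I - \Lambda)\, z = P^{T} f
     \quad \text{(small system, solved directly rather than iterated)},\\
  &\qquad \text{on } \mathbb{Q}: \quad w_{k+1} = G\, w_k + (I - P P^{T})\, f
     \quad \text{(converges at the rate of the remaining spectrum)}.
\end{align*}
```

The practical gain is that the few troublesome eigenvalues are handled by a small direct solve, while the iteration on the complement converges at the rate set by the remaining, better-behaved part of the spectrum.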
Archive | 2010
David G. Baur; Harold Carter Edwards; William K. Cochran; Alan B. Williams; Gregory D. Sjaardema
Other Information: PBD: 1 Apr 1999 | 1999
Alan B. Williams; Benjamin A. Allan; Kyran D. Mish; Robert L. Clay
Numerical Algorithms | 2008
Roger B. Sidje; Alan B. Williams; Kevin Burrage
Concurrency and Computation: Practice and Experience | 2015
Paul Lin; Michael A. Heroux; Richard F. Barrett; Alan B. Williams
Other Information: PBD: 1 Apr 1999 | 1999
Alan B. Williams; Ivan J. Otero; Kyran D. Mish; Lee M. Taylor; Robert L. Clay