Publications


Featured research published by S. Hammarling.


Conference on High Performance Computing (Supercomputing) | 1990

LAPACK: a portable linear algebra library for high-performance computers

E. Anderson; Z. Bai; Jack J. Dongarra; A. Greenbaum; A. McKenney; J. Du Croz; S. Hammarling; James Demmel; Christian H. Bischof; Danny C. Sorensen

The goal of the LAPACK project is to design and implement a portable linear algebra library for efficient use on a variety of high-performance computers. The library is based on the widely used LINPACK and EISPACK packages for solving linear equations, eigenvalue problems, and linear least-squares problems, but extends their functionality in a number of ways. The major methodology for making the algorithms run faster is to restructure them to perform block matrix operations (e.g., matrix-matrix multiplication) in their inner loops. These block operations may be optimized to exploit the memory hierarchy of a specific architecture. The LAPACK project is also working on new algorithms that yield higher relative accuracy for a variety of linear algebra problems.
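The central idea of the abstract, restructuring inner loops around block matrix operations so that each block is reused while it sits in fast memory, can be shown with a short sketch. The Python/NumPy fragment below is not taken from LAPACK; it is a minimal, hypothetical illustration of a blocked matrix multiply, and the block size is arbitrary.

```python
import numpy as np

def blocked_matmul(A, B, block=64):
    """Minimal sketch of block-oriented restructuring: the innermost update is a
    small matrix-matrix product, so each block of A and B is reused many times
    while it is resident in cache (the block size is purely illustrative)."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i in range(0, n, block):
        for j in range(0, m, block):
            for p in range(0, k, block):
                C[i:i+block, j:j+block] += A[i:i+block, p:p+block] @ B[p:p+block, j:j+block]
    return C

# NumPy slicing clips at the array edges, so sizes that are not multiples of the block work too.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((130, 70)), rng.standard_normal((70, 90))
assert np.allclose(blocked_matmul(A, B), A @ B)
```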


Conference on High Performance Computing (Supercomputing) | 1996

ScaLAPACK: A Portable Linear Algebra Library for Distributed Memory Computers - Design Issues and Performance

L. S. Blackford; Jaeyoung Choi; A. Cleary; Antoine Petitet; R. C. Whaley; James Demmel; Inderjit S. Dhillon; K. Stanley; Jack J. Dongarra; S. Hammarling; Greg Henry; David W. Walker

This paper outlines the content and performance of ScaLAPACK, a collection of mathematical software for linear algebra computations on distributed memory computers. The importance of developing standards for computational and message passing interfaces is discussed. We present the different components and building blocks of ScaLAPACK, and indicate the difficulties inherent in producing correct codes for networks of heterogeneous processors. Finally, this paper briefly describes future directions for the ScaLAPACK library and concludes by suggesting alternative approaches to mathematical libraries, explaining how ScaLAPACK could be integrated into efficient and user-friendly distributed systems.
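One of the building blocks the paper refers to is ScaLAPACK's data layout: matrices are spread over a two-dimensional process grid in a block-cyclic fashion. The sketch below (Python, not part of ScaLAPACK) shows the one-dimensional mapping from a global index to an owning process and local index; the block size and process count are illustrative.

```python
def block_cyclic_owner(i, nb, p):
    """Map a global 0-based index i to (owning process, local index) under a
    1-D block-cyclic distribution with block size nb over p processes.
    ScaLAPACK applies this mapping independently to rows and columns,
    which yields its 2-D block-cyclic layout."""
    block = i // nb           # which block the index falls in
    return block % p, (block // p) * nb + i % nb

# Example: 10 indices, block size 2, 3 processes; blocks are dealt out
# round-robin, so every process receives pieces of every region of the matrix.
print([block_cyclic_owner(i, nb=2, p=3) for i in range(10)])
```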


SIAM Conference on Parallel Processing for Scientific Computing | 1987

A proposal for a set of level 3 basic linear algebra subprograms

Jack J. Dongarra; Jeremy Du Croz; Iain S. Duff; S. Hammarling

This paper describes a proposal for Level 3 Basic Linear Algebra Subprograms (Level 3 BLAS). The Level 3 BLAS are targeted at matrix-matrix operations with the aim of providing more efficient, but portable, implementations of algorithms on high-performance computers, especially those with hierarchical memory and parallel processing capability.
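The prototypical Level 3 BLAS operation is GEMM, the general matrix-matrix multiply-and-accumulate C := alpha*A*B + beta*C. As a hedged illustration, the call below goes through SciPy's wrapper of the double-precision routine rather than the Fortran 77 interface the proposal defines.

```python
import numpy as np
from scipy.linalg.blas import dgemm   # SciPy wrapper around the Level 3 BLAS routine DGEMM

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5))

# C := alpha*A*B (beta defaults to 0, so no existing C is accumulated).
C = dgemm(alpha=1.0, a=A, b=B)
assert np.allclose(C, A @ B)
```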


ACM Transactions on Mathematical Software | 1997

Practical experience in the numerical dangers of heterogeneous computing

L. S. Blackford; A. Cleary; Antoine Petitet; R. C. Whaley; James Demmel; Inderjit S. Dhillon; Huan Ren; K. Stanley; Jack J. Dongarra; S. Hammarling

Special challenges exist in writing reliable numerical library software for heterogeneous computing environments. Although a lot of software for distributed-memory parallel computers has been written, porting this software to a network of workstations requires careful consideration. The symptoms of heterogeneous computing failures can range from erroneous results without warning to deadlock. Some of the problems are straightforward to solve, but for others the solutions are not so obvious, or incur an unacceptable overhead. Making software robust on heterogeneous systems often requires additional communication. We describe and illustrate the problems encountered during the development of ScaLAPACK and the NAG Numerical PVM Library. Where possible, we suggest ways to avoid potential pitfalls, or if that is not possible, we recommend that the software not be used on heterogeneous networks.
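A concrete, purely hypothetical instance of the failure mode described above: two processes that evaluate the "same" floating-point expression with a different rounding or summation order can disagree on a data-dependent branch, and in a message-passing code that disagreement can leave processes waiting on each other. The Python sketch below shows only the arithmetic side; the remedy the abstract points to, additional communication so that every process tests one agreed-upon value, is not shown.

```python
import numpy as np

# Two orderings of the same float32 sum, standing in for two machines that
# round differently or reduce in a different order.
x, y = np.float32(1e8), np.float32(1.0)
result_machine_a = (x + y) - x     # y is lost to rounding: 0.0
result_machine_b = (x - x) + y     # y survives exactly:    1.0

tol = np.float32(0.5)
# A convergence test like this sends the two "machines" down different branches;
# in an iterative message-passing solver one process would stop communicating
# while the other keeps waiting, i.e. deadlock.
print(result_machine_a < tol, result_machine_b < tol)   # True False
```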


Archive | 1992

LAPACK Users' Guide: Release 1.0

E. Anderson; Zhaojun Bai; Christian H. Bischof; James Demmel; Jack J. Dongarra; J. Du Croz; A. Greenbaum; S. Hammarling; A. McKenney; Susan Ostrouchov; Danny C. Sorensen

LAPACK is a transportable library of Fortran 77 subroutines for solving the most common problems in numerical linear algebra: systems of linear equations, linear least squares problems, eigenvalue problems and singular value problems. LAPACK is designed to supersede LINPACK and EISPACK, principally by restructuring the software to achieve much greater efficiency on vector processors, high-performance "superscalar" workstations, and shared-memory multiprocessors. LAPACK also adds extra functionality, uses some new or improved algorithms, and integrates the two sets of algorithms into a unified package. The LAPACK Users' Guide gives an informal introduction to the design of the algorithms and software, summarizes the contents of the package, describes conventions used in the software and documentation, and includes complete specifications for calling the routines. This edition of the Users' Guide describes Release 1.0 of LAPACK.
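For a sense of the calling conventions the Users' Guide documents, here is a hedged example of invoking one LAPACK driver routine, DGESV (solve A x = b by LU factorization with partial pivoting). It uses SciPy's low-level wrapper rather than the Fortran 77 interface the guide specifies, so argument handling differs in detail from the guide's own calling sequences.

```python
import numpy as np
from scipy.linalg.lapack import dgesv   # wrapper around the LAPACK driver DGESV

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# DGESV factorizes A = P*L*U and solves A x = b; info reports success or failure.
lu, piv, x, info = dgesv(A, b)
assert info == 0                     # info > 0 would indicate a singular U factor
assert np.allclose(A @ x, b)
```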


Computers & Mathematics with Applications | 1998

Key concepts for parallel out-of-core LU factorization

Jack J. Dongarra; S. Hammarling; David W. Walker

This paper considers key ideas in the design of out-of-core dense LU factorization routines. A left-looking variant of the LU factorization algorithm is shown to require less I/O to disk than the right-looking variant, and is used to develop a parallel, out-of-core implementation. This implementation makes use of a small library of parallel I/O routines, together with ScaLAPACK and PBLAS routines. Results for runs on an Intel Paragon are presented and interpreted using a simple performance model.
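To make the left-looking access pattern concrete, here is a small in-core Python/NumPy sketch, with no pivoting and no disk I/O, so it is only a schematic of the algorithm the paper builds on, not the paper's implementation. The point is the data flow: each column panel is updated by reading the panels already factored to its left, then factored once and never touched again, which is why a true out-of-core version writes each panel to disk only once.

```python
import numpy as np
from scipy.linalg import solve_triangular

def left_looking_lu(A, nb=2):
    """Left-looking blocked LU without pivoting (illustrative only).
    Returns L and U packed into one array; L's unit diagonal is implicit."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for j in range(0, n, nb):
        jb = min(nb, n - j)
        # "Look left": apply the updates owed by every previously factored panel.
        for k in range(0, j, nb):
            Lkk = A[k:k+nb, k:k+nb]                     # unit lower-triangular block
            A[k:k+nb, j:j+jb] = solve_triangular(
                Lkk, A[k:k+nb, j:j+jb], lower=True, unit_diagonal=True)
            A[k+nb:, j:j+jb] -= A[k+nb:, k:k+nb] @ A[k:k+nb, j:j+jb]
        # Factor the current panel in place (unblocked, restricted to its columns).
        for p in range(j, j + jb):
            A[p+1:, p] /= A[p, p]
            A[p+1:, p+1:j+jb] -= np.outer(A[p+1:, p], A[p, p+1:j+jb])
    return A

# Quick check on a small matrix that needs no pivoting.
M = np.array([[4., 3., 2., 1.],
              [6., 7., 4., 3.],
              [8., 14., 17., 9.],
              [2., 8., 14., 16.]])
F = left_looking_lu(M, nb=2)
L = np.tril(F, -1) + np.eye(4)
U = np.triu(F)
assert np.allclose(L @ U, M)
```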


Parallel Computing | 1996

Practical Experience in the Dangers of Heterogeneous Computing

Andrew J. Cleary; James Demmel; Inderjit Dhillon; Jack J. Dongarra; S. Hammarling; Antoine Petitet; H. Ren; Ken Stanley; R. Clinton Whaley

Special challenges exist in writing reliable numerical library software for heterogeneous computing environments. Although a lot of software for distributed memory parallel computers has been written, porting this software to a network of workstations requires careful consideration. The symptoms of heterogeneous computing failures can range from erroneous results without warning to deadlock. Some of the problems are straightforward to solve, but for others the solutions are not so obvious, or incur an unacceptable overhead. Making software robust on heterogeneous systems often requires additional communication.


Archive | 1989

A project for developing a linear algebra library for high-performance computers

J. Demmel; Jack J. Dongarra; Jeremy DuCroz; A. Greenbaum; S. Hammarling; Danny C. Sorensen

Argonne National Laboratory, the Courant Institute for Mathematical Sciences, and the Numerical Algorithms Group, Ltd., are developing a transportable linear algebra library in Fortran 77. The library is intended to provide a uniform set of subroutines to solve the most common linear algebra problems and to run efficiently on a wide range of high-performance computers.


Proceedings of the IFIP TC2/WG2.5 Working Conference on Quality of Numerical Software: Assessment and Enhancement | 1997

Case studies on the development of ScaLAPACK and the NAG Numerical PVM Library

Jack J. Dongarra; S. Hammarling; Antoine Petitet

In this paper we look at the development of ScaLAPACK, a software library for dense and banded numerical linear algebra, and the NAG Numerical PVM Library, which includes software for dense and sparse linear algebra, quadrature, optimization and random number generation. Both libraries are aimed at distributed memory machines, including networks of workstations. The paper concentrates on the underlying design and the testing of the libraries.


Archive | 1988

LAPACK Working Note #5: Provisional Contents

Chris Bischof; James Demmel; Jack Dongarra; Jeremy Du Croz; A. Greenbaum; S. Hammarling; Danny C. Sorensen

Collaboration


Dive into S. Hammarling's collaborations.

Top Co-Authors

James Demmel
University of California

J. Du Croz
Numerical Algorithms Group

E. Anderson
University of Tennessee

Jerzy Waśniewski
Technical University of Denmark

Vincent A. Barker
Technical University of Denmark

Christian H. Bischof
Technische Universität Darmstadt