Sven Hammarling
Numerical Algorithms Group
Publications
Featured research published by Sven Hammarling.
ACM Transactions on Mathematical Software | 1990
Jack J. Dongarra; Jeremy Du Croz; Sven Hammarling; Iain S. Duff
This paper describes an extension to the set of Basic Linear Algebra Subprograms. The extensions are targeted at matrix-matrix operations, with the aim of providing efficient and portable implementations of algorithms for high-performance computers.
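The central Level 3 operation standardized here, the general matrix-matrix multiply (GEMM), is still the workhorse of dense linear algebra. As an illustrative sketch only (the paper specifies Fortran 77 subprograms; SciPy's low-level BLAS bindings, used here, merely wrap such routines):

```python
import numpy as np
from scipy.linalg.blas import dgemm  # double-precision general matrix-matrix multiply

# C := alpha*A*B, the core Level 3 BLAS operation (GEMM)
a = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([[5.0, 6.0],
              [7.0, 8.0]])
c = dgemm(alpha=1.0, a=a, b=b)
```

The result agrees with the plain matrix product; the point of the BLAS interface is that the same call can dispatch to a highly tuned implementation on each machine.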
Archive | 1997
L. S. Blackford; Jaeyoung Choi; Andrew J. Cleary; Eduardo F. D'Azevedo; James Demmel; Inderjit S. Dhillon; Jack J. Dongarra; Sven Hammarling; Greg Henry; Antoine Petitet; K. Stanley; David Walker; R. C. Whaley
The ScaLAPACK Users' Guide describes ScaLAPACK, a library of high-performance dense linear algebra routines for distributed-memory message-passing computers, designed as the distributed-memory counterpart of LAPACK.
ACM Transactions on Mathematical Software | 1988
Jack J. Dongarra; Jeremy Du Croz; Sven Hammarling; Richard J. Hanson
This paper describes an extension to the set of Basic Linear Algebra Subprograms. The extensions are targeted at matrix-vector operations that should provide for efficient and portable implementations of algorithms for high-performance computers.
ACM Transactions on Mathematical Software | 2002
L. Susan Blackford; Antoine Petitet; Roldan Pozo; Karin A. Remington; R. Clint Whaley; James Demmel; Jack J. Dongarra; Iain S. Duff; Sven Hammarling; Greg Henry; Michael A. Heroux; Linda Kaufman; Andrew Lumsdaine
L. Susan Blackford (Myricom, Inc.); James Demmel (University of California, Berkeley); Jack Dongarra (The University of Tennessee); Iain Duff (Rutherford Appleton Laboratory and CERFACS); Sven Hammarling (Numerical Algorithms Group, Ltd.); Greg Henry (Intel Corporation); Michael Heroux (Sandia National Laboratories); Linda Kaufman (William Paterson University); Andrew Lumsdaine (Indiana University); Antoine Petitet (Sun Microsystems); Roldan Pozo (National Institute of Standards and Technology); Karin Remington (The Center for the Advancement of Genomics); R. Clint Whaley (Florida State University)
ACM Transactions on Mathematical Software | 1988
Jack J. Dongarra; Jeremy Du Croz; Sven Hammarling; Richard J. Hanson
This paper describes a model implementation and test software for the Level 2 Basic Linear Algebra Subprograms (Level 2 BLAS). Level 2 BLAS are targeted at matrix-vector operations with the aim of providing more efficient, but portable, implementations of algorithms on high-performance computers. The model implementation provides a portable set of FORTRAN 77 Level 2 BLAS for machines where specialized implementations do not exist or are not required. The test software aims to verify that specialized implementations meet the specification of Level 2 BLAS and that implementations are correctly installed.
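The test-software idea, checking a possibly specialized routine against a straightforward reference, can be sketched in a few lines. Assuming NumPy and SciPy's BLAS bindings as stand-ins for a Fortran 77 installation (the naive_gemv helper is hypothetical, for illustration only):

```python
import numpy as np
from scipy.linalg.blas import dgemv  # double-precision matrix-vector multiply (Level 2 BLAS)

def naive_gemv(alpha, a, x, beta, y):
    # Straightforward reference for y := alpha*A*x + beta*y (hypothetical helper).
    return alpha * (a @ x) + beta * y

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 3))
x = rng.standard_normal(3)
y = rng.standard_normal(4)

ref = naive_gemv(2.0, a, x, 0.5, y)
out = dgemv(2.0, a, x, beta=0.5, y=y)
```

Comparing out against ref mirrors, in miniature, what the test software does to verify that an implementation meets the Level 2 BLAS specification.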
ACM Transactions on Mathematical Software | 1990
Jack J. Dongarra; Jeremy Du Croz; Sven Hammarling; Iain S. Duff
This paper describes a model implementation and test software for the Level 3 Basic Linear Algebra Subprograms (Level 3 BLAS). The Level 3 BLAS are targeted at matrix-matrix operations with the aim of providing more efficient, but portable, implementations of algorithms on high-performance computers. The model implementation provides a portable set of FORTRAN 77 Level 3 BLAS for machines where specialized implementations do not exist or are not required. The test software aims to verify that specialized implementations meet the specification of the Level 3 BLAS and that implementations are correctly installed.
Journal of Computational and Applied Mathematics | 1989
Jack J. Dongarra; Danny C. Sorensen; Sven Hammarling
In this paper we describe block algorithms for the reduction of a real symmetric matrix to tridiagonal form and for the reduction of a general real matrix to either bidiagonal or Hessenberg form using Householder transformations. The approach is to aggregate the transformations and to apply them in a blocked fashion, thus achieving algorithms that are rich in matrix-matrix operations. These reductions to condensed form typically comprise a preliminary step in the computation of eigenvalues or singular values. With this in mind, we also demonstrate how the initial reduction to tridiagonal or bidiagonal form may be pipelined with the divide and conquer technique for computing the eigensystem of a symmetric matrix or the singular value decomposition of a general matrix to achieve algorithms which are load balanced and rich in matrix-matrix operations.
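The condensed forms discussed here are available in standard libraries today, and their defining properties are easy to check numerically. A small sketch, assuming SciPy (scipy.linalg.hessenberg is used as a convenient stand-in, not the blocked routines the paper develops):

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(1)
a = rng.standard_normal((5, 5))

# H is upper Hessenberg, Q orthogonal, with A = Q H Q^T
h, q = hessenberg(a, calc_q=True)
```

The similarity transform reproduces A exactly (up to rounding), and every entry below the first subdiagonal of H is zero, which is what makes the subsequent eigenvalue iteration cheap.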
Linear Algebra and its Applications | 1983
Åke Björck; Sven Hammarling
A fast and stable method for computing the square root X of a given matrix A (X² = A) is developed. The method is based on the Schur factorization A = QSQ^H and uses a fast recursion to compute the upper triangular square root of S. It is shown that if α = ∥X∥²/∥A∥ is not large, then the computed square root is the exact square root of a matrix close to A. The method is extended to computing the cube root of A. Matrices exist for which the square root computed by the Schur method is ill conditioned, but which nonetheless have well-conditioned square roots. An optimization approach is suggested for computing the well-conditioned square roots in these cases.
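The triangular recursion at the core of the Schur method is compact enough to sketch directly. Assuming NumPy, and restricting to an upper-triangular S with positive diagonal so every division is well defined (the general complex case needs more care), a column-by-column version in the spirit of the fast recursion the abstract refers to:

```python
import numpy as np

def triangular_sqrt(s):
    # Upper-triangular square root U of an upper-triangular S with
    # positive diagonal: set the diagonal first, then solve each
    # superdiagonal entry from U @ U = S (illustrative sketch only).
    n = s.shape[0]
    u = np.zeros_like(s)
    for j in range(n):
        u[j, j] = np.sqrt(s[j, j])
        for i in range(j - 1, -1, -1):
            t = s[i, j] - u[i, i + 1:j] @ u[i + 1:j, j]
            u[i, j] = t / (u[i, i] + u[j, j])
    return u

s = np.array([[4.0, 2.0, 1.0],
              [0.0, 9.0, 3.0],
              [0.0, 0.0, 16.0]])
u = triangular_sqrt(s)
```

Each entry of U is obtained from already-computed entries, so the whole factor costs only O(n³/3) flops once the Schur form is available.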
Mathematics of Computation | 1992
M. G. Cox; Sven Hammarling; J. H. Wilkinson
List of contributors:
Gene Golub: Prologue: Reflections on Jim Wilkinson
Beresford Parlett: Misconvergence in the Lanczos algorithm
Charles L. Lawson & Kajal K. Gupta: The Lanczos algorithm for a pure imaginary Hermitian matrix
James Demmel: Nearest defective matrices and the geometry of ill-conditioning
Theo Beelen & Paul Van Dooren: Computational aspects of the Jordan canonical form
C. C. Paige: Some aspects of generalized QR factorizations
I. S. Duff, N. I. M. Gould, M. Lescrenier & J. K. Reid: The multifrontal method in a parallel environment
Philip E. Gill, Walter Murray, Michael A. Saunders & Margaret H. Wright: A Schur-complement method for sparse quadratic programming
Françoise Chatelin & Marie Christine Brunet: A probabilistic round-off error propagation model: application to the eigenvalue problem
Nicholas J. Higham: Analysis of the Cholesky decomposition of a semi-definite matrix
James M. Varah: On the conditioning of parameter estimation problems
F. W. J. Olver: Rounding errors in algebraic processes in level-index arithmetic
Mario Arioli & Iain S. Duff: Experiments in tearing large sparse systems
M. G. Cox: The least-squares solution of linear equations with block-angular observation matrix
G. W. Stewart: An iterative method for solving linear inequalities
Åke Björck: Advances in reliable numerical computing
Christian H. Reinsch: Software for shape-preserving spline interpolation
D. A. H. Jacobs & G. Markham: Experiences with some software engineering practices in numerical software
Jack Dongarra & Sven Hammarling: Evolution of numerical software for dense linear algebra
L. Fox: Epilogue: Jim Wilkinson, some after-dinner sentiments
ACM Signum Newsletter | 1985
Jack J. Dongarra; Jeremy Du Croz; Sven Hammarling; Richard J. Hanson
This paper describes an extension to the set of Basic Linear Algebra Subprograms. The extensions proposed are targeted at matrix-vector operations, which should provide for more efficient and portable implementations of algorithms for high-performance computers.