Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Susan Ostrouchov is active.

Publication


Featured research published by Susan Ostrouchov.


Parallel Computing | 1995

A Proposal for a Set of Parallel Basic Linear Algebra Subprograms

Jaeyoung Choi; Jack J. Dongarra; Susan Ostrouchov; Antoine Petitet; David W. Walker; R. Clinton Whaley

This paper describes a proposal for a set of Parallel Basic Linear Algebra Subprograms (PBLAS) for distributed memory MIMD computers. The PBLAS are targeted at distributed vector-vector, matrix-vector and matrix-matrix operations with the aim of simplifying the parallelization of linear algebra codes, especially when implemented on top of the sequential BLAS.
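For orientation, the sketch below shows the kind of sequential Level 3 BLAS call (here DGEMM through the CBLAS interface) that a PBLAS routine would apply to each process's local blocks; the CBLAS interface and the toy matrices are assumptions for illustration, since the proposal itself targets the Fortran 77 BLAS.

```c
/* Minimal sketch: the sequential Level 3 BLAS kernel (DGEMM) that a PBLAS
 * routine such as PDGEMM would invoke on each process's local blocks.
 * Uses the CBLAS interface for illustration; link with a BLAS library,
 * e.g.  cc demo.c -lcblas  (or -lopenblas). */
#include <stdio.h>
#include <cblas.h>

int main(void) {
    /* C := alpha*A*B + beta*C with small 2x2 matrices in row-major order. */
    double A[4] = {1.0, 2.0,
                   3.0, 4.0};
    double B[4] = {5.0, 6.0,
                   7.0, 8.0};
    double C[4] = {0.0, 0.0,
                   0.0, 0.0};

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,        /* M, N, K       */
                1.0, A, 2,      /* alpha, A, lda */
                B, 2,           /* B, ldb        */
                0.0, C, 2);     /* beta, C, ldc  */

    printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
    return 0;
}
```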


Computer Physics Communications | 1996

ScaLAPACK: a portable linear algebra library for distributed memory computers — design issues and performance

Jaeyoung Choi; James Demmel; Inderjit S. Dhillon; Jack J. Dongarra; Susan Ostrouchov; Antoine Petitet; K. Stanley; David W. Walker; R. C. Whaley

This paper outlines the content and performance of ScaLAPACK, a collection of mathematical software for linear algebra computations on distributed memory computers. The importance of developing standards for computational and message passing interfaces is discussed. We present the different components and building blocks of ScaLAPACK, and indicate the difficulties inherent in producing correct codes for networks of heterogeneous processors. Finally, this paper briefly describes future directions for the ScaLAPACK library and concludes by suggesting alternative approaches to mathematical libraries, explaining how ScaLAPACK could be integrated into efficient and user-friendly distributed systems.
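ScaLAPACK's building blocks operate on a logical two-dimensional process grid. The sketch below arranges MPI ranks into such a grid as a rough analogue of what the BLACS grid setup provides; the grid-shape heuristic and the row-major rank-to-coordinate mapping are illustrative assumptions, not ScaLAPACK's actual interface.

```c
/* Illustrative sketch only: arrange MPI processes into the kind of 2D
 * (row, column) grid that ScaLAPACK/BLACS routines operate on.  The
 * mapping below is an assumption for this demo; the real grid setup is
 * done with BLACS routines such as Cblacs_gridinit. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Pick a near-square process grid nprow x npcol. */
    int nprow = 1;
    while ((nprow + 1) * (nprow + 1) <= size) nprow++;
    int npcol = size / nprow;

    if (rank < nprow * npcol) {
        int myrow = rank / npcol;   /* row coordinate in the grid    */
        int mycol = rank % npcol;   /* column coordinate in the grid */
        printf("rank %d -> grid position (%d, %d) in a %dx%d grid\n",
               rank, myrow, mycol, nprow, npcol);
    }

    MPI_Finalize();
    return 0;
}
```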


Parallel Computing | 1995

ScaLAPACK: A Portable Linear Algebra Library for Distributed Memory Computers - Design Issues and Performance

Jaeyoung Choi; James Demmel; Inderjit S. Dhillon; Jack J. Dongarra; Susan Ostrouchov; Antoine Petitet; Ken Stanley; David W. Walker; R. Clinton Whaley

This paper outlines the content and performance of ScaLAPACK, a collection of mathematical software for linear algebra computations on distributed memory computers. The importance of developing standards for computational and message passing interfaces is discussed. We present the different components and building blocks of ScaLAPACK, and indicate the difficulties inherent in producing correct codes for networks of heterogeneous processors. Finally, this paper briefly describes future directions for the ScaLAPACK library and concludes by suggesting alternative approaches to mathematical libraries, explaining how ScaLAPACK could be integrated into efficient and user-friendly distributed systems.


Archive | 1992

LAPACK Users' Guide: Release 1.0

E. Anderson; Zhaojun Bai; Christian H. Bischof; James Demmel; Jack J. Dongarra; J. Du Croz; A. Greenbaum; S. Hammarling; A. McKenney; Susan Ostrouchov; Danny C. Sorensen

LAPACK is a transportable library of Fortran 77 subroutines for solving the most common problems in numerical linear algebra: systems of linear equations, linear least squares problems, eigenvalue problems and singular value problems. LAPACK is designed to supersede LINPACK and EISPACK, principally by restructuring the software to achieve much greater efficiency on vector processors, high-performance "superscalar" workstations, and shared-memory multi-processors. LAPACK also adds extra functionality, uses some new or improved algorithms, and integrates the two sets of algorithms into a unified package. The LAPACK Users' Guide gives an informal introduction to the design of the algorithms and software, summarizes the contents of the package, describes conventions used in the software and documentation, and includes complete specifications for calling the routines. This edition of the Users' Guide describes Release 1.0 of LAPACK.
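As a taste of the calling conventions the Guide specifies, the sketch below solves a small linear system with the LAPACK driver routine DGESV. It uses the LAPACKE C wrapper, which postdates Release 1.0 and is assumed here only for illustration; the Guide itself documents the Fortran 77 calling sequences.

```c
/* Minimal sketch: solve A x = b with the LAPACK driver DGESV, via the
 * LAPACKE C wrapper (an illustration only; the Users' Guide documents the
 * Fortran 77 interface).  Link with e.g. -llapacke -llapack -lblas. */
#include <stdio.h>
#include <lapacke.h>

int main(void) {
    /* 2x2 system:  [3 1; 1 2] x = [9; 8]  ->  x = [2; 3] */
    double A[4] = {3.0, 1.0,
                   1.0, 2.0};
    double b[2] = {9.0, 8.0};
    lapack_int ipiv[2];

    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR,
                                    2,      /* order of A                  */
                                    1,      /* number of right-hand sides  */
                                    A, 2,   /* A and its leading dimension */
                                    ipiv,   /* pivot indices (output)      */
                                    b, 1);  /* b and its leading dimension */

    if (info == 0)
        printf("x = (%g, %g)\n", b[0], b[1]);
    else
        printf("DGESV failed, info = %d\n", (int)info);
    return 0;
}
```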


Parallel Computing | 1995

The Design and Implementation of the Reduction Routines in ScaLAPACK

Jaeyoung Choi; Jack J. Dongarra; Susan Ostrouchov; Antoine Petitet; David W. Walker; R. Clint Whaley

This chapter discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under development. The importance of block-partitioned algorithms in reducing the frequency of data movement between different levels of hierarchical memory is stressed. The use of such algorithms helps reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subprograms (BLAS) as computational building blocks, and the use of the Basic Linear Algebra Communication Subprograms (BLACS) as communication building blocks. Together the distributed BLAS and the BLACS can be used to construct higher-level algorithms and hide many details of the parallelism from the application developer. The block-cyclic data distribution is described, and adopted as a good way of distributing block-partitioned matrices. Block-partitioned versions of the Cholesky and LU factorizations are presented, and optimization issues associated with the implementation of the LU factorization algorithm on distributed memory concurrent computers are discussed, together with its performance on the Intel Delta system. Finally, approaches to the design of library interfaces are reviewed.
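The block-cyclic distribution mentioned above has a simple closed form: global block (I, J) lives on process (I mod Pr, J mod Pc) of a Pr x Pc grid. The sketch below prints that mapping for made-up block and grid sizes, which are illustrative assumptions rather than values from the paper.

```c
/* Illustrative sketch of the 2D block-cyclic data distribution used by
 * ScaLAPACK: global block (I, J) of an mb x nb blocked matrix is owned by
 * process (I mod nprow, J mod npcol).  All sizes below are made-up
 * example values. */
#include <stdio.h>

int main(void) {
    int mb = 2, nb = 2;        /* block sizes (rows, columns) */
    int nprow = 2, npcol = 3;  /* process grid dimensions     */
    int m = 8, n = 12;         /* global matrix dimensions    */

    for (int i = 0; i < m; i += mb) {
        for (int j = 0; j < n; j += nb) {
            int block_row = i / mb;
            int block_col = j / nb;
            int prow = block_row % nprow;  /* owning process row    */
            int pcol = block_col % npcol;  /* owning process column */
            printf("block (%d,%d) -> process (%d,%d)\n",
                   block_row, block_col, prow, pcol);
        }
    }
    return 0;
}
```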


Archive | 1995

LAPACK Users' Guide

Edward Anderson; Zhaojun Bai; C. Bischof; James Demmel; Jack J. Dongarra; Jeremy Du Croz; A. Greenbaum; Sven Hammarling; A. McKenney; Susan Ostrouchov; Danny C. Sorensen


Archive | 1992

LAPACK Users' Guide

Edward Anderson; Zhaojun Bai; Christian H. Bischof; James Demmel; Jack J. Dongarra; J. Du Croz; A. Greenbaum; Sven Hammarling; A. McKenney; Susan Ostrouchov; Danny C. Sorensen


Archive | 1990

LAPACK block factorization algorithms on the Intel iPSC/860

Jack J. Dongarra; Susan Ostrouchov


SIAM Conference on Parallel Processing for Scientific Computing | 1991

LAPACK for Distributed Memory Architectures: Progress Report

E. Anderson; Annamaria Benzoni; Jack J. Dongarra; Steve Moulton; Susan Ostrouchov; Bernard Tourancheau; Robert A. van de Geijn


Distributed Memory Computing Conference | 1991

Basic Linear Algebra Communication Subprograms

E. Anderson; Annamaria Benzoni; Jack Dongarra; Steve Moulton; Susan Ostrouchov; Bernard Tourancheau; R.A. van de Geijn

Collaboration


Dive into Susan Ostrouchov's collaboration network.

Top Co-Authors

Jack Dongarra
Oak Ridge National Laboratory

James Demmel
University of California

E. Anderson
University of Tennessee

Zhaojun Bai
University of California