Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Danny C. Sorensen is active.

Publication


Featured research published by Danny C. Sorensen.


SIAM Journal on Scientific and Statistical Computing | 1987

A fully parallel algorithm for the symmetric eigenvalue problem

Jack J. Dongarra; Danny C. Sorensen

In this paper we present a parallel algorithm for the symmetric algebraic eigenvalue problem. The algorithm is based upon a divide and conquer scheme suggested by Cuppen for computing the eigensystem of a symmetric tridiagonal matrix. We extend this idea to obtain a parallel algorithm that retains a number of active parallel processes that is greater than or equal to the initial number throughout the course of the computation. We give a new deflation technique which, together with a robust root finding technique, will assure computation of an eigensystem to full accuracy in the residuals and in the orthogonality of eigenvectors. A brief analysis of the numerical properties and sensitivity to round-off error is presented to indicate where numerical difficulties may occur. The algorithm is able to exploit parallelism at all levels of the computation and is well suited to a variety of architectures. Computational results are presented for several machines. These results are very encouraging with respect to both...
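The tearing idea behind this scheme can be sketched in a few lines: split the tridiagonal matrix into two half-size tridiagonal problems plus a rank-one correction, solve the halves recursively, and recover the full spectrum from the roots of the secular equation. The NumPy sketch below was written for this profile and is not the paper's code; it omits the deflation and robust root finding the abstract describes, using plain bisection instead, so it assumes distinct poles and nonzero coupling weights.

```python
import numpy as np

def secular_roots(d, z, rho):
    """Roots of 1 + rho * sum(z_i^2 / (d_i - lam)) = 0 by plain bisection.
    Assumes d strictly increasing and every z_i nonzero (no deflation)."""
    n = len(d)
    span = rho * (z @ z)
    if rho > 0:   # roots interlace: (d_1, d_2), ..., (d_n, d_n + rho*|z|^2)
        intervals = [(d[i], d[i + 1]) for i in range(n - 1)] + [(d[-1], d[-1] + span)]
    else:         # one root below d_1, the rest interlace from below
        intervals = [(d[0] + span, d[0])] + [(d[i], d[i + 1]) for i in range(n - 1)]
    roots = []
    for lo, hi in intervals:
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            f = 1.0 + rho * np.sum(z ** 2 / (d - mid))
            if rho * f < 0:   # f is monotone between poles; step toward the root
                lo = mid
            else:
                hi = mid
        roots.append(0.5 * (lo + hi))
    return np.array(roots)

def dc_eig(d, e):
    """Eigen-decomposition of the symmetric tridiagonal matrix with diagonal d
    and off-diagonal e, by Cuppen-style rank-one tearing (no deflation)."""
    n = len(d)
    if n == 1:
        return np.array([d[0]]), np.eye(1)
    k, b = n // 2, e[n // 2 - 1]                  # tear out the coupling entry b
    d1, d2 = d[:k].copy(), d[k:].copy()
    d1[-1] -= b
    d2[0] -= b
    lam1, Q1 = dc_eig(d1, e[:k - 1])
    lam2, Q2 = dc_eig(d2, e[k:])
    D = np.concatenate([lam1, lam2])
    z = np.concatenate([Q1[-1, :], Q2[0, :]])     # last row of Q1, first of Q2
    Qblk = np.block([[Q1, np.zeros((k, n - k))],
                     [np.zeros((n - k, k)), Q2]])
    order = np.argsort(D)                          # sort the poles
    Ds, zs, Qp = D[order], z[order], Qblk[:, order]
    lam = secular_roots(Ds, zs, b)                 # spectrum of Ds + b*zs*zs^T
    U = zs[:, None] / (Ds[:, None] - lam[None, :])  # eigenvectors, up to scale
    U /= np.linalg.norm(U, axis=0)
    return lam, Qp @ U
```

Checking the result against numpy.linalg.eigvalsh on a random tridiagonal matrix confirms the tearing identity; a production version needs the deflation and careful root finding that the abstract emphasizes.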


Journal of Computational and Applied Mathematics | 1989

Block reduction of matrices to condensed forms for eigenvalue computations

Jack J. Dongarra; Danny C. Sorensen; Sven Hammarling

In this paper we describe block algorithms for the reduction of a real symmetric matrix to tridiagonal form and for the reduction of a general real matrix to either bidiagonal or Hessenberg form using Householder transformations. The approach is to aggregate the transformations and to apply them in a blocked fashion, thus achieving algorithms that are rich in matrix-matrix operations. These reductions to condensed form typically comprise a preliminary step in the computation of eigenvalues or singular values. With this in mind, we also demonstrate how the initial reduction to tridiagonal or bidiagonal form may be pipelined with the divide and conquer technique for computing the eigensystem of a symmetric matrix or the singular value decomposition of a general matrix to achieve algorithms which are load balanced and rich in matrix-matrix operations.
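The aggregation idea can be shown on a one-sided factorization (simpler than the two-sided reductions of the paper): collect a panel of Householder reflectors into the compact form I - V T V^T, so the trailing update becomes two matrix-matrix products. The NumPy sketch below is illustrative, not the authors' code.

```python
import numpy as np

def householder(x):
    """Reflector (I - tau*v*v^T) x = beta*e_1, with the sign of beta chosen
    to avoid cancellation (LAPACK-style convention)."""
    x = np.asarray(x, dtype=float)
    normx = np.linalg.norm(x)
    if normx == 0.0:
        v = np.zeros_like(x)
        v[0] = 1.0
        return v, 0.0
    beta = -normx if x[0] >= 0 else normx
    v = x.copy()
    v[0] -= beta
    return v, 2.0 / (v @ v)

def blocked_qr(A, nb=2):
    """QR with panels of Householder reflectors aggregated as I - V*T*V^T
    (compact WY), so the trailing update is rich in matrix-matrix products."""
    m, n = A.shape
    R = A.astype(float).copy()
    panels = []
    for j0 in range(0, n, nb):
        j1 = min(j0 + nb, n)
        V = np.zeros((m - j0, j1 - j0))
        T = np.zeros((j1 - j0, j1 - j0))
        for j in range(j0, j1):                    # factor the panel column by column
            c = j - j0
            v, tau = householder(R[j:, j])
            R[j:, j:j1] -= tau * np.outer(v, v @ R[j:, j:j1])
            V[c:, c] = v
            T[:c, c] = -tau * (T[:c, :c] @ (V[c:, :c].T @ v))  # grow T by one column
            T[c, c] = tau
        # apply (I - V*T*V^T)^T to the trailing columns in one blocked update
        R[j0:, j1:] -= V @ (T.T @ (V.T @ R[j0:, j1:]))
        panels.append((V, T, j0))
    return panels, np.triu(R)

def form_q(panels, m):
    """Accumulate Q = (I - V1*T1*V1^T)(I - V2*T2*V2^T)... from the stored panels."""
    Q = np.eye(m)
    for V, T, j0 in panels:
        W = np.zeros((m, V.shape[1]))
        W[j0:, :] = V
        Q -= (Q @ W) @ (T @ W.T)
    return Q
```

The T recurrence is the standard compact WY construction; the same mechanism, applied from both sides, underlies the blocked tridiagonal, bidiagonal, and Hessenberg reductions the paper describes.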


Applied Mathematics and Computation | 1986

Linear algebra on high performance computers

Jack J. Dongarra; Danny C. Sorensen

This is a survey of some work recently done at Argonne National Laboratory in an attempt to discover ways to construct numerical software for high performance computers. The numerical algorithms discussed are taken from several areas of numerical linear algebra. We discuss certain architectural features of advanced computer architectures that will affect the design of algorithms. The technique of restructuring algorithms in terms of certain modules is reviewed. This technique has proved very successful in obtaining a high level of transportability without severe loss of performance on a wide variety of both vector and parallel computers. The module technique is demonstrably effective for dense linear algebra problems. However, in the case of sparse and structured problems it may be difficult to identify general modules that will be as effective. New algorithms have been devised for certain problems in this category. We present examples in three important areas: banded systems, sparse QR factorization, and symmetric eigenvalue problems.
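As an illustration of the module technique for dense problems, a right-looking blocked LU factorization can be phrased almost entirely in terms of a triangular-solve module (TRSM) and a matrix-matrix update (GEMM). The NumPy sketch below is a hedged illustration, not the survey's code; it omits pivoting and so assumes a matrix that does not need it, e.g. one that is diagonally dominant.

```python
import numpy as np

def blocked_lu(A, nb=2):
    """Right-looking blocked LU without pivoting: an unblocked panel step,
    a triangular-solve module (TRSM), and one matrix-matrix update (GEMM)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        kb = min(nb, n - k)
        for j in range(k, k + kb):                  # unblocked LU of the panel
            A[j + 1:, j] /= A[j, j]
            A[j + 1:, j + 1:k + kb] -= np.outer(A[j + 1:, j], A[j, j + 1:k + kb])
        if k + kb < n:
            L11 = np.tril(A[k:k + kb, k:k + kb], -1) + np.eye(kb)
            # TRSM module: solve L11 * U12 = A12 for the block row of U
            A[k:k + kb, k + kb:] = np.linalg.solve(L11, A[k:k + kb, k + kb:])
            # GEMM module: rank-kb update of the trailing submatrix
            A[k + kb:, k + kb:] -= A[k + kb:, k:k + kb] @ A[k:k + kb, k + kb:]
    return np.tril(A, -1) + np.eye(n), np.triu(A)
```

Almost all the arithmetic lands in the GEMM line, which is exactly the property that makes module-structured codes transportable across vector and parallel machines.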


Parallel Computing | 1986

Implementation of some concurrent algorithms for matrix factorization

Jack J. Dongarra; Ahmed H. Sameh; Danny C. Sorensen

Three parallel algorithms for computing the QR-factorization of a matrix are presented. The discussion is primarily concerned with implementation of these algorithms on a computer that supports tightly coupled parallel processes sharing a large common memory. The three algorithms are a Householder method based upon high-level modules, a Windowed Householder method that avoids fork-join synchronization, and a Pipelined Givens method that is a variant of the data-flow type algorithms offering large enough granularity to mask synchronization costs. Numerical experiments were conducted on the Denelcor HEP computer. The computational results indicate that the Pipelined Givens method is preferred and that this is primarily due to the number of array references required by the various algorithms.
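For reference, a sequential Givens QR eliminates one subdiagonal entry per plane rotation; the pipelined variant studied in the paper reorders these same rotations across processors, but the arithmetic is unchanged. A small NumPy sketch (illustrative only, not the paper's implementation):

```python
import numpy as np

def givens(a, b):
    """Rotation c, s with [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, b / r

def givens_qr(A):
    """QR by plane rotations: each rotation zeroes one subdiagonal entry,
    sweeping each column from the bottom up."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):            # zero R[i, j] against R[i-1, j]
            c, s = givens(R[i - 1, j], R[i, j])
            G = np.array([[c, s], [-s, c]])
            R[i - 1:i + 1, j:] = G @ R[i - 1:i + 1, j:]
            Q[:, i - 1:i + 1] = Q[:, i - 1:i + 1] @ G.T   # accumulate Q = G1^T G2^T ...
    return Q, np.triu(R)
```

Because each rotation touches only two rows, independent rotations can be scheduled concurrently, which is the granularity the pipelined method exploits.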


Parallel Computing | 1987

A portable environment for developing parallel FORTRAN programs

Jack J. Dongarra; Danny C. Sorensen

The emergence of commercially produced parallel computers has greatly increased the problem of producing transportable mathematical software. Exploiting these new parallel capabilities has led to extensions of existing languages such as FORTRAN and to proposals for the development of entirely new parallel languages. We present an attempt at a short term solution to the transportability problem. The motivation for developing the package has been to extend capabilities beyond loop based parallelism and to provide a convenient machine independent user interface. A package called SCHEDULE is described which provides a standard user interface to several shared memory parallel machines. A user writes standard FORTRAN code and calls SCHEDULE routines which express and enforce the large-grain data dependencies of their parallel algorithm. Machine dependencies are internal to SCHEDULE and change from one machine to another, but the user's code remains essentially the same across all such machines. The semantics and usage of SCHEDULE are described and several examples of parallel algorithms which have been implemented using SCHEDULE are presented.
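SCHEDULE itself is a FORTRAN package; as a loose modern analogue, the same idea of declaring large-grain data dependencies and letting a runtime order the work can be sketched in Python. The run_graph name and API below are invented for illustration and are not SCHEDULE's own interface.

```python
from concurrent.futures import ThreadPoolExecutor

def run_graph(tasks, deps, workers=4):
    """Run callables whose large-grain data dependencies are declared up front.
    `tasks` maps name -> callable taking the results of its dependencies;
    `deps` maps name -> list of prerequisite names. Tasks must be listed so
    that every dependency precedes its dependents (a topological order)."""
    futures = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        def run(name):
            # block until every declared dependency has produced its result
            args = [futures[d].result() for d in deps.get(name, [])]
            return tasks[name](*args)
        for name in tasks:                     # dict order is the submission order
            futures[name] = pool.submit(run, name)
        return {name: f.result() for name, f in futures.items()}

# A diamond-shaped dependency graph: b and c are independent given a,
# so they may run concurrently; d waits for both.
results = run_graph(
    tasks={"a": lambda: 2.0,
           "b": lambda x: x + 1.0,
           "c": lambda x: x * 3.0,
           "d": lambda y, z: y + z},
    deps={"b": ["a"], "c": ["a"], "d": ["b", "c"]},
)
```

As in SCHEDULE, the user states only the dependency graph; how the units of work are mapped onto processors is the runtime's concern.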


Mathematical Programming | 1982

A Note on the Computation of an Orthonormal Basis for the Null Space of a Matrix

Thomas F. Coleman; Danny C. Sorensen

A highly regarded method to obtain an orthonormal basis, Z, for the null space of a matrix A^T is the QR decomposition of A, where Q is the product of Householder matrices. In several optimization contexts A(x) varies continuously with x, and it is desirable that Z(x) vary continuously also. In this note we demonstrate that the standard implementation of the QR decomposition does not yield an orthonormal basis Z(x) whose elements vary continuously with x. We suggest three possible remedies.
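The phenomenon is easy to reproduce: for a smoothly rotating 2-by-1 matrix A(t), the Householder sign convention makes the computed null-space basis Z(t) flip abruptly even though A(t) never jumps. A small NumPy demonstration (illustrative; the note's three remedies are not reproduced here):

```python
import numpy as np

# A(t) rotates smoothly; Z(t) is an orthonormal basis for the null space of
# A(t)^T, taken as the last column of the full Q factor from Householder QR.
ts = np.linspace(0.0, np.pi, 200)
Zs = []
for t in ts:
    A = np.array([[np.cos(t)], [np.sin(t)]])
    Q, _ = np.linalg.qr(A, mode="complete")
    Zs.append(Q[:, 1])
Zs = np.array(Zs)

# A(t) changes by only ~0.016 per step, yet Z(t) flips as cos(t) crosses zero:
z_jump = np.linalg.norm(np.diff(Zs, axis=0), axis=1)
print(z_jump.max())        # a jump of magnitude ~2 near t = pi/2
```

The reflector's sign is chosen from the sign of the leading entry of A(t), so Z(t) is discontinuous precisely where that entry changes sign, which is the note's point.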


Parallel Computing | 1988

Tools to aid in the analysis of memory access patterns for FORTRAN programs

Orlie Brewer; Jack J. Dongarra; Danny C. Sorensen

The goal of this work is to assist in formulating correct algorithms for high-performance computers and to aid as much as possible the process of translating an algorithm into an efficient implementation on a specific machine. Over the past five years we have developed approaches in the design of certain numerical algorithms that allow both efficiency and portability. Our current efforts emphasize three areas: environments for algorithm development, parallel programming methodologies, and advanced algorithm development.


Parallel Computing | 1988

Programming methodology and performance issues for advanced computer architectures

Jack J. Dongarra; Danny C. Sorensen; Kathryn Connolly; Jim Patterson

This paper will describe some recent attempts to construct transportable numerical software for high-performance computers. Restructuring algorithms in terms of simple linear algebra modules is reviewed. This technique has proved very successful in obtaining a high level of transportability without severe loss of performance on a wide variety of both vector and parallel computers. The use of modules to encapsulate parallelism and reduce the ratio of data movement to floating-point operations has been demonstrably effective for regular problems such as those found in dense linear algebra. In other situations it may be necessary to express explicitly parallel algorithms. We also present a programming methodology that is useful for constructing new parallel algorithms which require sophisticated synchronization at a large grain level. We describe the SCHEDULE package which provides an environment for developing and analyzing explicitly parallel programs in FORTRAN which are portable. This package now includes a preprocessor to achieve complete portability of user level code and also a graphics post-processor for performance analysis and debugging. We discuss details of porting both the SCHEDULE package and user code. Examples from linear algebra and partial differential equations are used to illustrate the utility of this approach.


ACM SIGNUM Newsletter | 1981

An example concerning quasi-Newton estimation of a sparse Hessian

Danny C. Sorensen

An example is given which shows that certain quasi-Newton methods for sparse optimization problems can be unstable.


Archive | 1989

A project for developing a linear algebra library for high-performance computers

J. Demmel; Jack J. Dongarra; Jeremy Du Croz; A. Greenbaum; S. Hammarling; Danny C. Sorensen

Argonne National Laboratory, the Courant Institute of Mathematical Sciences, and the Numerical Algorithms Group, Ltd., are developing a transportable linear algebra library in Fortran 77. The library is intended to provide a uniform set of subroutines to solve the most common linear algebra problems and to run efficiently on a wide range of high-performance computers.

Collaboration


Dive into Danny C. Sorensen's collaborations.

Top Co-Authors

Jack Dongarra (Oak Ridge National Laboratory)

Iain S. Duff (Rutherford Appleton Laboratory)

Orlie Brewer (Argonne National Laboratory)

A. Greenbaum (Argonne National Laboratory)

J. Demmel (Argonne National Laboratory)

James Demmel (University of California)