
Publication


Featured research published by Walter Hoffmann.


Computing | 1989

Iterative algorithms for Gram-Schmidt orthogonalization

Walter Hoffmann

The algorithms that are treated in this paper are based on the classical and the modified Gram-Schmidt algorithms. It is shown that Gram-Schmidt orthogonalization for constructing a QR factorization should be carried out iteratively to obtain a matrix Q that is orthogonal in almost full working precision. In the formulation of the algorithms, the parts that express manipulations with matrices or vectors are clearly identified to enable an optimal implementation of the algorithms on parallel and/or vector machines. An extensive error analysis is presented. It shows, for instance, that the iterative classical algorithm is not inferior to the iterative modified algorithm when full precision of Q is required. Experiments are reported to support the outcomes of the analysis.
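As an illustration of the re-orthogonalization idea, the following NumPy sketch runs classical Gram-Schmidt with a fixed second pass per column (the well-known "twice is enough" variant). The function name and the fixed number of passes are assumptions made for the example; the paper's algorithms decide iteratively when to re-orthogonalize.

```python
import numpy as np

def iterated_cgs_qr(A, passes=2):
    """QR factorization by classical Gram-Schmidt with re-orthogonalization.

    Each new column is projected against the already computed Q more than
    once, which restores orthogonality of Q to roughly working precision.
    Illustrative sketch, not the paper's exact iteration criterion.
    """
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        v = A[:, k].copy()
        for _ in range(passes):            # the iterative part
            c = Q[:, :k].T @ v             # coefficients against current Q
            v -= Q[:, :k] @ c              # remove those components again
            R[:k, k] += c                  # accumulate into R
        R[k, k] = np.linalg.norm(v)
        Q[:, k] = v / R[k, k]
    return Q, R

# quick check on a mildly ill-conditioned matrix
A = np.vander(np.linspace(1.0, 2.0, 8), 6)
Q, R = iterated_cgs_qr(A)
print(np.linalg.norm(Q.T @ Q - np.eye(6)))   # close to machine precision
print(np.linalg.norm(Q @ R - A))
```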


Numerische Mathematik | 1989

Rehabilitation of the Gauss-Jordan algorithm

T. J. Dekker; Walter Hoffmann

In this paper a Gauss-Jordan algorithm with column interchanges is presented and analysed. We show that, in contrast with Gaussian elimination, the Gauss-Jordan algorithm has essentially differing properties when using column interchanges instead of row interchanges for improving the numerical stability. For solutions obtained by Gauss-Jordan with column interchanges, a more satisfactory bound for the residual norm can be given. The analysis gives theoretical evidence that the algorithm yields numerical solutions as good as those obtained by Gaussian elimination and that, in most practical situations, the residuals are equally small. This is confirmed by numerical experiments. Moreover, timing experiments on a Cyber 205 vector computer show that the algorithm presented has good vectorisation properties.
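A minimal sketch of Gauss-Jordan elimination with column interchanges, assuming NumPy: at step k the pivot is the largest entry of row k among the remaining columns, and the recorded column permutation is undone at the end. Illustrative only, not the authors' implementation.

```python
import numpy as np

def gauss_jordan_col_pivot(A, b):
    """Solve A x = b by Gauss-Jordan elimination with column interchanges."""
    A = A.astype(float)
    x = b.astype(float)
    n = A.shape[0]
    perm = np.arange(n)                      # tracks the variable (column) order
    for k in range(n):
        j = k + np.argmax(np.abs(A[k, k:]))  # column interchange (pivoting)
        A[:, [k, j]] = A[:, [j, k]]
        perm[[k, j]] = perm[[j, k]]
        piv = A[k, k]
        A[k, k:] /= piv                      # scale the pivot row
        x[k] /= piv
        rows = np.arange(n) != k             # eliminate column k in all other rows
        x[rows] -= A[rows, k] * x[k]
        A[np.ix_(rows, np.arange(k, n))] -= np.outer(A[rows, k], A[k, k:])
    sol = np.empty(n)
    sol[perm] = x                            # undo the column interchanges
    return sol

A = np.random.rand(5, 5) + 5 * np.eye(5)
b = np.random.rand(5)
print(np.linalg.norm(A @ gauss_jordan_col_pivot(A, b) - b))
```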


Journal of Computational and Applied Mathematics | 1994

Parallel algorithms for solving large linear systems

T. J. Dekker; Walter Hoffmann; K. Potma

The solution of linear systems continues to play an important role in scientific computing. The problems to be solved are often of very large size, so that solving them requires large computer resources. To solve these problems, at least supercomputers with large shared memory or massively parallel computer systems with distributed memory are needed. This paper gives a survey of research on parallel implementation of various direct methods to solve dense linear systems. In particular, the following are considered: Gaussian elimination, Gauss-Jordan elimination and a variant due to Huard (1979), and an algorithm due to Enright (1978), designed in relation to solving (stiff) ODEs, such that the stepsize and other method parameters can easily be varied. Some theoretical results are mentioned, including a new result on the error analysis of Huard's algorithm. Moreover, practical considerations and results of experiments on supercomputers and on a distributed-memory computer system are presented.


Journal of Computational and Applied Mathematics | 1987

Solving linear systems on a vector computer

Walter Hoffmann

This paper gives a classification of the triangular factorizations of square matrices. These factorizations are used for solving linear systems. Efficient algorithms for vector computers are presented on the basis of criteria for optimal algorithms. Moreover, the Gauss-Jordan elimination algorithm is described in a version which admits efficient implementation on a vector computer. Comparative experiments in FORTRAN 77 with FORTRAN 200 extensions for the Cyber 205 are reported.
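To illustrate why the choice of algorithmic variant matters on vector hardware, the sketch below (NumPy, with illustrative names, not taken from the paper) contrasts a row-oriented forward substitution built from inner products with a column-oriented one built from long vector updates (SAXPYs), the kind of operation vector computers execute efficiently.

```python
import numpy as np

def forward_subst_dot(L, b):
    """Row-oriented (inner-product) variant: each x[i] comes from a dot product."""
    n = L.shape[0]
    x = b.astype(float)
    for i in range(n):
        x[i] = (x[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

def forward_subst_saxpy(L, b):
    """Column-oriented variant: once x[j] is known, its contribution is
    removed from the remaining right-hand side with one long vector update."""
    n = L.shape[0]
    x = b.astype(float)
    for j in range(n):
        x[j] /= L[j, j]
        x[j + 1:] -= L[j + 1:, j] * x[j]     # SAXPY on a long vector
    return x

L = np.tril(np.random.rand(6, 6)) + 6 * np.eye(6)
b = np.random.rand(6)
print(np.allclose(forward_subst_dot(L, b), forward_subst_saxpy(L, b)))
```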


Computing | 1997

Stability of the Gauss-Huard algorithm with partial pivoting

T. J. Dekker; Walter Hoffmann; K. Potma

This paper considers elimination methods to solve dense linear systems, in particular a variant of Gaussian elimination due to Huard [13]. This variant reduces the system to an equivalent diagonal system just like Gauss-Jordan elimination, but does not require more floating-point operations than Gaussian elimination. To preserve stability, a pivoting strategy using column interchanges, proposed by Hoffmann [10], is incorporated in the original algorithm. An error analysis is given showing that Huard's elimination method is as stable as Gauss-Jordan elimination with the appropriate pivoting strategy. This result is proven in a way similar to the proof of stability for Gauss-Jordan elimination given in [4]. Numerical experiments are reported which verify the theoretical error analysis of the Gauss-Huard algorithm.
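To make the structure described above concrete, here is a minimal NumPy sketch of Gauss-Huard elimination with a column-interchange strategy: row k is first reduced against the already finished rows, the pivot is chosen along row k, and only the rows above are updated afterwards, so the system ends up diagonal at a Gaussian-elimination operation count. Illustrative only, not the authors' code.

```python
import numpy as np

def gauss_huard(A, b):
    """Solve A x = b with a Gauss-Huard style elimination and column pivoting."""
    A = A.astype(float)
    b = b.astype(float)
    n = A.shape[0]
    perm = np.arange(n)
    for k in range(n):
        # reduce row k against the already finished rows 0..k-1
        b[k] -= A[k, :k] @ b[:k]
        A[k, k:] -= A[k, :k] @ A[:k, k:]
        # column interchange: the largest entry of row k becomes the pivot
        j = k + np.argmax(np.abs(A[k, k:]))
        A[:, [k, j]] = A[:, [j, k]]
        perm[[k, j]] = perm[[j, k]]
        # normalise the pivot row
        piv = A[k, k]
        A[k, k:] /= piv
        b[k] /= piv
        # eliminate column k from the rows above only
        b[:k] -= A[:k, k] * b[k]
        A[:k, k:] -= np.outer(A[:k, k], A[k, k:])
    x = np.empty(n)
    x[perm] = b                              # undo the column interchanges
    return x

A = np.random.rand(6, 6) + 6 * np.eye(6)
b = np.random.rand(6)
print(np.linalg.norm(A @ gauss_huard(A, b) - b))
```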


IEEE International Conference on High Performance Computing, Data, and Analytics | 1994

Solving dense linear systems by Gauss-Huard's method on a distributed memory system

Walter Hoffmann; K. Potma; Gera Pronk

A variant of Gaussian elimination known as the Gauss-Huard algorithm behaves like the Gauss-Jordan algorithm in that it also reduces the matrix to diagonal form, and like LU factorisation in that it uses the same number of floating-point operations and has practically the same numerical stability. This contribution presents a block variant of the Gauss-Huard algorithm with favourable data locality.


Linear Algebra and its Applications | 1998

The Gauss-Huard algorithm and LU factorization

Walter Hoffmann

In this paper we analyze the Gauss-Huard algorithm. From a description of the algorithm in terms of matrix-vector operations we reveal a close relation between the Gauss-Huard algorithm and an LU factorization as constructed in an ikj variant.
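For reference, a small sketch of an LU factorization in the ikj loop ordering (the variant the paper relates to Gauss-Huard), without pivoting and assuming NumPy; illustrative only.

```python
import numpy as np

def lu_ikj(A):
    """In-place LU factorization in the ikj ordering (no pivoting).

    Row i is handled in one sweep: for each k < i the multiplier l_ik is
    formed and immediately applied to the rest of row i, so only rows that
    are already finished are read.  Sketch for illustration only.
    """
    A = A.astype(float)
    n = A.shape[0]
    for i in range(1, n):            # i: the row being reduced
        for k in range(i):           # k: the finished pivot rows
            A[i, k] /= A[k, k]                       # multiplier l_ik
            A[i, k + 1:] -= A[i, k] * A[k, k + 1:]   # update rest of row i
    return A                         # L (unit lower) and U packed together

A = np.random.rand(5, 5) + 5 * np.eye(5)
LU = lu_ikj(A)
L = np.tril(LU, -1) + np.eye(5)
U = np.triu(LU)
print(np.linalg.norm(L @ U - A))
```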


IEEE International Conference on High Performance Computing, Data, and Analytics | 1994

Boosting the performance of the linear algebra part in an ODE solver for shared memory systems

K. Potma; Walter Hoffmann

The purpose of this paper is to present an algorithm for solving stiff ordinary differential equations on a parallel system with shared memory. The algorithm we use is based on a parallel implicit Runge-Kutta method described by Van der Houwen and Sommeijer. To improve the performance, a matrix decomposition technique proposed by Enright, which is based on similarity transformations to Hessenberg form, is incorporated. A theoretical performance model of the algorithm is presented. Results are reported for our algorithm, which was implemented in Fortran on a shared-memory system.
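The decomposition idea can be sketched as follows (SciPy; J, h, gamma and r are assumed stand-ins for the Jacobian, stepsize, method coefficient and Newton residual): reduce the Jacobian to upper Hessenberg form once, so that a change of stepsize only requires solving a Hessenberg system instead of refactorizing a full matrix. This is an illustration of the general Enright-style technique, not the paper's parallel implementation.

```python
import numpy as np
from scipy.linalg import hessenberg, solve

n = 50
rng = np.random.default_rng(0)
J = rng.standard_normal((n, n))        # stand-in for the ODE Jacobian
r = rng.standard_normal(n)             # stand-in for a Newton residual

# one-time O(n^3) reduction: J = Q @ H @ Q.T with H upper Hessenberg
H, Q = hessenberg(J, calc_q=True)

# the Newton matrix I - h*gamma*J equals Q (I - h*gamma*H) Q.T, so when the
# stepsize h changes only the Hessenberg system has to be solved again
# (O(n^2) with a dedicated Hessenberg solver; a dense solve is used here).
for h, gamma in [(1e-2, 0.5), (5e-3, 0.5)]:
    y = solve(np.eye(n) - h * gamma * H, Q.T @ r)
    x = Q @ y
    print(np.linalg.norm((np.eye(n) - h * gamma * J) @ x - r))  # tiny residual
```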


Applied Numerical Mathematics | 1991

Implementing linear algebra algorithms on a Meiko Computing Surface

Walter Hoffmann; K. Potma

This paper reports some performance tests on a parallel computing system with distributed memory. A number of tests concern the speed of communication. Others concern the performance of floating-point calculations; most of these concern the speed of a single processor. Two experiments are reported on the implementation of methods for solving a bidiagonal and a tridiagonal linear system, respectively. The algorithms are of the divide-and-conquer type. An algorithm for matrix-vector multiplication on a square grid of processors is also reported.
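One divide-and-conquer idea can be sketched for a lower bidiagonal system (NumPy, written sequentially; the function name and chunking are illustrative): each chunk expresses its part of the solution as an affine function of the value entering from the previous chunk, so the per-chunk sweeps are independent and only the chunk boundaries are chained sequentially. This is a generic sketch, not necessarily the exact method used in the paper.

```python
import numpy as np

def bidiag_solve_dc(d, l, b, p=4):
    """Solve d[i]*x[i] + l[i]*x[i-1] = b[i] (l[0] unused) in p chunks."""
    n = len(d)
    u = np.zeros(n)                    # chunk solution for incoming value 0
    v = np.zeros(n)                    # sensitivity to the incoming value
    bounds = np.linspace(0, n, p + 1, dtype=int)
    for s, e in zip(bounds[:-1], bounds[1:]):      # independent per chunk
        u[s] = b[s] / d[s]
        v[s] = 0.0 if s == 0 else -l[s] / d[s]
        for i in range(s + 1, e):
            u[i] = (b[i] - l[i] * u[i - 1]) / d[i]
            v[i] = -l[i] * v[i - 1] / d[i]
    x = np.empty(n)
    t = 0.0
    for s, e in zip(bounds[:-1], bounds[1:]):      # short sequential chain
        x[s:e] = u[s:e] + v[s:e] * t
        t = x[e - 1]
    return x

n = 16
d = np.full(n, 2.0)
l = np.full(n, -1.0)
b = np.arange(1.0, n + 1)
x = bidiag_solve_dc(d, l, b)
# reference: plain forward recurrence
ref = np.empty(n)
ref[0] = b[0] / d[0]
for i in range(1, n):
    ref[i] = (b[i] - l[i] * ref[i - 1]) / d[i]
print(np.linalg.norm(x - ref))
```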


Journal of Computational and Applied Mathematics | 1989

An estimate for the spectral norm of the inverse of a matrix with the Gauss-Jordan algorithm

Walter Hoffmann

In this paper an algorithm is presented for calculating an estimate for the spectral norm of the inverse of a matrix. This algorithm is to be used in combination with solving a linear system by means of the Gauss-Jordan algorithm. The norm of the inverse is needed for the condition number of that matrix. The algorithm exploits the fact that Gauss-Jordan elimination is equivalent to writing the inverse of the matrix as a product of n elementary matrices. These elementary matrices are sequentially used to maximize (locally) the norm of a solution vector that matches a right-hand side vector under construction. In n steps this produces a satisfactory estimate. Our algorithm uses 5n² + O(n) extra floating-point multiplications for the calculation of the required estimate and is tested for a multitude of matrices on the CYBER 205 vector computer of the Academic Computer Centre, SARA, in Amsterdam.
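A minimal sketch of the underlying estimation idea, assuming NumPy: Gauss-Jordan without pivoting is run to collect the elementary matrices, and a right-hand side with entries +-1 is built greedily while they are applied to it. The function name and the simple local sign rule below are assumptions for the example, not the paper's exact criterion; the point is only that any such choice yields a valid lower bound, since ||A^{-1}b|| / ||b|| <= ||A^{-1}|| for every b.

```python
import numpy as np

def gj_inverse_norm_estimate(A):
    """Lower-bound estimate of the spectral norm of A^{-1} from the
    elementary matrices of a Gauss-Jordan reduction (no pivoting shown)."""
    A = A.astype(float)
    n = A.shape[0]
    cols = []                               # column k of each elementary M_k
    for k in range(n):
        piv = A[k, k]
        m = -A[:, k] / piv
        m[k] = 1.0 / piv
        cols.append(m)
        # the usual Gauss-Jordan step, i.e. A <- M_k A
        A[k, :] /= piv
        others = np.arange(n) != k
        A[others, :] -= np.outer(A[others, k], A[k, :])
    # A^{-1} = M_{n-1} ... M_0; build b in {+1,-1}^n while applying the M_k
    y = np.zeros(n)                         # running vector M_k ... M_0 b
    for k, m in enumerate(cols):
        bk = 1.0 if y[k] >= 0.0 else -1.0   # sign that enlarges component k
        yk = y[k] + bk                      # component k with b_k included
        y += m * yk                         # M_k y: every component gains m_i * y_k
        y[k] = m[k] * yk                    # ... and component k is replaced
    return np.linalg.norm(y) / np.sqrt(n)   # ||A^{-1} b|| / ||b||

A = np.random.rand(6, 6) + 6 * np.eye(6)
print(gj_inverse_norm_estimate(A), np.linalg.norm(np.linalg.inv(A), 2))
```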

Collaboration


Dive into Walter Hoffmann's collaborations.

Top Co-Authors


K. Potma

University of Amsterdam


T. J. Dekker

University of Amsterdam


Gera Pronk

University of Amsterdam
