
Publication


Featured research published by Gerard L. G. Sleijpen.


SIAM Journal on Matrix Analysis and Applications | 1996

A Jacobi--Davidson Iteration Method for Linear Eigenvalue Problems

Gerard L. G. Sleijpen; Henk A. van der Vorst

In this paper we propose a new method for the iterative computation of a few of the extremal eigenvalues of a symmetric matrix and their associated eigenvectors. The method is based on an old and almost unknown method of Jacobi. Jacobi's approach, combined with Davidson's method, leads to a new method that has improved convergence properties and that may be used for general matrices. We also propose a variant of the new method that may be useful for the computation of nonextremal eigenvalues as well.
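
To make the structure of the iteration concrete, the following is a minimal dense-matrix sketch (not the authors' code) of a Jacobi-Davidson loop for the largest eigenpair of a symmetric matrix. A practical implementation would solve the correction equation only approximately with a preconditioned iterative solver and would restart the search space; here everything is done with dense NumPy linear algebra for clarity, and the test matrix is a hypothetical example.

```python
import numpy as np

def jacobi_davidson_sym(A, tol=1e-10, max_iter=60):
    """Illustrative Jacobi-Davidson sketch for the largest eigenpair of symmetric A."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    V = rng.standard_normal((n, 1))
    V /= np.linalg.norm(V)
    I = np.eye(n)
    theta, u = 0.0, V[:, 0]
    for _ in range(max_iter):
        # Rayleigh-Ritz extraction on the current search space span(V)
        H = V.T @ A @ V
        vals, vecs = np.linalg.eigh(H)
        theta, s = vals[-1], vecs[:, -1]            # largest Ritz pair
        u = V @ s
        r = A @ u - theta * u
        if np.linalg.norm(r) < tol:
            break
        # Jacobi-Davidson correction equation, posed in the orthogonal
        # complement of u:  (I - u u^T)(A - theta I)(I - u u^T) t = -r
        P = I - np.outer(u, u)
        t, *_ = np.linalg.lstsq(P @ (A - theta * I) @ P, -r, rcond=None)
        # Orthogonalize the correction against V and expand the search space
        t -= V @ (V.T @ t)
        t /= np.linalg.norm(t)
        V = np.hstack([V, t[:, None]])
    return theta, u

# Small test problem: dominant eigenvalue close to 100
rng = np.random.default_rng(1)
E = rng.standard_normal((100, 100))
A = np.diag(np.arange(1.0, 101.0)) + 1e-3 * (E + E.T)
theta, u = jacobi_davidson_sym(A)
print(theta, np.linalg.norm(A @ u - theta * u))
```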


SIAM Journal on Scientific Computing | 1998

Jacobi--Davidson Style QR and QZ Algorithms for the Reduction of Matrix Pencils

Diederik R. Fokkema; Gerard L. G. Sleijpen; Henk A. van der Vorst

Recently the Jacobi--Davidson subspace iteration method has been introduced as a new powerful technique for solving a variety of eigenproblems. In this paper we will further exploit this method and enhance it with several techniques so that practical and accurate algorithms are obtained. We will present two algorithms, JDQZ for the generalized eigenproblem and JDQR for the standard eigenproblem, that are based on the iterative construction of a (generalized) partial Schur form. The algorithms are suitable for the efficient computation of several (even multiple) eigenvalues and the corresponding eigenvectors near a user-specified target value in the complex plane. An attractive property of our algorithms is that explicit inversion of operators is avoided, which makes them potentially attractive for very large sparse matrix problems. We will show how effective restarts can be incorporated in the Jacobi--Davidson methods, very similar to the implicit restart procedure for the Arnoldi process. Then we will discuss the use of preconditioning, and, finally, we will illustrate the behavior of our algorithms by a number of well-chosen numerical experiments.
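
JDQR and JDQZ are iterative methods for large sparse problems; the small dense sketch below only illustrates their end product, a partial Schur form A Q = Q S for the k eigenvalues closest to a target, built here directly from a full eigendecomposition rather than by Jacobi-Davidson iterations. The matrix size, target, and block size are hypothetical choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 60))
target, k = 0.5 + 0.5j, 4                     # hypothetical target and block size

lam, X = np.linalg.eig(A)
idx = np.argsort(np.abs(lam - target))[:k]    # k eigenvalues nearest the target
Q, _ = np.linalg.qr(X[:, idx])                # orthonormal basis of the invariant subspace
S = Q.conj().T @ A @ Q                        # upper triangular up to round-off

print(np.linalg.norm(A @ Q - Q @ S))          # ~ machine precision: A Q = Q S
print(np.diag(S))                             # the eigenvalues nearest the target
print(lam[idx])
```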


BIT Numerical Mathematics | 1996

Jacobi-Davidson type methods for generalized eigenproblems and polynomial eigenproblems

Gerard L. G. Sleijpen; Albert Booten; Diederik R. Fokkema; Henk A. van der Vorst

In this paper we will show how the Jacobi-Davidson iterative method can be used to solve generalized eigenproblems. Ideas similar to those for the standard eigenproblem are used, but the projections that are required to reduce the given problem to a small, manageable size need more attention. We show that quadratic convergence can be achieved by proper choices of the projection operators. The advantage of our approach is that none of the involved operators needs to be inverted. It turns out that similar projections can be used for the iterative approximation of selected eigenvalues and eigenvectors of polynomial eigenvalue equations. This approach has already been used with great success for the solution of quadratic eigenproblems associated with acoustic problems.
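
For orientation, one commonly used set of ingredients looks as follows (a sketch in generic notation; the paper itself analyzes which projections yield quadratic convergence). With $u$ the current approximate eigenvector of the pencil, one works with

$$ Ax = \lambda Bx, \qquad \theta = \frac{u^{*}Au}{u^{*}Bu}, \qquad r = Au - \theta Bu, $$

and the search space is expanded with an approximate solution $t$ of a projected correction equation of the type

$$ \Bigl(I - \frac{w u^{*}}{u^{*} w}\Bigr)\,(A - \theta B)\,\Bigl(I - \frac{u w^{*}}{w^{*} u}\Bigr)\, t = -r, \qquad w = Bu, \quad t \perp w, $$

in which, as stated above, no inverse of A or B appears.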


SIAM Review | 2000

A Jacobi--Davidson Iteration Method for Linear Eigenvalue Problems

Gerard L. G. Sleijpen; Henk A. van der Vorst

In this paper we propose a new method for the iterative computation of a few of the extremal eigenvalues of a symmetric matrix and their associated eigenvectors. The method is based on an old and almost unknown method of Jacobi. Jacobi's approach, combined with Davidson's method, leads to a new method that has improved convergence properties and that may be used for general matrices. We also propose a variant of the new method that may be useful for the computation of nonextremal eigenvalues as well.
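
The heart of the method is its correction equation: with $u$ the current normalized approximate eigenvector, $\theta = u^{*}Au$ its Rayleigh quotient, and $r = Au - \theta u$ the residual, the search space is expanded with an (approximate) solution $t \perp u$ of

$$ (I - uu^{*})(A - \theta I)(I - uu^{*})\, t = -r . $$

Solving this projected equation only approximately, for instance with a few steps of a preconditioned iterative solver, is what gives the method its flexibility for general matrices.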


SIAM Journal on Matrix Analysis and Applications | 2005

Inexact Krylov Subspace Methods for Linear Systems

Jasper van den Eshof; Gerard L. G. Sleijpen

There is a class of linear problems for which the computation of the matrix-vector product is very expensive, since a time-consuming method is necessary to approximate it with some prescribed relative precision. In this paper we investigate the impact of approximately computed matrix-vector products on the convergence and attainable accuracy of several Krylov subspace solvers. We will argue that the sensitivity to perturbations is mainly determined by the underlying way the Krylov subspace is constructed and does not depend on the optimality properties of the particular method. The obtained insight is used to tune the precision of the matrix-vector product in every iteration step in such a way that an overall efficient process is obtained. Our analysis confirms the empirically found relaxation strategy of Bouras and Frayssé for the GMRES method proposed in [A Relaxation Strategy for Inexact Matrix-Vector Products for Krylov Methods, Technical Report TR/PA/00/15, CERFACS, France, 2000]. Furthermore, we give an improved version of a strategy for the conjugate gradient method of Bouras, Frayssé, and Giraud used in [A Relaxation Strategy for Inner-Outer Linear Solvers in Domain Decomposition Methods, Technical Report TR/PA/00/17, CERFACS, France, 2000].
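
The following toy sketch (not the analysis or the exact strategy from the papers cited) shows the relaxation idea in its simplest form: a conjugate gradient iteration in which the matrix-vector product is computed only up to a relative perturbation eta_k that is allowed to grow in proportion to eps / ||r_k||, so that the products may be computed ever more cheaply as the residual decreases. The test matrix, tolerances, and the cap on eta are hypothetical choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
A = np.diag(np.linspace(1.0, 10.0, n))        # simple SPD test matrix
b = rng.standard_normal(n)
nb = np.linalg.norm(b)
eps_target = 1e-10                            # desired relative residual accuracy

def inexact_matvec(x, eta):
    """Exact product A @ x plus a random perturbation of relative size eta."""
    y = A @ x
    e = rng.standard_normal(n)
    return y + eta * np.linalg.norm(y) * e / np.linalg.norm(e)

# Conjugate gradients in which the matvec accuracy is *relaxed* as the
# updated residual norm decreases: eta_k ~ eps_target * ||b|| / ||r_k||
# (capped here at 1e-2 to keep this toy iteration well behaved).
x = np.zeros(n)
r = b.copy()                                  # residual for x0 = 0
p = r.copy()
for k in range(500):
    eta = min(eps_target * nb / np.linalg.norm(r), 1e-2)
    Ap = inexact_matvec(p, eta)
    alpha = (r @ r) / (p @ Ap)
    x = x + alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < eps_target * nb:
        break
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new

print(np.linalg.norm(b - A @ x) / nb)         # true relative residual
```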


Numerical Algorithms | 1994

BiCGstab(l) and other hybrid Bi-CG methods

Gerard L. G. Sleijpen; H.A. van der Vorst; Diederik R. Fokkema

It is well-known that Bi-CG can be adapted so that the operations with A^T can be avoided, and hybrid methods can be constructed in which it is attempted to further improve the convergence behaviour. Examples of this are CGS, Bi-CGSTAB, and the more general BiCGstab(l) method. In this paper it is shown that BiCGstab(l) can be implemented in different ways. Each of the suggested approaches has its own advantages and disadvantages. Our implementations allow for combinations of Bi-CG with arbitrary polynomial methods. The choice for a specific implementation can also be made for reasons of numerical stability. This aspect receives much attention. Various effects have been illustrated by numerical examples.
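
BiCGstab(l) itself is not shipped with SciPy, but its l = 1 member, Bi-CGSTAB, is, and the snippet below shows the typical usage pattern of this family of transpose-free solvers on a small nonsymmetric sparse system. The test matrix and right-hand side are hypothetical.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

n = 1000
# Nonsymmetric tridiagonal test matrix (a crude convection-diffusion stand-in)
A = sp.diags([-1.3 * np.ones(n - 1), 2.0 * np.ones(n), -0.7 * np.ones(n - 1)],
             offsets=[-1, 0, 1], format='csr')
b = np.ones(n)

x, info = bicgstab(A, b)          # info == 0 signals successful convergence
print(info, np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```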


Numerical Algorithms | 1995

Maintaining convergence properties of BiCGstab methods in finite precision arithmetic

Gerard L. G. Sleijpen; Henk A. van der Vorst

It is well-known that Bi-CG can be adapted so that hybrid methods with computational complexity almost similar to Bi-CG can be constructed, in which it is attempted to further improve the convergence behavior. In this paper we will study the class of BiCGstab methods. In many applications, the speed of convergence of these methods appears to be determined mainly by the incorporated Bi-CG process, and the problem is that the Bi-CG iteration coefficients have to be determined from the BiCGstab process. We will focus our attention on the accuracy of these Bi-CG coefficients and on how rounding errors may affect the speed of convergence of the BiCGstab methods. We will propose a strategy for a more stable determination of the Bi-CG iteration coefficients, and by experiments we will show that this indeed may lead to faster convergence.
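
For concreteness, in the standard Bi-CGSTAB recurrences (in one common indexing) the Bi-CG coefficients are recovered from inner products with a single fixed shadow residual $\tilde r_0$ rather than from a genuine two-sided process:

$$ \rho_k = (\tilde r_0, r_k), \qquad \beta_k = \frac{\rho_k}{\rho_{k-1}}\cdot\frac{\alpha_{k-1}}{\omega_{k-1}}, \qquad \alpha_k = \frac{\rho_k}{(\tilde r_0, A p_k)} , $$

and it is the evaluation of precisely such inner products in finite precision arithmetic that determines how faithfully the underlying Bi-CG process is reproduced.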


Computing | 1996

Reliable updated residuals in hybrid Bi-CG methods

Gerard L. G. Sleijpen; H.A. van der Vorst

Many iterative methods for solving linear equations Ax = b aim for accurate approximations to x, and they do so by updating residuals iteratively. In finite precision arithmetic, these computed residuals may be inaccurate, that is, they may differ significantly from the (true) residuals that correspond to the computed approximations. In this paper we will propose variants on Neumaier's strategy, originally proposed for CGS, and explain its success. In particular, we will propose a more restrictive strategy for accumulating groups of updates for updating the residual and the approximation, and we will show that this may improve the accuracy significantly, while maintaining speed of convergence. This approach avoids restarts and allows for more reliable stopping criteria. We will discuss updating conditions and strategies that are efficient, lead to accurate residuals, and are easy to implement. For CGS and Bi-CG these strategies are particularly attractive, but they may also be used to improve Bi-CGSTAB, BiCGstab(l), as well as other methods.
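
The toy sketch below (not the paper's exact strategy) makes the distinction visible: it runs a plain Bi-CGSTAB iteration and, at the end, compares the recursively updated residual with the true residual b - A x. The gap between these two norms is what the grouped-update and replacement strategies discussed above are designed to keep small. The test matrix and stopping tolerance are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
A = np.diag(np.linspace(1.0, 100.0, n)) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

x = np.zeros(n)
r = b - A @ x                       # recursively updated residual from here on
r_hat = r.copy()                    # fixed shadow residual
rho = alpha = omega = 1.0
v = np.zeros(n)
p = np.zeros(n)
for k in range(10 * n):
    rho_new = r_hat @ r
    beta = (rho_new / rho) * (alpha / omega)
    p = r + beta * (p - omega * v)
    v = A @ p
    alpha = rho_new / (r_hat @ v)
    s = r - alpha * v
    t = A @ s
    omega = (t @ s) / (t @ t)
    x = x + alpha * p + omega * s
    r = s - omega * t               # updated, never recomputed from x
    rho = rho_new
    if np.linalg.norm(r) < 1e-12 * np.linalg.norm(b):
        break

print(np.linalg.norm(r))            # what the recurrence reports
print(np.linalg.norm(b - A @ x))    # the true residual of the returned x
```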


Journal of Computational and Applied Mathematics | 1996

Generalized conjugate gradient squared

Diederik R. Fokkema; Gerard L. G. Sleijpen; Henk A. van der Vorst

The Conjugate Gradient Squared (CGS) method is an iterative method for solving nonsymmetric linear systems of equations. However, during the iteration large residual norms may appear, which may lead to inaccurate approximate solutions or may even deteriorate the convergence rate. Instead of squaring the Bi-CG polynomial as in CGS, we propose to consider products of two nearby Bi-CG polynomials, which leads to generalized CGS methods, of which CGS is just a particular case. This approach allows the construction of methods that converge less irregularly than CGS and that improve on other convergence properties as well. Here we are interested in a property that has received less attention in the literature: we concentrate on retaining the excellent approximation qualities of CGS with respect to components of the solution in the direction of eigenvectors associated with extreme eigenvalues. This property seems to be important in connection with Newton's scheme for nonlinear equations: our numerical experiments show that the number of Newton steps may decrease significantly when a generalized CGS method is used as the linear solver for the Newton correction equations.
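
In polynomial terms the step taken here can be summarized as follows, with $\varphi_k$ the Bi-CG residual polynomial:

$$ r_k^{\mathrm{CGS}} = \varphi_k(A)^{2}\, r_0 \quad\longrightarrow\quad r_k = \widetilde{\varphi}_k(A)\,\varphi_k(A)\, r_0 , $$

where $\widetilde{\varphi}_k$ is the residual polynomial of a nearby Bi-CG process, and CGS itself is recovered by the choice $\widetilde{\varphi}_k = \varphi_k$.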


Magnetic Resonance in Medicine | 2012

Fast design of local N-gram-specific absorption rate-optimized radiofrequency pulses for parallel transmit systems.

Alessandro Sbrizzi; Hans Hoogduin; Jan J.W. Lagendijk; Peter R. Luijten; Gerard L. G. Sleijpen; Cornelis A.T. van den Berg

Designing multidimensional radiofrequency pulses for clinical application must take into account the local specific absorption rate (SAR), as controlling the global SAR does not guarantee suppression of hot spots. The maximum peak SAR, averaged over an N-gram cube (local NgSAR), must be kept under certain safety limits. Computing the SAR over a three-dimensional domain can require several minutes, and implementing this computation in a radiofrequency pulse design algorithm could prohibitively slow down the numerical process. In this article, a fast optimization algorithm is designed that acts on a limited number of control points, which are strategically selected locations from the entire domain. The selection is performed by comparing the largest eigenvalues and the corresponding eigenvectors of the matrices which locally describe the amount of tissue heating. The computational complexity is dramatically reduced. An additional critical step to accelerate the computations is to apply a multi-shift conjugate gradient algorithm. Two transmit array setups are studied: a two-channel 3 T birdcage body coil and a 12-channel 7 T transverse electromagnetic (TEM) head coil. In comparison with minimum-power radiofrequency pulses, it is shown that a reduction of 36.5% and 35%, respectively, in the local NgSAR can be achieved within short, clinically feasible computation times.
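
As a rough sketch of the control-point idea (with entirely synthetic data; the array sizes, the matrices Q, and the selection rule below are hypothetical stand-ins, not the paper's models): local SAR at a voxel is a quadratic form b^H Q b in the complex channel weights b, and candidate voxels can be ranked by the largest eigenvalue of their Q matrix so that only a small set of worst-case points needs to be monitored during pulse design.

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_voxels, n_keep = 8, 500, 20

# Synthetic Hermitian positive semidefinite "local SAR" matrices, one per voxel
G = (rng.standard_normal((n_voxels, n_channels, n_channels))
     + 1j * rng.standard_normal((n_voxels, n_channels, n_channels)))
Q = np.einsum('vij,vkj->vik', G, G.conj()) / n_channels      # Q_v = G_v G_v^H

# Rank voxels by the largest eigenvalue of Q_v and keep the top candidates
lam_max = np.linalg.eigvalsh(Q)[:, -1]
control = np.argsort(lam_max)[-n_keep:]

# For a given set of channel weights b, compare the peak SAR over the
# selected control points with the peak over all voxels
b = rng.standard_normal(n_channels) + 1j * rng.standard_normal(n_channels)
local_sar = np.real(np.einsum('i,vij,j->v', b.conj(), Q, b))
print(local_sar[control].max(), local_sar.max())
```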

Collaboration


Dive into Gerard L. G. Sleijpen's collaborations.

Top Co-Authors

Martin B. van Gijzen

Delft University of Technology

Zhaojun Bai

University of California
