Publications


Featured research published by Jörg Liesen.


SIAM Journal on Scientific Computing | 2005

Block-Diagonal and Constraint Preconditioners for Nonsymmetric Indefinite Linear Systems. Part I: Theory

Eric de Sturler; Jörg Liesen

We study block-diagonal preconditioners and an efficient variant of constraint preconditioners for general two-by-two block linear systems with zero (2,2)-block. We derive block-diagonal preconditioners from a splitting of the (1,1)-block of the matrix. From the resulting preconditioned system we derive a smaller, so-called related system that yields the solution of the original problem. Solving the related system corresponds to an efficient implementation of constraint preconditioning. We analyze the properties of both classes of preconditioned matrices, in particular their spectra. Using analytical results, we show that the related system matrix has the more favorable spectrum, which in many applications translates into faster convergence for Krylov subspace methods. We show that fast convergence depends mainly on the quality of the splitting, a topic for which a substantial body of theory exists. Our analysis also provides a number of new relations between block-diagonal preconditioners and constraint preconditioners. For constrained problems, solving the related system produces iterates that satisfy the constraints exactly, just as for systems with a constraint preconditioner. Finally, for the Lagrange multiplier formulation of a constrained optimization problem we show how scaling nonlinear constraints can dramatically improve the convergence for linear systems in a Newton iteration. Our theoretical results are confirmed by numerical experiments on a constrained optimization problem. We consider the general, nonsymmetric, nonsingular case. Our only additional requirement is the nonsingularity of the Schur-complement--type matrix derived from the splitting that defines the preconditioners. In particular, the (1,2)-block need not equal the transposed (2,1)-block, and the (1,1)-block might be indefinite or even singular. This is the first paper in a two-part sequence. In the second paper we will study the use of our preconditioners in a variety of applications.
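The setting above can be sketched numerically. The following is an illustrative toy, not the paper's construction: random data, the splitting of the (1,1)-block taken trivially as the block itself, and SciPy's GMRES applied with the resulting block-diagonal preconditioner (with this exact splitting the preconditioned matrix has only three distinct eigenvalues, so convergence is very fast):

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(0)
n, m = 40, 10

# General nonsymmetric two-by-two block system with zero (2,2)-block:
#   K = [A  B]
#       [C  0]        (C need not equal B.T; illustrative random data)
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
K = np.block([[A, B], [C, np.zeros((m, m))]])
b = rng.standard_normal(n + m)

# Block-diagonal preconditioner from a splitting of the (1,1)-block;
# here the splitting is simply A itself.  S = -C A^{-1} B is the
# Schur-complement-type matrix, assumed nonsingular.
S = -C @ np.linalg.solve(A, B)
P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), -S]])
Minv = LinearOperator((n + m, n + m), matvec=lambda v: np.linalg.solve(P, v))

x, info = gmres(K, b, M=Minv, atol=1e-10)
print(info, np.linalg.norm(K @ x - b))  # info == 0 means converged
```

With this ideal splitting the eigenvalues of the preconditioned matrix are 1 and (1 ± sqrt(5))/2, so GMRES needs at most three steps; the quality of a practical splitting determines how far one is from this ideal case.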


Archive | 2012

Krylov Subspace Methods: Principles and Analysis

Jörg Liesen; Zdenek Strakos

1. Introduction
2. Krylov subspace methods
3. Matching moments and model reduction view
4. Short recurrences for generating orthogonal Krylov subspace bases
5. Cost of computations using Krylov subspace methods


SIAM Journal on Matrix Analysis and Applications | 2013

A Framework for Deflated and Augmented Krylov Subspace Methods

André Gaul; Martin H. Gutknecht; Jörg Liesen; Reinhard Nabben

We consider deflation and augmentation techniques for accelerating the convergence of Krylov subspace methods for the solution of nonsingular linear algebraic systems. Despite some formal similarity, the two techniques are conceptually different from preconditioning. Deflation (in the sense the term is used here) “removes” certain parts from the operator making it singular, while augmentation adds a subspace to the Krylov subspace (often the one that is generated by the singular operator); in contrast, preconditioning changes the spectrum of the operator without making it singular. Deflation and augmentation have been used in a variety of methods and settings. Typically, deflation is combined with augmentation to compensate for the singularity of the operator, but both techniques can be applied separately. We introduce a framework of Krylov subspace methods that satisfy a Galerkin condition. It includes the families of orthogonal residual and minimal residual methods. We show that in this framework augmen...
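The sense in which deflation "removes" parts of the operator and makes it singular can be seen on a small SPD example. This is a hypothetical sketch, not from the paper: it uses the exact eigenvectors of the smallest eigenvalues as the deflation space (in practice only approximations are available):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 30, 3

# SPD test matrix with a few very small eigenvalues.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.concatenate([[1e-4, 1e-3, 1e-2], np.linspace(1.0, 2.0, n - k)])
A = Q @ np.diag(eigs) @ Q.T

# Deflation space W: here the exact eigenvectors of the k smallest
# eigenvalues (an idealized choice for illustration).
W = Q[:, :k]

# Deflation projector: P A "removes" the deflated part, so P A is singular.
P = np.eye(n) - A @ W @ np.linalg.solve(W.T @ A @ W, W.T)
PA = P @ A  # symmetric here, since A is symmetric

vals = np.sort(np.linalg.eigvalsh((PA + PA.T) / 2))
# k eigenvalues are moved to zero; the well-conditioned part survives.
print(vals[: k + 1])
```

A Krylov method applied to the singular operator P A then only "sees" the well-conditioned remaining spectrum, which is exactly where augmentation by the deflated subspace comes in to recover the full solution.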


SIAM Journal on Scientific Computing | 2001

Least Squares Residuals and Minimal Residual Methods

Jörg Liesen; Miroslav Rozlozník; Zdenek Strakos

We study Krylov subspace methods for solving unsymmetric linear algebraic systems that minimize the norm of the residual at each step (minimal residual (MR) methods). MR methods are often formulated in terms of a sequence of least squares (LS) problems of increasing dimension. We present several basic identities and bounds for the LS residual. These results are interesting in the general context of solving LS problems. When applied to MR methods, they show that the size of the MR residual is strongly related to the conditioning of different bases of the same Krylov subspace. Using different bases is useful in theory because relating convergence to the characteristics of different bases offers new insight into the behavior of MR methods. Different bases also lead to different implementations which are mathematically equivalent but can differ numerically. Our theoretical results are used for a finite precision analysis of implementations of the GMRES method [Y. Saad and M. H. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856--869]. We explain that the choice of the basis is fundamental for the numerical stability of the implementation. As demonstrated in the case of Simpler GMRES [H. F. Walker and L. Zhou, Numer. Linear Algebra Appl., 1 (1994), pp. 571--581], the best orthogonalization technique used for computing the basis does not compensate for the loss of accuracy due to an inappropriate choice of the basis. In particular, we prove that Simpler GMRES is inherently less numerically stable than the Classical GMRES implementation due to Saad and Schultz [SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856--869].
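The MR/LS connection described above is the textbook GMRES formulation: build an Arnoldi basis of the Krylov subspace, then minimize the residual norm via a small least squares problem with the Hessenberg matrix. A minimal dense sketch (modified Gram-Schmidt, no restarting; this illustrates the formulation only, not the paper's finite precision analysis):

```python
import numpy as np

def gmres_ls(A, b, m):
    """Plain GMRES: Arnoldi basis + least squares on the Hessenberg matrix."""
    n = len(b)
    beta = np.linalg.norm(b)
    V = [b / beta]                       # orthonormal Krylov basis vectors
    H = np.zeros((m + 1, m))             # extended Hessenberg matrix
    k = m
    for j in range(m):
        w = A @ V[j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = V[i] @ w
            w -= H[i, j] * V[i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12 * beta:   # happy breakdown: solution found
            k = j + 1
            break
        V.append(w / H[j + 1, j])
    # MR step: minimize || beta*e1 - H_bar y ||, then x = V_k y.
    Vk = np.column_stack(V[:k])
    Hbar = H[: k + 1, :k]
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(Hbar, e1, rcond=None)
    return Vk @ y

rng = np.random.default_rng(2)
n = 25
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # unsymmetric
b = rng.standard_normal(n)
x = gmres_ls(A, b, m=n)   # full n steps: exact in exact arithmetic
print(np.linalg.norm(A @ x - b))
```

The norm of the small LS residual equals the norm of the GMRES residual; the paper's point is that the conditioning of the chosen Krylov basis (here the orthonormal Arnoldi basis) governs how well this identity survives in finite precision.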


Numerische Mathematik | 2008

On nonsymmetric saddle point matrices that allow conjugate gradient iterations

Jörg Liesen; Beresford N. Parlett

Linear systems in saddle point form are usually highly indefinite, which often slows down iterative solvers such as Krylov subspace methods. It has been noted by several authors that negating the second block row of a symmetric indefinite saddle point matrix leads to a nonsymmetric matrix ...
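The negation trick can be checked numerically on a small random example; this sketch assumes the standard setting of an SPD (1,1)-block A and a full-rank off-diagonal block B, under which the nonsymmetric formulation is known to have its spectrum in the right half plane:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 20, 8

# Symmetric saddle point matrix  K = [A  B; B.T  0]  with A SPD.
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)
B = rng.standard_normal((n, m))          # full rank (generically)
K = np.block([[A, B], [B.T, np.zeros((m, m))]])

# K is symmetric but highly indefinite: eigenvalues of both signs.
ev_K = np.linalg.eigvalsh(K)
print(ev_K.min() < 0 < ev_K.max())

# Negating the second block row gives a nonsymmetric matrix J whose
# spectrum lies entirely in the right half plane.
J = np.block([[A, B], [-B.T, np.zeros((m, m))]])
ev_J = np.linalg.eigvals(J)
print(ev_J.real.min() > 0)
```

A spectrum in the right half plane is what opens the door to conjugate gradient-like iterations for the nonsymmetric formulation, which is the question the paper pursues.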


SIAM Journal on Scientific Computing | 2005

GMRES Convergence Analysis for a Convection-Diffusion Model Problem

Jörg Liesen; Zdenek Strakos


SIAM Journal on Matrix Analysis and Applications | 2005

Convergence of GMRES for Tridiagonal Toeplitz Matrices

Jörg Liesen; Zdenek Strakos



Siam Review | 2008

On Optimal Short Recurrences for Generating Orthogonal Krylov Subspace Bases

Jörg Liesen; Zdenek Strakos


Numerische Mathematik | 2000

The conformal ‘bratwurst’ maps and associated Faber polynomials

Tino Koch; Jörg Liesen



SIAM Journal on Numerical Analysis | 2008

The Faber-Manteuffel Theorem for Linear Operators

Vance Faber; Jörg Liesen; Petr Tichý

Collaboration


Dive into Jörg Liesen's collaborations.

Top Co-Authors:

Volker Mehrmann (Technical University of Berlin)
Olivier Sète (Technical University of Berlin)
Petr Tichý (Academy of Sciences of the Czech Republic)
Zdeněk Strakoš (Academy of Sciences of the Czech Republic; Charles University in Prague)
Robert Luce (Technical University of Berlin)
Reinhard Nabben (Technical University of Berlin)
Vance Faber (Los Alamos National Laboratory)