
Publication


Featured research published by Oliver G. Ernst.


SIAM Journal on Scientific Computing | 2001

A Multigrid Method Enhanced by Krylov Subspace Iteration for Discrete Helmholtz Equations

Howard C. Elman; Oliver G. Ernst; Dianne P. O'Leary

Standard multigrid algorithms have proven ineffective for the solution of discretizations of Helmholtz equations. In this work we modify the standard algorithm by adding GMRES iterations at coarse levels and as an outer iteration. We demonstrate the algorithm's effectiveness through theoretical analysis of a model problem and experimental results. In particular, we show that the combined use of GMRES as a smoother and as an outer iteration produces an algorithm whose performance depends relatively mildly on the wave number and is robust for normalized wave numbers as large as 200. For fixed wave numbers, it displays grid-independent convergence rates and has costs proportional to the number of unknowns.


Archive | 2012

Why it is Difficult to Solve Helmholtz Problems with Classical Iterative Methods

Oliver G. Ernst; Martin J. Gander

In contrast to the positive definite Helmholtz equation, the deceptively similar-looking indefinite Helmholtz equation is difficult to solve using classical iterative methods. Simply using a Krylov method is much less effective, especially when the wave number in the Helmholtz operator becomes large, and algebraic preconditioners such as incomplete LU factorizations do not remedy the situation either. Even more powerful preconditioners such as classical domain decomposition and multigrid methods fail to lead to a convergent method, and often behave differently from their usual behavior for positive definite problems. For example, increasing the overlap in a classical Schwarz method degrades its performance, as does increasing the number of smoothing steps in multigrid. The purpose of this review paper is to explain why classical iterative methods fail to be effective for Helmholtz problems, and to show different avenues that have been taken to address this difficulty.
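The wave-number dependence the review discusses can be observed in a few lines. The sketch below (an illustration, not taken from the paper) applies full GMRES to a standard 1D finite-difference discretization of -u'' - k^2 u = f with homogeneous Dirichlet conditions; the grid size and wave numbers are arbitrary choices, and the iteration count for a fixed tolerance grows as k grows.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import gmres

def helmholtz_1d(n, k):
    """Standard second-order finite-difference 1D Helmholtz matrix on (0,1)."""
    h = 1.0 / (n + 1)
    lap = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
    return (lap - k**2 * identity(n)).tocsr()

def gmres_iterations(A, b):
    """Count inner GMRES iterations to the solver's default tolerance."""
    residuals = []
    gmres(A, b, restart=A.shape[0], maxiter=1,
          callback=residuals.append, callback_type="pr_norm")
    return len(residuals)

n = 400
b = np.ones(n)
for k in (5.0, 20.0, 40.0):
    print(f"k = {k:5.1f}: {gmres_iterations(helmholtz_1d(n, k), b)} GMRES iterations")
```

Here `restart=n` with `maxiter=1` makes the call behave like unrestarted GMRES, so the count reflects the intrinsic difficulty of the indefinite operator rather than restart effects.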


SIAM Journal on Numerical Analysis | 2006

A Restarted Krylov Subspace Method for the Evaluation of Matrix Functions

Michael Eiermann; Oliver G. Ernst

We show how the Arnoldi algorithm for approximating a function of a matrix times a vector can be restarted in a manner analogous to restarted Krylov subspace methods for solving linear systems of equations. The resulting restarted algorithm reduces to other known algorithms for the reciprocal and the exponential functions. We further show that the restarted algorithm inherits the superlinear convergence property of its unrestarted counterpart for entire functions and present the results of numerical experiments.
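The unrestarted Arnoldi approximation that this paper's method restarts can be sketched directly (a generic textbook scheme, shown for f = exp on an assumed small diagonal test matrix; the restarting machinery is the paper's contribution and is not reproduced here):

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_fAb(A, b, m):
    """Approximate f(A) b for f = exp by beta * V_m f(H_m) e_1 (Arnoldi)."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # happy breakdown: Krylov space is A-invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)

rng = np.random.default_rng(0)
A = -np.diag(np.linspace(1.0, 10.0, 60))    # assumed small test matrix
b = rng.standard_normal(60)
err = np.linalg.norm(arnoldi_fAb(A, b, 25) - expm(A) @ b) / np.linalg.norm(expm(A) @ b)
print("relative error of 25-step Arnoldi approximation:", err)
```

The practical obstacle the paper addresses is visible in the code: all m basis vectors in V must be stored to form the approximation, which is what restarting avoids.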


Acta Numerica | 2001

Geometric aspects of the theory of Krylov subspace methods

Michael Eiermann; Oliver G. Ernst

The development of Krylov subspace methods for the solution of operator equations has shown that two basic construction principles underlie the most commonly used algorithms: the orthogonal residual (OR) and minimal residual (MR) approaches. It is shown that these can both be formulated as techniques for solving an approximation problem on a sequence of nested subspaces of a Hilbert space, an abstract problem not necessarily related to an operator equation. Essentially all Krylov subspace algorithms result when these subspaces form a Krylov sequence. The well-known relations among the iterates and residuals of MR/OR pairs are shown to hold also in this rather general setting. We further show that a common error analysis for these methods involving the canonical angles between subspaces allows many of the known residual and error bounds to be derived in a simple and consistent manner. An application of this analysis to compact perturbations of the identity shows that MR/OR pairs of Krylov subspace methods converge q-superlinearly when applied to such operator equations.


Journal of Computational and Applied Mathematics | 2000

Analysis of acceleration strategies for restarted minimal residual methods

Michael Eiermann; Oliver G. Ernst; Olaf Schneider

We provide an overview of existing strategies which compensate for the deterioration of convergence of minimum residual (MR) Krylov subspace methods due to restarting. We evaluate the popular practice of using nearly invariant subspaces to either augment Krylov subspaces or to construct preconditioners which invert on these subspaces. In the case where these spaces are exactly invariant, the augmentation approach is shown to be superior. We further show how a strategy recently introduced by de Sturler for truncating the approximation space of an MR method can be interpreted as a controlled loosening of the condition for global MR approximation based on the canonical angles between subspaces. For the special case of Krylov subspace methods, we give a concise derivation of the role of Ritz and harmonic Ritz values and vectors in the polynomial description of Krylov spaces as well as of the use of the implicitly updated Arnoldi method for manipulating Krylov spaces.
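The Ritz and harmonic Ritz values discussed here can be extracted from an Arnoldi decomposition A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T with standard formulas (a generic sketch using textbook identities, not the paper's code; the test matrix is an assumption): the Ritz values are the eigenvalues of H_m, and the harmonic Ritz values are the eigenvalues of H_m + h_{m+1,m}^2 (H_m^{-T} e_m) e_m^T.

```python
import numpy as np

def arnoldi(A, b, m):
    """m steps of Arnoldi; returns the basis V and Hessenberg matrix H."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(1)
A = np.diag(np.arange(1.0, 51.0))            # assumed SPD test matrix, spectrum {1,...,50}
b = rng.standard_normal(50)
m = 10
V, H = arnoldi(A, b, m)
Hm, h = H[:m, :m], H[m, m - 1]
em = np.zeros(m); em[-1] = 1.0
ritz = np.sort(np.linalg.eigvals(Hm).real)
harmonic = np.sort(np.linalg.eigvals(
    Hm + h**2 * np.outer(np.linalg.solve(Hm.T, em), em)).real)
print("Ritz values:  ", np.round(ritz, 3))
print("harmonic Ritz:", np.round(harmonic, 3))
```

For a symmetric positive definite matrix both sets lie inside the spectral interval; the harmonic Ritz values are the quantities favored for interior or near-singular regions, as the abstract's discussion of restarted MR methods suggests.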


SIAM Journal on Matrix Analysis and Applications | 2010

Stochastic Galerkin Matrices

Oliver G. Ernst; Elisabeth Ullmann

We investigate the structural, spectral, and sparsity properties of Stochastic Galerkin matrices as they arise in the discretization of linear differential equations with random coefficient functions. These matrices are characterized as the Galerkin representation of polynomial multiplication operators. In particular, it is shown that the global Galerkin matrix associated with complete polynomials cannot be diagonalized in the stochastically linear case.
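For a single standard Gaussian variable the characterization as a polynomial multiplication operator can be checked directly (a small illustration using standard Wiener-Hermite facts, not code from the paper): in the orthonormal probabilists' Hermite basis h_n, the Galerkin matrix G[i, j] = E[xi * h_i(xi) * h_j(xi)] of multiplication by xi is the tridiagonal Jacobi matrix with off-diagonal entries sqrt(1), sqrt(2), ...

```python
import math
import numpy as np

p = 6                                    # polynomial degree cutoff (arbitrary choice)

# Gauss-Hermite nodes/weights for the weight e^{-x^2}, rescaled so the rule
# integrates against the standard normal density.
t, w = np.polynomial.hermite.hermgauss(60)
x, w = np.sqrt(2.0) * t, w / np.sqrt(np.pi)

# Probabilists' Hermite polynomials via He_{n+1}(x) = x He_n(x) - n He_{n-1}(x),
# then normalized by sqrt(n!) to an orthonormal family.
Hvals = np.zeros((p + 1, len(x)))
Hvals[0] = 1.0
Hvals[1] = x
for n in range(1, p):
    Hvals[n + 1] = x * Hvals[n] - n * Hvals[n - 1]
for n in range(p + 1):
    Hvals[n] /= math.sqrt(math.factorial(n))

# Galerkin matrix of multiplication by xi, assembled by quadrature.
G = np.einsum("k,ik,jk->ij", w * x, Hvals, Hvals)
J = np.diag(np.sqrt(np.arange(1.0, p + 1)), 1)    # expected Jacobi structure
print(np.allclose(G, J + J.T, atol=1e-10))
```

The tridiagonal structure reflects the three-term recurrence of the orthogonal polynomials; for several random variables the global Galerkin matrix is built from Kronecker products of such Jacobi matrices.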


SIAM Journal on Scientific Computing | 2008

Efficient Solvers for a Linear Stochastic Galerkin Mixed Formulation of Diffusion Problems with Random Data

Oliver G. Ernst; Catherine E. Powell; David J. Silvester; Elisabeth Ullmann

We introduce a stochastic Galerkin mixed formulation of the steady-state diffusion equation and focus on the efficient iterative solution of the saddle-point systems obtained by combining standard finite element discretizations with two distinct types of stochastic basis functions. So-called mean-based preconditioners, based on fast solvers for scalar diffusion problems, are introduced for use with the minimum residual method. We derive eigenvalue bounds for the preconditioned system matrices and report on the efficiency of the chosen preconditioning schemes with respect to all the discretization parameters.


SIAM Journal on Matrix Analysis and Applications | 2000

Residual-Minimizing Krylov Subspace Methods for Stabilized Discretizations of Convection-Diffusion Equations

Oliver G. Ernst

We discuss the behavior of the minimal residual method applied to stabilized discretizations of one- and two-dimensional model problems for the stationary convection-diffusion equation. In the one-dimensional case, it is shown that eigenvalue information for estimating the convergence rate of the minimal residual method is highly misleading due to the strong nonnormality of these operators for large grid Peclet numbers. It is also shown that the field of values is a more reliable tool for assessing the convergence rate. In the two-dimensional model problems considered, we observe two distinct phases in the convergence of the iterative method: the first determined by the field of values and the second by the spectrum. We conjecture that the first phase lasts as long as the longest streamline takes to traverse the grid with the flow.
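The gap between eigenvalue information and the field of values can be made concrete (an assumed model setup, not the paper's exact experiments): central differences for -eps*u'' + u' on (0,1) with Dirichlet conditions give the tridiagonal Toeplitz matrix tridiag(a, b, c) with a = -eps/h^2 - 1/(2h), b = 2*eps/h^2, c = -eps/h^2 + 1/(2h), whose eigenvalues are known in closed form as b + 2*sqrt(a*c)*cos(j*pi*h). For grid Peclet number h/(2*eps) just below 1 the eigenvalues are real and stay far from the origin, while the leftmost point of the field of values, the smallest eigenvalue of the symmetric part (A + A^T)/2 = eps*D2, sits close to 0. (Computing the eigenvalues numerically is itself unreliable here, precisely because the matrix is so nonnormal, so the closed forms are used.)

```python
import numpy as np

n, eps = 100, 0.005                      # assumed grid size and diffusion parameter
h = 1.0 / (n + 1)
a = -eps / h**2 - 1.0 / (2.0 * h)
b = 2.0 * eps / h**2
c = -eps / h**2 + 1.0 / (2.0 * h)

j = np.arange(1, n + 1)
eigs = b + 2.0 * np.sqrt(a * c) * np.cos(j * np.pi * h)        # real since a*c > 0
sym_part_eigs = b - 2.0 * (eps / h**2) * np.cos(j * np.pi * h)  # spectrum of eps*D2

print(f"grid Peclet number       : {h / (2.0 * eps):.3f}")
print(f"min eigenvalue of A      : {eigs.min():.3f}")
print(f"leftmost field of values : {sym_part_eigs.min():.5f}")
```

An eigenvalue-based convergence estimate would predict fast minimal residual convergence from the well-separated spectrum, while the field of values, reaching near the origin, correctly signals the slow initial phase described above.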


SIAM Journal on Matrix Analysis and Applications | 2011

Deflated Restarting for Matrix Functions

Michael Eiermann; Oliver G. Ernst; Stefan Güttel

We investigate an acceleration technique for restarted Krylov subspace methods for computing the action of a function of a large sparse matrix on a vector. Its effect is to ultimately deflate a specific invariant subspace of the matrix which most impedes the convergence of the restarted approximation process. An approximation to the subspace to be deflated is successively refined in the course of the underlying restarted Arnoldi process by extracting Ritz vectors and using those closest to the spectral region of interest as exact shifts. The approximation is constructed with the help of a generalization of Krylov decompositions to linearly dependent vectors. A description of the restarted process as a successive interpolation scheme at Ritz values is given in which the exact shifts are replaced with improved approximations of eigenvalues in each restart cycle. Numerical experiments demonstrate the efficacy of the approach.


SIAM Journal on Scientific Computing | 2012

Efficient Iterative Solvers for Stochastic Galerkin Discretizations of Log-Transformed Random Diffusion Problems

Elisabeth Ullmann; Howard C. Elman; Oliver G. Ernst

We consider the numerical solution of a steady-state diffusion problem where the diffusion coefficient is the exponential of a random field. The standard stochastic Galerkin formulation of this problem is computationally demanding because of the nonlinear structure of its uncertain component. We consider a reformulated version of this problem as a stochastic convection-diffusion problem with random convective velocity that depends linearly on a fixed number of independent truncated Gaussian random variables. The associated Galerkin matrix is nonsymmetric but sparse and allows for fast matrix-vector multiplications with optimal complexity. We construct and analyze two block-diagonal preconditioners for this Galerkin matrix for use with Krylov subspace methods such as the generalized minimal residual method. We test the efficiency of the proposed preconditioning approaches and compare the iterative solver performance for a model problem posed in both diffusion and convection-diffusion formulations.

Collaboration


Dive into Oliver G. Ernst's collaborations.

Top Co-Authors

Michael Eiermann
Freiberg University of Mining and Technology

Björn Sprungk
Chemnitz University of Technology

Elisabeth Ullmann
Freiberg University of Mining and Technology

Ralph-Uwe Börner
Freiberg University of Mining and Technology

Klaus Spitzer
Freiberg University of Mining and Technology

Martin Afanasjew
Freiberg University of Mining and Technology

Ingolf Busch
Freiberg University of Mining and Technology

Jens Seidel
Chemnitz University of Technology