Stefan Güttel
University of Manchester
Publication
Featured research published by Stefan Güttel.
SIAM Review | 2013
Pedro Gonnet; Stefan Güttel; Lloyd N. Trefethen
Padé approximation is considered from the point of view of robust methods of numerical linear algebra, in particular, the singular value decomposition. This leads to an algorithm for practical computation that bypasses most problems of solution of nearly singular systems and spurious pole-zero pairs caused by rounding errors, for which a MATLAB code is provided. The success of this algorithm suggests that there might be variants of Padé approximation that are pointwise convergent as the degrees of the numerator and denominator increase to ∞, unlike traditional Padé approximants, which converge only in measure or capacity.
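
As a rough illustration of the SVD-based idea in this abstract, below is a minimal Python sketch (not the authors' MATLAB code): the denominator of a type (m, n) Padé approximant is taken as a null vector of a Toeplitz block of Taylor coefficients, while the degree-reduction and rescaling steps that make the published algorithm robust are omitted. The function name and test data are illustrative only.

```python
import numpy as np

def pade_svd(c, m, n):
    """Type (m, n) Pade approximant p/q from Taylor coefficients c[0..m+n].

    Simplified sketch of the SVD-based idea: the denominator is taken as a
    null vector of the lower Toeplitz block of coefficients, which avoids
    solving a nearly singular linear system directly.  The degree-reduction
    and rescaling loop of the published robust algorithm is omitted."""
    c = np.asarray(c, dtype=float)
    # Toeplitz matrix Z with Z[i, j] = c[i - j]; row i collects the x^i
    # coefficient of f(x) * q(x).
    Z = np.zeros((m + n + 1, n + 1))
    for i in range(m + n + 1):
        for j in range(min(i, n) + 1):
            Z[i, j] = c[i - j]
    # Denominator: right singular vector of the lower block (rows m+1..m+n)
    # belonging to its smallest singular value.
    _, _, Vt = np.linalg.svd(Z[m + 1:, :])
    b = Vt[-1, :]
    # Numerator from the upper block; normalize so that q(0) = 1
    # (assumes b[0] != 0).
    a = Z[:m + 1, :] @ b
    return a / b[0], b / b[0]

# Example: the type (2, 2) approximant of exp(x) from its Taylor coefficients.
p, q = pade_svd([1, 1, 1/2, 1/6, 1/24], 2, 2)
print("numerator  ", p)     # expect [1, 1/2, 1/12]
print("denominator", q)     # expect [1, -1/2, 1/12]
```

For exp(x) this reproduces the classical (2, 2) approximant with numerator 1 + x/2 + x^2/12 and denominator 1 - x/2 + x^2/12.
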
SIAM Journal on Scientific Computing | 2013
Martin J. Gander; Stefan Güttel
A novel parallel algorithm for the integration of linear initial-value problems is proposed. This algorithm is based on the simple observation that homogeneous problems can typically be integrated much faster than inhomogeneous problems. An overlapping time-domain decomposition is utilized to obtain decoupled inhomogeneous and homogeneous subproblems, and a near-optimal Krylov method is used for the fast exponential integration of the homogeneous subproblems. We present an error analysis and discuss the parallel scaling of our algorithm. The efficiency of this approach is demonstrated with numerical examples.
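
A toy sketch of the time-domain splitting described above, under simplifying assumptions: the inhomogeneous subproblems are integrated with SciPy's solve_ivp, the homogeneous propagations use dense expm rather than the near-optimal Krylov exponential integrator of the paper, and everything runs serially. The test problem and all parameters are made up.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Hypothetical test problem: u' = A u + g(t), u(0) = u0, on [0, T].
rng = np.random.default_rng(0)
N = 50
A = -np.eye(N) + 0.1 * rng.standard_normal((N, N))   # mildly nonnormal, stable
u0 = rng.standard_normal(N)
g = lambda t: np.cos(t) * np.ones(N)
T, J = 2.0, 4                                        # final time, number of subintervals
grid = np.linspace(0.0, T, J + 1)

# Inhomogeneous subproblems: zero initial value on each subinterval.
# These are mutually independent and would run in parallel.
def inhomogeneous_piece(t0, t1):
    sol = solve_ivp(lambda t, u: A @ u + g(t), (t0, t1), np.zeros(N),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

v_end = [inhomogeneous_piece(grid[j], grid[j + 1]) for j in range(J)]

# Homogeneous (exponential) propagation of the initial value and of every
# subinterval result up to the final time; this is the step the paper
# accelerates with a near-optimal Krylov method instead of dense expm.
uT = expm(T * A) @ u0
for j in range(J):
    uT += expm((T - grid[j + 1]) * A) @ v_end[j]

# Reference: integrate the full problem in one sweep.
ref = solve_ivp(lambda t, u: A @ u + g(t), (0.0, T), u0,
                rtol=1e-10, atol=1e-12).y[:, -1]
print("relative error:", np.linalg.norm(uT - ref) / np.linalg.norm(ref))
```

By linearity, u(T) equals the homogeneous propagation of u(0) plus the sum of the homogeneously propagated endpoint values of the zero-initial-value subinterval solutions, which is what the final loop accumulates.
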
SIAM Journal on Scientific Computing | 2014
Stefan Güttel; Roel Van Beeumen; Karl Meerbergen; Wim Michiels
A new rational Krylov method for the efficient solution of nonlinear eigenvalue problems is proposed. This iterative method, called fully rational Krylov method for nonlinear eigenvalue problems (abbreviated as NLEIGS), is based on linear rational interpolation and generalizes the Newton rational Krylov method proposed in [R. Van Beeumen, K. Meerbergen, and W. Michiels, SIAM J. Sci. Comput., 35 (2013), pp. A327-A350]. NLEIGS utilizes a dynamically constructed rational interpolant of the nonlinear operator and a new companion-type linearization for obtaining a generalized eigenvalue problem with special structure. This structure is particularly suited for the rational Krylov method. A new approach for the computation of rational divided differences using matrix functions is presented. It is shown that NLEIGS has a computational cost comparable to the Newton rational Krylov method but converges more reliably, in particular, if the nonlinear operator has singularities near the target set. Moreover, NLEIGS implements an automatic scaling procedure which makes it work robustly, independently of the location and shape of the target set, and it also features low-rank approximation techniques for increased computational efficiency. Small- and large-scale numerical examples are included.
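
The following sketch only illustrates the interpolate-then-linearize idea behind the method: a nonlinear matrix-valued function is replaced by a polynomial interpolant on a target interval, and the resulting matrix polynomial is turned into a companion-type pencil that a standard dense eigensolver can handle. It is emphatically not NLEIGS, which uses rational interpolation at Leja-Bagby-type nodes, a structured linearization, and a rational Krylov solver; the delay-type test problem and all names here are invented for illustration.

```python
import numpy as np
from scipy.linalg import eig

def interpolate_and_linearize(T, n, interval, degree):
    """Approximate the nonlinear eigenproblem T(lam) x = 0 on a real target
    interval by (1) interpolating T entrywise with a matrix polynomial at
    Chebyshev points and (2) solving a companion-type linearization."""
    a, b = interval
    k = np.arange(degree + 1)
    nodes = (a + b) / 2 + (b - a) / 2 * np.cos(np.pi * (2 * k + 1) / (2 * degree + 2))
    samples = np.array([T(lam).ravel() for lam in nodes])            # (d+1, n*n)
    coef = np.polynomial.polynomial.polyfit(nodes, samples, degree)  # C_0 .. C_d stacked
    C = [coef[j].reshape(n, n) for j in range(degree + 1)]

    d = degree
    # First companion form of P(lam) = sum_j lam^j C_j:
    # the pencil (A, B) satisfies A z = lam B z with z = [lam^(d-1) x; ...; lam x; x].
    A = np.zeros((d * n, d * n))
    B = np.eye(d * n)
    A[:n, :] = -np.hstack([C[j] for j in range(d - 1, -1, -1)])
    A[n:, :-n] = np.eye((d - 1) * n)
    B[:n, :n] = C[d]
    evals, evecs = eig(A, B)
    return evals, evecs[-n:, :]          # the trailing block carries the eigenvector x

# Hypothetical delay-type test problem T(lam) = -lam*I + A0 + A1*exp(-lam).
rng = np.random.default_rng(1)
n = 5
A0 = np.diag([-1.5, -0.5, 0.0, 0.8, 1.5])
A1 = 0.1 * rng.standard_normal((n, n))
T = lambda lam: -lam * np.eye(n) + A0 + A1 * np.exp(-lam)

evals, X = interpolate_and_linearize(T, n, interval=(-2.0, 2.0), degree=12)
target = (np.abs(evals.real) < 2.0) & (np.abs(evals.imag) < 0.5)
for lam, x in zip(evals[target], X[:, target].T):
    res = np.linalg.norm(T(lam) @ x) / np.linalg.norm(x)
    print(f"lambda = {lam:.4f}   residual = {res:.2e}")
```
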
SIAM Journal on Scientific Computing | 2015
Stefan Güttel; Eric Polizzi; Ping Tak Peter Tang; Gautier Viaud
The FEAST method for solving large sparse eigenproblems is equivalent to subspace iteration with an approximate spectral projector and implicit orthogonalization. This relation allows the convergence of the method to be characterized in terms of the error of a certain rational approximant to an indicator function. We propose improved rational approximants leading to FEAST variants with faster convergence, in particular when using rational approximants based on the work of Zolotarev. Numerical experiments demonstrate the possible computational savings, especially for pencils whose eigenvalues are not well separated and when the dimension of the search space is only slightly larger than the number of wanted eigenvalues. The new approach improves both convergence robustness and load balancing when FEAST runs on multiple search intervals in parallel.
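
A small sketch of the viewpoint taken in the abstract: subspace iteration in which the spectral projector onto the search interval is approximated by a rational filter, here built from a simple contour quadrature rule. The Zolotarev-based filters studied in the paper would replace the poles and weights; everything else (names, sizes, parameters) is illustrative.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, qr, eigh

def feast_like(A, interval, m0=10, n_quad=16, n_iter=5, seed=0):
    """Subspace iteration with an approximate spectral projector: the exact
    projector onto the eigenspace for eigenvalues in `interval` is replaced
    by a rational filter built from a quadrature rule on a circular contour.
    The paper studies better filters (e.g. Zolotarev-based ones); this
    sketch uses a plain trapezoidal rule."""
    n = A.shape[0]
    a, b = interval
    c, r = (a + b) / 2, (b - a) / 2
    theta = np.pi * (2 * np.arange(n_quad) + 1) / n_quad   # nodes on the circle
    z = c + r * np.exp(1j * theta)                         # quadrature poles
    w = r * np.exp(1j * theta) / n_quad                    # weights (1/(2*pi*i) absorbed)

    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, m0))
    # One LU factorization per pole; the solves for different poles (and for
    # different search intervals) are independent, which is where the
    # parallelism and the load-balancing issue of the abstract come from.
    factors = [lu_factor(zk * np.eye(n) - A) for zk in z]

    for _ in range(n_iter):
        # Apply the rational filter: Y = sum_k w_k (z_k I - A)^{-1} X.
        Y = np.zeros((n, m0))
        for wk, fk in zip(w, factors):
            Y += np.real(wk * lu_solve(fk, X))
        # Rayleigh-Ritz extraction on the filtered subspace.
        Q, _ = qr(Y, mode='economic')
        ritz, S = eigh(Q.T @ A @ Q)
        X = Q @ S
    keep = (ritz >= a) & (ritz <= b)
    return ritz[keep], X[:, keep]

# Illustrative example: a dense symmetric matrix with known spectrum.
rng = np.random.default_rng(3)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(-1.0, 1.0, n)) @ Q.T
vals, vecs = feast_like(A, (0.0, 0.05))
print("filtered Ritz values:", np.sort(vals))
print("exact eigenvalues   :", [v for v in np.linspace(-1.0, 1.0, n) if 0.0 <= v <= 0.05])
```
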
SIAM Journal on Matrix Analysis and Applications | 2011
Michael Eiermann; Oliver G. Ernst; Stefan Güttel
We investigate an acceleration technique for restarted Krylov subspace methods for computing the action of a function of a large sparse matrix on a vector. Its effect is to ultimately deflate a specific invariant subspace of the matrix which most impedes the convergence of the restarted approximation process. An approximation to the subspace to be deflated is successively refined in the course of the underlying restarted Arnoldi process by extracting Ritz vectors and using those closest to the spectral region of interest as exact shifts. The approximation is constructed with the help of a generalization of Krylov decompositions to linearly dependent vectors. A description of the restarted process as a successive interpolation scheme at Ritz values is given in which the exact shifts are replaced with improved approximations of eigenvalues in each restart cycle. Numerical experiments demonstrate the efficacy of the approach.
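
A minimal sketch of the main ingredient described above: Ritz pairs are extracted from an Arnoldi decomposition, and those closest to the spectral region of interest are selected as deflation candidates. The actual method refines these vectors across restart cycles using a generalized Krylov decomposition, which is beyond this sketch; the test matrix and the choice of target region are illustrative.

```python
import numpy as np

def arnoldi(A, b, m):
    """m-step Arnoldi decomposition A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# Illustrative SPD test matrix; for Stieltjes-type functions such as
# f(z) = z^(-1/2) the restarted approximation is typically slowed down by
# the eigenvalues closest to the singularity at the origin, so Ritz pairs
# near the origin are the natural deflation candidates.
rng = np.random.default_rng(2)
n, m, k = 400, 40, 5
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(0.01, 10.0, n)) @ Q.T
b = rng.standard_normal(n)

V, H = arnoldi(A, b, m)
theta, Y = np.linalg.eig(H[:m, :m])          # Ritz pairs of the projected matrix
sel = np.argsort(np.abs(theta))[:k]          # k Ritz values closest to the origin
theta, Y = theta[sel].real, Y[:, sel].real
ritz_vecs = V[:, :m] @ Y                     # Ritz vectors in the full space

# Residual norms indicate how well the targeted invariant subspace is
# captured; a deflated restart would refine these vectors cycle by cycle.
for t, x in zip(theta, ritz_vecs.T):
    print(f"theta = {t:8.4f}   residual = {np.linalg.norm(A @ x - t * x):.2e}")
```
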
SIAM Journal on Matrix Analysis and Applications | 2014
Andreas Frommer; Stefan Güttel; Marcel Schweitzer
When using the Arnoldi method for approximating f(A)b, the action of a matrix function on a vector, the maximum number of iterations that can be performed is often limited by the storage requirements of the full Arnoldi basis. As a remedy, different restarting algorithms have been proposed in the literature, none of which has been universally applicable, efficient, and stable at the same time. We utilize an integral representation for the error of the iterates in the Arnoldi method which then allows us to develop an efficient quadrature-based restarting algorithm suitable for a large class of functions, including the so-called Stieltjes functions and the exponential function. Our method is applicable to functions of Hermitian and non-Hermitian matrices, requires no a priori spectral information, and runs with essentially constant computational work per restart cycle. We comment on the relation of this new restarting approach to other existing algorithms and illustrate its efficiency and numerical stability.
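
To make the restarting theme concrete, here is a sketch for the simplest Stieltjes function f(z) = 1/z, where a restarted Arnoldi (FOM) iteration keeps work and storage per cycle constant because the error itself solves a linear system with the current residual as right-hand side. For general Stieltjes functions and the exponential, the error no longer satisfies such a simple equation, which is where the quadrature-based integral representation of the paper comes in; the matrix, names, and parameters below are illustrative.

```python
import numpy as np

def arnoldi(A, v, m):
    """m-step Arnoldi decomposition of A with starting vector v."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def restarted_arnoldi_inverse(A, b, m=20, cycles=8):
    """Restarted Arnoldi approximation of f(A) b for f(z) = 1/z (restarted FOM).
    Each cycle builds only an m-dimensional basis, so storage and work per
    cycle stay constant; for general Stieltjes functions the error update is
    not a plain residual and needs the quadrature-based representation."""
    x = np.zeros_like(b)
    for cycle in range(cycles):
        r = b - A @ x                        # the error e satisfies A e = r
        beta = np.linalg.norm(r)
        V, H = arnoldi(A, r, m)
        e1 = np.zeros(m)
        e1[0] = beta
        x = x + V[:, :m] @ np.linalg.solve(H[:m, :m], e1)   # x += V_m f(H_m) (beta e_1)
        print(f"cycle {cycle + 1}: residual = {np.linalg.norm(b - A @ x):.2e}")
    return x

# Illustrative SPD test problem.
rng = np.random.default_rng(4)
n = 300
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1.0, 100.0, n)) @ Q.T
b = rng.standard_normal(n)
x = restarted_arnoldi_inverse(A, b)
print("error vs. direct solve:", np.linalg.norm(x - np.linalg.solve(A, b)))
```
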
SIAM Journal on Matrix Analysis and Applications | 2015
Mario Berljafa; Stefan Güttel

SIAM Review | 2016
Vladimir Druskin; Stefan Güttel; Leonid Knizhnerman

SIAM Journal on Matrix Analysis and Applications | 2014
Andreas Frommer; Stefan Güttel; Marcel Schweitzer

Acta Numerica | 2017
Stefan Güttel; Françoise Tisseur