Ilse C. F. Ipsen
North Carolina State University
Publications
Featured research published by Ilse C. F. Ipsen.
SIAM Journal on Scientific Computing | 2001
Ilse C. F. Ipsen
The preconditioners for indefinite matrices of KKT form in [M. F. Murphy, G. H. Golub, and A. J. Wathen, SIAM J. Sci. Comput., 21 (2000), pp. 1969--1972] are extended to general nonsymmetric matrices.
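The result being extended here, due to Murphy, Golub, and Wathen, is that a block-diagonal preconditioner built from the (1,1) block and the Schur complement clusters the spectrum of a saddle-point (KKT) matrix at three values. Below is a minimal NumPy sketch of that clustering in the original symmetric positive definite setting that the 2001 note extends; the block sizes and random test data are arbitrary assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4                       # block sizes; arbitrary choices for this demo

A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)       # make the (1,1) block symmetric positive definite
B = rng.standard_normal((m, n))   # full-rank (with probability 1) constraint block

# Saddle-point (KKT) matrix  K = [[A, B^T], [B, 0]]
K = np.block([[A, B.T],
              [B, np.zeros((m, m))]])

# Block-diagonal preconditioner  P = diag(A, B A^{-1} B^T)
S = B @ np.linalg.solve(A, B.T)
P = np.block([[A, np.zeros((n, m))],
              [np.zeros((m, n)), S]])

# Up to roundoff, the preconditioned matrix P^{-1} K has only three distinct
# eigenvalues, 1 and (1 +/- sqrt(5))/2, so a Krylov solver needs at most three
# iterations in exact arithmetic.
eigs = np.linalg.eigvals(np.linalg.solve(P, K)).real
print(np.unique(np.round(np.sort(eigs), 6)))
print((1 - np.sqrt(5)) / 2, 1.0, (1 + np.sqrt(5)) / 2)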
SIAM Review | 1997
Ilse C. F. Ipsen
The purpose of this paper is two-fold: to analyze the behavior of inverse iteration for computing a single eigenvector of a complex square matrix and to review Jim Wilkinson's contributions to the development of the method. In the process we derive several new results regarding the convergence of inverse iteration in exact arithmetic.
In the case of normal matrices we show that residual norms decrease strictly monotonically. For eighty percent of the starting vectors a single iteration is enough.
In the case of non-normal matrices, we show that the iterates converge asymptotically to an invariant subspace. However, the residual norms may not converge. The growth in residual norms from one iteration to the next can exceed the departure of the matrix from normality. We present an example where the residual growth is exponential in the departure of the matrix from normality. We also explain the often significant regress of the residuals after the first iteration: it occurs when the non-normal part of the matrix is large compared to the eigenvalues of smallest magnitude. In this case computing an eigenvector with inverse iteration is exponentially ill conditioned (in exact arithmetic).
We conclude that the behavior of the residuals in inverse iteration is governed by the departure of the matrix from normality rather than by the conditioning of a Jordan basis or the defectiveness of eigenvalues.
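As a concrete illustration of the method analyzed in this survey, here is a minimal NumPy sketch of inverse iteration with a fixed shift on a symmetric (hence normal) test matrix, tracking the residual norms the paper studies. The function name, shift, and test matrix are illustrative assumptions, not taken from the paper.

import numpy as np

def inverse_iteration(A, shift, num_steps=5, seed=0):
    """Inverse iteration with a fixed shift; returns the final iterate and the
    residual norm ||A v - rho v|| after each step (rho = Rayleigh quotient)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    M = A - shift * np.eye(n)
    residuals = []
    for _ in range(num_steps):
        v = np.linalg.solve(M, v)        # one inverse-iteration step
        v /= np.linalg.norm(v)           # normalize the iterate
        rho = v @ (A @ v)                # Rayleigh quotient of the iterate
        residuals.append(np.linalg.norm(A @ v - rho * v))
    return v, residuals

# Symmetric (hence normal) test matrix: in exact arithmetic the residual norms
# decrease strictly monotonically, as the survey proves for normal matrices.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
A = (A + A.T) / 2
shift = np.linalg.eigvalsh(A)[0] + 1e-3   # shift near the smallest eigenvalue
_, residuals = inverse_iteration(A, shift)
print(residuals)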
American Mathematical Monthly | 1998
Ilse C. F. Ipsen; Carl D. Meyer
We explain why Krylov methods make sense, and why it is natural to represent a solution to a linear system as a member of a Krylov space. In particular we show that the solution to a nonsingular linear system Ax = b lies in a Krylov space whose dimension is the degree of the minimal polynomial of A. Therefore, if the minimal polynomial of A has low degree then the space in which a Krylov method searches for the solution can be small. In this case a Krylov method has the opportunity to converge fast. When the matrix is singular, however, Krylov methods can fail. Even if the linear system does have a solution, it may not lie in a Krylov space. In this case we describe a class of right-hand sides for which a solution lies in a Krylov space. As it happens, there is only a single solution that lies in a Krylov space, and it can be obtained from the Drazin inverse. Our discussion demonstrates that eigenvalues play a central role when it comes to ensuring existence and uniqueness of Krylov solutions; they are not merely an artifact of convergence analyses.
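The central claim, that the solution of a nonsingular system Ax = b lies in a Krylov space whose dimension is the degree of the minimal polynomial of A, can be checked numerically. The sketch below is a hypothetical NumPy example rather than anything from the paper: it builds a diagonalizable matrix with three distinct eigenvalues (minimal polynomial of degree 3) and verifies that the exact solution lies in span{b, Ab, A^2 b}.

import numpy as np

rng = np.random.default_rng(2)
n = 10

# Diagonalizable matrix with only three distinct eigenvalues, so its minimal
# polynomial is (x-1)(x-2)(x-3) and has degree 3.
eigenvalues = np.repeat([1.0, 2.0, 3.0], [4, 3, 3])
X = rng.standard_normal((n, n))
A = X @ np.diag(eigenvalues) @ np.linalg.inv(X)

b = rng.standard_normal(n)
x_direct = np.linalg.solve(A, b)

# Krylov basis {b, Ab, A^2 b}: the solution lies in its span, because the
# dimension needed is at most the degree of the minimal polynomial.
K = np.column_stack([b, A @ b, A @ (A @ b)])
coeffs, *_ = np.linalg.lstsq(K, x_direct, rcond=None)
print(np.linalg.norm(K @ coeffs - x_direct))   # ~ 0 up to roundoff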
SIAM Journal on Numerical Analysis | 1995
Stanley C. Eisenstat; Ilse C. F. Ipsen
A technique is presented for deriving bounds on the relative change in the singular values of a real matrix (or the eigenvalues of a real symmetric matrix) due to a perturbation, as well as bounds on the angles between the unperturbed and perturbed singular vectors (or eigenvectors). The class of perturbations considered consists of all $\delta B$ for which $B + \delta B = D_L B D_R$ for some nonsingular matrices $D_L$ and $D_R$. A small numerical sketch of this perturbation class appears after the remaining entries below.
SIAM Journal on Matrix Analysis and Applications | 1994
Shivkumar Chandrasekaran; Ilse C. F. Ipsen
BIT Numerical Mathematics | 1996
Stephen L. Campbell; Ilse C. F. Ipsen; C. T. Kelley; Carl D. Meyer
Acta Numerica | 1998
Ilse C. F. Ipsen
SIAM Journal on Matrix Analysis and Applications | 2007
Ilse C. F. Ipsen; Teresa M. Selee
SIAM Journal on Scientific and Statistical Computing | 1983
Don Heller; Ilse C. F. Ipsen
SIAM Journal on Matrix Analysis and Applications | 1994
Ilse C. F. Ipsen; Carl D. Meyer
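Returning to the Eisenstat–Ipsen perturbation class described in the 1995 SIAM Journal on Numerical Analysis entry above: the following NumPy sketch is an illustrative example under assumed random data, not material from the paper. It applies a multiplicative perturbation $B + \delta B = D_L B D_R$ with $D_L$ and $D_R$ near the identity and shows that every singular value changes by a small relative amount, with elementary two-sided bounds governed by the norms of $D_L$, $D_R$, and their inverses.

import numpy as np

rng = np.random.default_rng(3)
m, n, eps = 7, 5, 1e-3

B = rng.standard_normal((m, n))

# Multiplicative perturbation  B + delta B = D_L B D_R  with D_L, D_R close to I
D_L = np.eye(m) + eps * rng.standard_normal((m, m))
D_R = np.eye(n) + eps * rng.standard_normal((n, n))
B_pert = D_L @ B @ D_R

s = np.linalg.svd(B, compute_uv=False)
s_pert = np.linalg.svd(B_pert, compute_uv=False)

# Elementary two-sided bound: each perturbed singular value satisfies
#   sigma_i / (||D_L^{-1}|| ||D_R^{-1}||) <= sigma_i' <= ||D_L|| ||D_R|| sigma_i,
# so the relative change is small whenever D_L and D_R are near the identity,
# no matter how small sigma_i itself is.
upper = np.linalg.norm(D_L, 2) * np.linalg.norm(D_R, 2)
lower = 1.0 / (np.linalg.norm(np.linalg.inv(D_L), 2) * np.linalg.norm(np.linalg.inv(D_R), 2))
print(np.abs(s_pert - s) / s)   # relative changes, all O(eps)
print(lower, upper)             # multiplicative bounds, both close to 1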