Åke Björck
Linköping University
Publications
Featured research published by Åke Björck.
Bit Numerical Mathematics | 1967
Åke Björck
Abstract: A general analysis of the condition of the linear least squares problem is given. The influence of rounding errors is studied in detail for a modified version of the Gram-Schmidt orthogonalization used to obtain a factorization A = QR of a given m×n matrix A, where R is upper triangular and Q^T Q = I. Let x be the vector which minimizes ‖b − Ax‖_2 and r = b − Ax. It is shown that if inner products are accumulated in double precision, then the errors in the computed x and r are less than the errors resulting from some simultaneous initial perturbation δA, δb such that ‖δA‖_E/‖A‖_E ≈ ‖δb‖_2/‖b‖_2 ≈ 2·n^{3/2} machine units. No reorthogonalization is needed and the result is independent of the pivoting strategy used.
Bit Numerical Mathematics | 1979
Åke Björck; Tommy Elfving
Linear Algebra and its Applications | 1994
Åke Björck
Linear Algebra and its Applications | 1983
Åke Björck; Sven Hammarling
Bit Numerical Mathematics | 1967
Åke Björck
Bit Numerical Mathematics | 1988
Åke Björck
Iterative methods are developed for computing the Moore-Penrose pseudoinverse solution of a linear system Ax = b, where A is an m×n sparse matrix. The methods do not require the explicit formation of A^T A or AA^T and are therefore advantageous when these matrices are much less sparse than A itself. The methods are based on solving the two related systems (i) x = A^T y, AA^T y = b, and (ii) A^T A x = A^T b. First it is shown how the SOR and SSOR methods for these two systems can be implemented efficiently. Further, the acceleration of the SSOR method by Chebyshev semi-iteration and the conjugate gradient method is discussed. In particular it is shown that the SSOR-CG method for (i) and (ii) can be implemented in such a way that each step requires only two sweeps through successive rows and columns of A, respectively. In the general rank-deficient and inconsistent case it is shown how the pseudoinverse solution can be computed by a two-step procedure. Some possible applications are mentioned and numerical results are given for some problems from picture reconstruction.
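The central implementation point of the abstract above is that A is touched only through products with A and A^T, so A^T A is never formed. As a minimal illustration of that idea (plain conjugate gradients on the normal equations, often called CGLS, rather than the paper's SSOR-preconditioned variant; the function name and iteration limits are my own choices):

```python
import numpy as np

def cgls(A, b, iters=50, tol=1e-12):
    """Conjugate gradients on the normal equations A^T A x = A^T b.

    A appears only in the products A @ p and A.T @ r, so A^T A is
    never formed explicitly -- the key point for sparse A.
    """
    m, n = A.shape
    x = np.zeros(n)
    r = b.copy()            # residual b - A x
    s = A.T @ r             # normal-equations residual A^T (b - A x)
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x = cgls(A, b)
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
```

In exact arithmetic the iteration terminates in at most n steps; for the sparse, rank-deficient problems of the abstract the SSOR preconditioning and the two-sweep implementation are what make the method practical.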
SIAM Journal on Matrix Analysis and Applications | 1992
Åke Björck; Christopher C. Paige
Abstract: The Gram-Schmidt (GS) orthogonalization is one of the fundamental procedures in linear algebra. In matrix terms it is equivalent to the factorization A = Q1R, where Q1 ∈ ℝ^{m×n} has orthonormal columns and R is upper triangular. For the numerical GS factorization of a matrix A two different versions exist, usually called classical and modified Gram-Schmidt (CGS and MGS). Although mathematically equivalent, these have very different numerical properties. This paper surveys the numerical properties of CGS and MGS. A key observation is that MGS is numerically equivalent to Householder QR factorization of the matrix A augmented by an n×n zero matrix on top. This can be used to derive bounds on the loss of orthogonality in MGS, and to develop a backward-stable algorithm based on MGS. The use of reorthogonalization and of iterated CGS and MGS algorithms is discussed. Finally, block versions of GS are described.
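The CGS/MGS difference described above is easy to observe numerically. The following small experiment (my own illustration, not taken from the paper) implements both variants and measures the loss of orthogonality ‖I − Q^T Q‖ on an ill-conditioned Hilbert matrix:

```python
import numpy as np

def gram_schmidt(A, modified=True):
    """QR via Gram-Schmidt; modified=False gives the classical variant."""
    A = A.astype(float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        v = A[:, k].copy()
        for j in range(k):
            # CGS projects against the ORIGINAL column a_k; MGS against
            # the working vector already purged of earlier directions.
            R[j, k] = Q[:, j] @ (v if modified else A[:, k])
            v -= R[j, k] * Q[:, j]
        R[k, k] = np.linalg.norm(v)
        Q[:, k] = v / R[k, k]
    return Q, R

# Ill-conditioned test matrix: the two variants now behave very differently.
n = 8
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
errs = {}
for modified in (False, True):
    Q, R = gram_schmidt(H, modified)
    errs["MGS" if modified else "CGS"] = np.linalg.norm(np.eye(n) - Q.T @ Q)
print(errs)
```

Consistent with the bounds surveyed in the paper, the loss of orthogonality for MGS grows roughly like κ(A)·u, while for CGS it can grow much faster, so `errs["CGS"]` comes out many orders of magnitude larger than `errs["MGS"]`.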
Acta Numerica | 2004
Åke Björck
Abstract: A fast and stable method for computing the square root X of a given matrix A (X^2 = A) is developed. The method is based on the Schur factorization A = QSQ^H and uses a fast recursion to compute the upper triangular square root of S. It is shown that if α = ‖X‖^2/‖A‖ is not large, then the computed square root is the exact square root of a matrix close to A. The method is extended to computing the cube root of A. Matrices exist for which the square root computed by the Schur method is ill-conditioned, but which nonetheless have well-conditioned square roots. An optimization approach is suggested for computing the well-conditioned square roots in these cases.
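The triangular recursion mentioned above solves U^2 = S column by column: U_jj = sqrt(S_jj) and U_ij = (S_ij − Σ_{i&lt;k&lt;j} U_ik U_kj)/(U_ii + U_jj). A sketch of the Schur method along these lines (using SciPy's complex Schur form; function name and test matrix are my own):

```python
import numpy as np
from scipy.linalg import schur

def sqrtm_schur(A):
    """Matrix square root via the complex Schur form A = Q S Q^H.

    Builds the upper triangular square root U of S with the recursion
    U_jj = sqrt(S_jj),  U_ij = (S_ij - sum_{i<k<j} U_ik U_kj) / (U_ii + U_jj),
    then maps back: X = Q U Q^H.
    """
    S, Q = schur(A.astype(complex), output='complex')
    n = S.shape[0]
    U = np.zeros_like(S)
    for j in range(n):
        U[j, j] = np.sqrt(S[j, j])
        for i in range(j - 1, -1, -1):
            s = S[i, j] - U[i, i + 1:j] @ U[i + 1:j, j]
            U[i, j] = s / (U[i, i] + U[j, j])
    return Q @ U @ Q.conj().T

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)   # symmetric positive definite: real square root
X = sqrtm_schur(A)
```

The recursion can break down when U_ii + U_jj = 0 (e.g. for eigenvalues on the negative real axis), which is one source of the ill-conditioned cases the abstract mentions.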
Bit Numerical Mathematics | 1994
Åke Björck; E. Grimme; Paul Van Dooren
An iterative procedure is developed for reducing the rounding errors in the computed least squares solution to an overdetermined system of equations Ax = b, where A is an m×n matrix (m ≥ n) of rank n. The method relies on computing accurate residuals to a certain augmented system of linear equations, by using double precision accumulation of inner products. To determine the corrections, two methods are given, based on a matrix decomposition of A obtained either by orthogonal Householder transformations or by a modified Gram-Schmidt orthogonalization. It is shown that the rate of convergence in the iteration is independent of the right-hand side b, and depends linearly on the condition number κ(A) of the rectangular matrix A. The limiting accuracy achieved will be approximately the same as that obtained by a double precision factorization. In the second part of this paper the case when x is subject to linear constraints and/or A has rank less than n is covered. ALGOL programs embodying the derived algorithms are also given.
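The augmented system referred to above couples the residual r and the solution x: r + Ax = b, A^T r = 0. One refinement step computes the residuals of that system accurately and solves for corrections using a factorization of A. A toy sketch of the scheme (my own reconstruction, using NumPy's QR in place of the paper's ALGOL routines, and a single-precision start to stand in for the low-accuracy initial solve):

```python
import numpy as np

def refine_ls(A, b, x, r, R):
    """One iterative-refinement step for min ||b - A x||_2 via the
    augmented system [I A; A^T 0][r; x] = [b; 0], given A = Q R.

    The corrections satisfy dr + A dx = f, A^T dr = g, which reduces
    to R^T R dx = A^T f - g and dr = f - A dx.
    """
    f = b - r - A @ x          # residual of the first block row
    g = -(A.T @ r)             # residual of the second block row
    rhs = A.T @ f - g
    dx = np.linalg.solve(R, np.linalg.solve(R.T, rhs))
    dr = f - A @ dx
    return x + dx, r + dr

rng = np.random.default_rng(2)
A = rng.standard_normal((15, 4))
b = rng.standard_normal(15)
Q, R = np.linalg.qr(A)
# deliberately crude start: normal equations solved in single precision
x = np.linalg.solve((R.T @ R).astype(np.float32),
                    (A.T @ b).astype(np.float32)).astype(float)
r = b - A @ x
for _ in range(3):
    x, r = refine_ls(A, b, x, r, R)
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The paper's point is that computing f and g in extended precision lets each step shrink the error by a factor proportional to κ(A)·u, independently of b.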
Linear Algebra and its Applications | 1987
Åke Björck
An iterative method based on Lanczos bidiagonalization is developed for computing regularized solutions of large and sparse linear systems, which arise from discretizations of ill-posed problems in partial differential or integral equations. Determination of the regularization parameter and termination criteria are discussed. Comments are given on the computational implementation of the algorithm.
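The core of such methods is the Golub-Kahan (Lanczos) bidiagonalization, which after k steps gives A V_k = U_{k+1} B_k with B_k lower bidiagonal; regularized solutions are then computed in the small projected problem. A sketch of the bidiagonalization alone (without the reorthogonalization, parameter choice, and stopping rules the abstract discusses; names are my own):

```python
import numpy as np

def golub_kahan_bidiag(A, b, k):
    """k steps of Golub-Kahan bidiagonalization started from b.

    Returns U (m x (k+1)), B ((k+1) x k, lower bidiagonal), V (n x k)
    with A @ V = U @ B.  No reorthogonalization is performed.
    """
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k + 1)
    beta[0] = np.linalg.norm(b)
    U[:, 0] = b / beta[0]
    for j in range(k):
        w = A.T @ U[:, j] - (beta[j] * V[:, j - 1] if j > 0 else 0.0)
        alpha[j] = np.linalg.norm(w)
        V[:, j] = w / alpha[j]
        u = A @ V[:, j] - alpha[j] * U[:, j]
        beta[j + 1] = np.linalg.norm(u)
        U[:, j + 1] = u / beta[j + 1]
    B = np.zeros((k + 1, k))
    B[np.arange(k), np.arange(k)] = alpha
    B[np.arange(1, k + 1), np.arange(k)] = beta[1:]
    return U, B, V

rng = np.random.default_rng(3)
A = rng.standard_normal((12, 8))
b = rng.standard_normal(12)
k = 4
U, B, V = golub_kahan_bidiag(A, b, k)
```

For ill-posed problems, truncating at small k acts as a regularizer; choosing k (or a Tikhonov parameter in the projected problem) is the parameter-determination question the abstract addresses.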