Publication


Featured research published by Beresford N. Parlett.


SIAM Journal on Numerical Analysis | 1971

Direct Methods for Solving Symmetric Indefinite Systems of Linear Equations

James R. Bunch; Beresford N. Parlett

Methods for solving symmetric indefinite systems are surveyed, including a new one that is stable and almost as fast as the Cholesky method.
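
A minimal sketch of the idea in SciPy, whose `scipy.linalg.ldl` routine implements Bunch-Kaufman pivoting, a later refinement of the diagonal-pivoting approach surveyed here; the test matrix and right-hand side below are invented for illustration.

```python
import numpy as np
from scipy.linalg import ldl

# A symmetric but indefinite matrix and right-hand side (made up for illustration).
A = np.array([[ 2., -1.,  3.],
              [-1.,  0.,  4.],
              [ 3.,  4., -5.]])
b = np.array([1., 2., 3.])

# Diagonal-pivoting factorization A = L D L^T with D block diagonal
# (1x1 and 2x2 pivots), which stays stable without requiring A to be definite.
lu, d, perm = ldl(A, lower=True)
print(np.max(np.abs(lu @ d @ lu.T - A)))       # factorization reproduces A

# Solve A x = b through the factors (plain dense solves keep the demo short).
x = np.linalg.solve(lu.T, np.linalg.solve(d, np.linalg.solve(lu, b)))
print(np.max(np.abs(A @ x - b)))               # residual at rounding level
```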


Numerische Mathematik | 2005

On generalized successive overrelaxation methods for augmented linear systems

Zhong-Zhi Bai; Beresford N. Parlett; Zeng-Qi Wang

For the augmented system of linear equations, Golub, Wu and Yuan recently studied an SOR-like method (BIT 41 (2001) 71–85). By further accelerating it with another parameter, in this paper we present a generalized SOR (GSOR) method for the augmented linear system. We prove its convergence under suitable restrictions on the iteration parameters, and determine its optimal iteration parameters and the corresponding optimal convergence factor. Theoretical analyses show that the GSOR method has a faster asymptotic convergence rate than the SOR-like method, and numerical results show that it is more effective than the SOR-like method when applied to the augmented linear system. The GSOR method is further generalized to a framework of relaxed splitting iterative methods for solving both symmetric and nonsymmetric augmented linear systems by using the techniques of vector extrapolation, matrix relaxation and inexact iteration. In addition, we present a complete convergence theory for the SOR-like method.
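
As a rough illustration of the setting only (not the GSOR scheme itself), the sketch below builds a small augmented system [A B; Bᵀ 0][x; y] = [b; q] and runs a simple two-parameter relaxed Uzawa-type iteration; the matrices, the parameters omega and tau, and the iteration count are all invented, and the actual GSOR updates and optimal parameters are the subject of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD (1,1) block
B = rng.standard_normal((n, m))                                 # full column rank
b, q = rng.standard_normal(n), rng.standard_normal(m)

# Augmented (saddle-point) system:  [A  B; B^T  0] [x; y] = [b; q]
K = np.block([[A, B], [B.T, np.zeros((m, m))]])
xy_direct = np.linalg.solve(K, np.concatenate([b, q]))

# Two-parameter relaxed Uzawa-type iteration (a stand-in for the GSOR updates):
#   x_{k+1} = (1 - omega) x_k + omega A^{-1} (b - B y_k)
#   y_{k+1} = y_k + tau (B^T x_{k+1} - q)
omega, tau = 1.0, 0.5      # illustrative values, not the optimal parameters of the paper
x, y = np.zeros(n), np.zeros(m)
for _ in range(2000):
    x = (1 - omega) * x + omega * np.linalg.solve(A, b - B @ y)
    y = y + tau * (B.T @ x - q)

print(np.linalg.norm(np.concatenate([x, y]) - xy_direct))       # error vs. direct solve
```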


Numerical Linear Algebra With Applications | 1995

Approximate solutions and eigenvalue bounds from Krylov subspaces

Christopher C. Paige; Beresford N. Parlett; Henk A. van der Vorst

Approximations to the solution of a large sparse symmetric system of equations are considered. The conjugate gradient and minimum residual approximations are studied without reference to their computation. Several different bases for the associated Krylov subspace are used, including the usual Lanczos basis. The zeros of the iteration polynomial for the minimum residual approximation (harmonic Ritz values) are characterized in several ways and, in addition, attractive convergence properties are established. The connection of these harmonic Ritz values to Lehmann's optimal intervals for eigenvalues of the original matrix appears to be new.
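
A small numerical sketch of the harmonic Rayleigh–Ritz idea: for symmetric A and an orthonormal basis V of a Krylov subspace, the harmonic Ritz values are the eigenvalues of the pencil (VᵀA²V, VᵀAV). The matrix, starting vector, and subspace dimension are arbitrary; the paper's characterizations and the connection to Lehmann's intervals go well beyond this check.

```python
import numpy as np
from scipy.linalg import eigvals, qr

rng = np.random.default_rng(1)
n, k = 30, 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2        # symmetric test matrix
b = rng.standard_normal(n)

# Orthonormal basis of the Krylov subspace span{b, Ab, ..., A^(k-1) b}.
K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(k)])
V, _ = qr(K, mode='economic')

# Harmonic Ritz values: eigenvalues of (V^T A^2 V) z = theta (V^T A V) z.
# They are the zeros of the minimum residual (MINRES) iteration polynomial.
harmonic = np.sort(np.real(eigvals(V.T @ A @ A @ V, V.T @ A @ V)))

# Ordinary Ritz values for comparison.
ritz = np.sort(np.linalg.eigvalsh(V.T @ A @ V))

print("harmonic Ritz values:", harmonic)
print("Ritz values         :", ritz)
print("extreme eigenvalues :", np.linalg.eigvalsh(A)[[0, -1]])
```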


Numerische Mathematik | 1994

Accurate Singular Values and Differential QD Algorithms

K. V. Fernando; Beresford N. Parlett

We have discovered a new implementation of the qd algorithm that has a far wider domain of stability than Rutishauser's version. Our algorithm was developed from an examination of the Cholesky LR transformation and can be adapted to parallel computation, in stark contrast to traditional qd. Our algorithm also yields useful a posteriori upper and lower bounds on the smallest singular value of a bidiagonal matrix. The zero-shift bidiagonal QR of Demmel and Kahan computes the smallest singular values to maximal relative accuracy and the others to maximal absolute accuracy with little or no degradation in efficiency when compared with the LINPACK code. Our algorithm obtains maximal relative accuracy for all the singular values and runs at least four times faster than the LINPACK code.
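
A toy version of the zero-shift differential qd (dqd) transform, written straight from the usual recurrence: q and e hold the squares of the diagonal and superdiagonal of an upper bidiagonal matrix, and one zero-shift step produces a new bidiagonal with the same singular values. The test matrix is invented, and the shifted, fully guarded dqds algorithm of the paper involves much more.

```python
import numpy as np

def dqd_step(q, e):
    """One zero-shift differential qd step on the arrays q = a_i^2, e = b_i^2."""
    n = len(q)
    qh, eh = np.empty(n), np.empty(n - 1)
    d = q[0]
    for i in range(n - 1):
        qh[i] = d + e[i]
        eh[i] = e[i] * (q[i + 1] / qh[i])
        d = d * (q[i + 1] / qh[i])
    qh[-1] = d
    return qh, eh

# Illustrative upper bidiagonal matrix B with diagonal a and superdiagonal b.
a = np.array([4.0, 3.0, 2.0, 1.0])
b = np.array([0.5, 0.7, 0.3])
B = np.diag(a) + np.diag(b, 1)

qh, eh = dqd_step(a**2, b**2)
B_new = np.diag(np.sqrt(qh)) + np.diag(np.sqrt(eh), 1)

# The transform is a Cholesky-LR step in disguise: singular values are preserved.
print(np.linalg.svd(B, compute_uv=False))
print(np.linalg.svd(B_new, compute_uv=False))
```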


Numerische Mathematik | 1969

Balancing a matrix for calculation of eigenvalues and eigenvectors

Beresford N. Parlett; C. Reinsch

This algorithm is based on the work of Osborne [1]. He pointed out that existing eigenvalue programs usually produce results with errors at least of order ε‖A‖_E, where ε is the machine precision and ‖A‖_E is the Euclidean (Frobenius) norm of the given matrix A. Hence he recommends that one precede the calling of such a routine by certain diagonal similarity transformations of A designed to reduce its norm.
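
The Parlett–Reinsch balancing procedure survives essentially unchanged in LAPACK (xGEBAL) and is exposed in SciPy as `scipy.linalg.matrix_balance`; a small demonstration on a deliberately badly scaled matrix (invented for the example):

```python
import numpy as np
from scipy.linalg import matrix_balance

# A matrix whose rows and columns are badly out of scale (illustrative).
A = np.array([[1e-4, 1e+4, 1e+2],
              [1e-4, 1e+0, 1e+4],
              [1e-6, 1e-2, 1e+0]])

# matrix_balance wraps the LAPACK routine descended from this algorithm:
# B is diagonally similar to A (T holds powers of the radix), so the spectrum
# is unchanged while the norm, and with it the error bound, shrinks.
B, T = matrix_balance(A)

print(np.linalg.norm(A), np.linalg.norm(B))
print(np.sort_complex(np.linalg.eigvals(A)))
print(np.sort_complex(np.linalg.eigvals(B)))
```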


Mathematics of Computation | 1974

The Rayleigh quotient iteration and some generalizations for nonnormal matrices

Beresford N. Parlett

The Rayleigh Quotient Iteration (RQI) was developed for real symmetric matrices. Its rapid local convergence is due to the stationarity of the Rayleigh quotient at an eigenvector. Its excellent global properties are due to the monotonic decrease in the norms of the residuals. These facts are established for normal matrices. Both properties fail for nonnormal matrices, and no generalization of the iteration has recaptured both of them. We examine methods which employ either one or the other of them.
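
A bare-bones Rayleigh quotient iteration for a real symmetric matrix, showing the very fast local convergence to an eigenpair; the matrix and starting vector are arbitrary, and the nonnormal generalizations discussed in the paper are not attempted here.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric test matrix

x = rng.standard_normal(n); x /= np.linalg.norm(x)
for k in range(8):
    rho = x @ A @ x                                   # Rayleigh quotient (stationary at eigenvectors)
    try:
        y = np.linalg.solve(A - rho * np.eye(n), x)   # one shifted inverse-iteration step
    except np.linalg.LinAlgError:
        break                                         # shift hit an eigenvalue exactly: converged
    x = y / np.linalg.norm(y)
    print(k, np.linalg.norm(A @ x - (x @ A @ x) * x)) # residual norm shrinks very rapidly
```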


SIAM Journal on Matrix Analysis and Applications | 2003

Orthogonal Eigenvectors and Relative Gaps

Inderjit S. Dhillon; Beresford N. Parlett

This paper presents and analyzes a new algorithm for computing eigenvectors of symmetric tridiagonal matrices factored as LDL^T, with D diagonal and L unit bidiagonal. If an eigenpair is well behaved in a certain sense with respect to the factorization, the algorithm is shown to compute an approximate eigenvector which is accurate to working precision. As a consequence, all the eigenvectors computed by the algorithm come out numerically orthogonal to each other without making use of any reorthogonalization process. The key is first running a qd-type algorithm on the factored matrix LDL^T and then applying a fine-tuned version of inverse iteration especially adapted to this situation. Since the computational cost is O(n) per eigenvector for an n × n matrix, the proposed algorithm is the central step of a more ambitious algorithm which, at best (i.e., when all eigenvectors are well conditioned), would compute all eigenvectors of an n × n symmetric tridiagonal matrix at O(n²) cost, a great improvement over existing algorithms.
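
These ideas, developed further in Dhillon's thesis, led to the MRRR algorithm behind LAPACK's xSTEMR driver, reachable in SciPy through `scipy.linalg.eigh_tridiagonal`; the random test matrix and the explicit driver choice below are just for illustration:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(3)
n = 500
d = rng.standard_normal(n)        # diagonal of the symmetric tridiagonal T
e = rng.standard_normal(n - 1)    # off-diagonal

# 'stemr' selects the LAPACK MRRR driver, which works from shifted LDL^T
# factorizations with O(n) cost per eigenvector, in the spirit of this paper.
w, V = eigh_tridiagonal(d, e, lapack_driver='stemr')

print(np.max(np.abs(V.T @ V - np.eye(n))))   # numerically orthogonal eigenvectors
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
print(np.max(np.abs(T @ V - V * w)))          # small residuals
```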


SIAM Journal on Matrix Analysis and Applications | 1992

Reduction to tridiagonal form and minimal realizations

Beresford N. Parlett

This paper presents the theoretical background relevant to any method for producing a tridiagonal matrix similar to an arbitrary square matrix. Gragg's work on factoring Hankel matrices and the Kalman–Gilbert structure theorem from systems theory both find a place in the development. Tridiagonalization is equivalent to the application of the generalized Gram–Schmidt process to a pair of Krylov sequences. In Euclidean space proper normalization allows one to monitor a tight lower bound on the condition number of the transformation. The various possibilities for breakdown find a natural classification by the ranks of certain matrices. The theory is illustrated by some small examples and some suggestions for restarting are evaluated.
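
The equivalence with generalized Gram–Schmidt on a pair of Krylov sequences can be checked on a tiny example: biorthogonalize {b, Ab, ...} against {c, Aᵀc, ...} and the oblique projection WᵀAV comes out tridiagonal, barring the breakdowns the paper classifies. Matrix, starting vectors, and tolerance below are all invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
A = rng.standard_normal((n, n))
b, c = rng.standard_normal(n), rng.standard_normal(n)

# The two Krylov sequences {b, Ab, ...} and {c, A^T c, ...}.
K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(n)])
L = np.column_stack([np.linalg.matrix_power(A.T, j) @ c for j in range(n)])

# Generalized (two-sided) Gram-Schmidt: enforce W^T V = I column by column.
V, W = np.zeros((n, n)), np.zeros((n, n))
for j in range(n):
    v, w = K[:, j].copy(), L[:, j].copy()
    for i in range(j):
        v -= V[:, i] * (W[:, i] @ v)        # oblique projections against earlier columns
        w -= W[:, i] * (V[:, i] @ w)
    s = w @ v
    if abs(s) < 1e-12:
        raise RuntimeError("breakdown (one of the cases classified in the paper)")
    V[:, j], W[:, j] = v, w / s             # scale so that w_j^T v_j = 1

T = W.T @ A @ V                             # tridiagonal (up to roundoff) and similar to A
print(np.max(np.abs(np.triu(T, 2))), np.max(np.abs(np.tril(T, -2))))
print(np.sort_complex(np.linalg.eigvals(T)))
print(np.sort_complex(np.linalg.eigvals(A)))
```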


Linear Algebra and its Applications | 1980

A new look at the Lanczos algorithm for solving symmetric systems of linear equations

Beresford N. Parlett

Simple versions of the conjugate gradient algorithm and the Lanczos method are discussed, and some merits of the latter are described. A variant of Lanczos is proposed which maintains robust linear independence of the Lanczos vectors by keeping them in secondary storage and occasionally making use of them. The main applications are to problems in which (1) the cost of the matrix-vector product dominates other costs, (2) there is a sequence of right-hand sides to be processed, and (3) the eigenvalue distribution of A is not too favorable.
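
A compact sketch of the Lanczos route to A x = b for symmetric A: generate the Lanczos vectors and the tridiagonal T_k, keep the vectors around (in memory here, standing in for the secondary storage mentioned above), and take x_k = V_k T_k⁻¹(‖b‖ e_1). The SPD test matrix, sizes, and iteration count are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 200, 40
A = rng.standard_normal((n, n)); A = A @ A.T / n + np.eye(n)   # symmetric positive definite
b = rng.standard_normal(n)

# Lanczos: orthonormal V and tridiagonal T with A V ~ V T; all vectors are kept.
V = np.zeros((n, k)); alpha = np.zeros(k); beta = np.zeros(k - 1)
V[:, 0] = b / np.linalg.norm(b)
for j in range(k):
    w = A @ V[:, j]
    alpha[j] = V[:, j] @ w
    w -= alpha[j] * V[:, j]
    if j > 0:
        w -= beta[j - 1] * V[:, j - 1]
    w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # reorthogonalize against the stored vectors
    if j + 1 < k:
        beta[j] = np.linalg.norm(w)
        V[:, j + 1] = w / beta[j]

T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
e1 = np.zeros(k); e1[0] = np.linalg.norm(b)
x_k = V @ np.linalg.solve(T, e1)               # Galerkin (CG-equivalent) approximation

print(np.linalg.norm(A @ x_k - b) / np.linalg.norm(b))   # small relative residual
```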


Linear Algebra and its Applications | 1976

A recurrence among the elements of functions of triangular matrices

Beresford N. Parlett

A simple relation exists among the elements of φ(T) when φ is an analytic function and T is triangular. This permits the rapid build-up of φ(T) from its diagonal. Moreover, exp(B⁻¹A) can be formed without inverting B.
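
The recurrence itself fits in a few lines: the diagonal of F = φ(T) is φ applied to diag(T), and successive superdiagonals follow from the commutation relation F T = T F, provided the diagonal entries of T are distinct. A minimal sketch (test matrix invented), checked against scipy.linalg.expm:

```python
import numpy as np
from scipy.linalg import expm

def parlett_function(T, phi):
    """phi(T) for upper triangular T with distinct diagonal entries (Parlett's recurrence)."""
    n = T.shape[0]
    F = np.zeros_like(T)
    F[np.diag_indices(n)] = phi(np.diag(T))          # start from the diagonal
    for p in range(1, n):                            # fill one superdiagonal at a time
        for i in range(n - p):
            j = i + p
            s = T[i, j] * (F[i, i] - F[j, j])
            s += F[i, i + 1:j] @ T[i + 1:j, j] - T[i, i + 1:j] @ F[i + 1:j, j]
            F[i, j] = s / (T[i, i] - T[j, j])
    return F

# Upper triangular test matrix with well separated diagonal entries.
T = np.array([[1.0, 0.3, 0.2, 0.1],
              [0.0, 2.0, 0.4, 0.5],
              [0.0, 0.0, 3.5, 0.6],
              [0.0, 0.0, 0.0, 5.0]])

F = parlett_function(T, np.exp)
print(np.max(np.abs(F - expm(T))))    # agrees with the Padé-based expm to rounding level
```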

Collaboration


Dive into Beresford N. Parlett's collaborations.

Top Co-Authors

Inderjit S. Dhillon

University of Texas at Austin


Osni Marques

Lawrence Berkeley National Laboratory


William Kahan

University of California


James Demmel

University of California


Ming Gu

University of California


James R. Bunch

University of California


K. V. Fernando

University of California
