

Publication


Featured research published by Tommy Elfving.


Numerical Algorithms | 1994

A multiprojection algorithm using Bregman projections in a product space

Yair Censor; Tommy Elfving

Generalized distances give rise to generalized projections into convex sets. An important question is whether or not one can use within the same projection algorithm different types of such generalized projections. This question has practical consequences in the area of signal detection and image recovery in situations that can be formulated mathematically as a convex feasibility problem. Using an extension of Pierra's product space formalism, we show here that a multiprojection algorithm converges. Our algorithm is fully simultaneous, i.e., it uses in each iterative step all sets of the convex feasibility problem. Different multiprojection algorithms can be derived from our algorithmic scheme by a judicious choice of the Bregman functions which govern the process. As a by-product of our investigation we also obtain block-iterative schemes for certain kinds of linearly constrained optimization problems.
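In the special case where every Bregman distance is the squared Euclidean norm, the fully simultaneous scheme reduces to averaging orthogonal projections onto the constraint sets. A minimal sketch of that special case on a toy half-space feasibility problem (the sets below are illustrative, not from the paper):

```python
import numpy as np

def project_halfspace(x, a, b):
    """Orthogonal projection of x onto the half-space {y : <a, y> <= b}."""
    viol = a @ x - b
    if viol <= 0:
        return x                      # already feasible for this set
    return x - (viol / (a @ a)) * a

def simultaneous_projection(x0, constraints, iters=200):
    """Fully simultaneous scheme: at each step, project the current
    iterate onto every set and average the results."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = np.mean([project_halfspace(x, a, b) for a, b in constraints],
                    axis=0)
    return x

# Feasibility problem: the unit box |x1| <= 1, |x2| <= 1 as four half-spaces
cons = [(np.array([1.0, 0.0]), 1.0), (np.array([-1.0, 0.0]), 1.0),
        (np.array([0.0, 1.0]), 1.0), (np.array([0.0, -1.0]), 1.0)]
x = simultaneous_projection(np.array([5.0, -3.0]), cons)
```

Starting outside the box, the averaged iterate converges to the nearest boundary point, here (1, -1).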


Inverse Problems | 2005

The multiple-sets split feasibility problem and its applications for inverse problems

Yair Censor; Tommy Elfving; Nirit Kopf; Thomas Bortfeld

The multiple-sets split feasibility problem requires finding a point closest to a family of closed convex sets in one space such that its image under a linear transformation will be closest to another family of closed convex sets in the image space. It can be a model for many inverse problems where constraints are imposed on the solutions in the domain of a linear operator as well as in the operator's range. It generalizes the convex feasibility problem as well as the two-sets split feasibility problem. We propose a projection algorithm that minimizes a proximity function that measures the distance of a point from all sets. The formulation, as well as the algorithm, generalize earlier work on the split feasibility problem. We also offer a generalization to proximity functions with Bregman distances. Application of the method to the inverse problem of intensity-modulated radiation therapy treatment planning is studied in a separate companion paper and is here only described briefly.
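For the two-sets case with orthogonal projections, the earlier work this paper generalizes is a CQ-type projected-gradient iteration. A sketch with illustrative box constraints (toy data; the step size is taken in (0, 2/||A||^2)):

```python
import numpy as np

def proj_box(x, lo, hi):
    """Orthogonal projection onto a box [lo, hi] (componentwise)."""
    return np.clip(x, lo, hi)

def cq_algorithm(A, lo_C, hi_C, lo_Q, hi_Q, iters=500):
    """CQ-type iteration for the two-sets split feasibility problem:
    find x in C with A x in Q, where C and Q are boxes."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2    # step size in (0, 2/||A||^2)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        Ax = A @ x
        grad = A.T @ (Ax - proj_box(Ax, lo_Q, hi_Q))
        x = proj_box(x - gamma * grad, lo_C, hi_C)
    return x

# C = [0, 2]^2 in the domain; Q: 1 <= x1 + x2 <= 3, 0 <= x1 - x2 <= 0.5
A = np.array([[1.0, 1.0], [1.0, -1.0]])
x = cq_algorithm(A, lo_C=0.0, hi_C=2.0,
                 lo_Q=np.array([1.0, 0.0]), hi_Q=np.array([3.0, 0.5]))
```

The limit lies in C while its image A x lies in Q, which is exactly the split feasibility requirement.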


BIT Numerical Mathematics | 1979

Accelerated projection methods for computing pseudoinverse solutions of systems of linear equations

Åke Björck; Tommy Elfving

Iterative methods are developed for computing the Moore-Penrose pseudoinverse solution of a linear system Ax = b, where A is an m × n sparse matrix. The methods do not require the explicit formation of A^TA or AA^T and therefore are advantageous to use when these matrices are much less sparse than A itself. The methods are based on solving the two related systems (i) x = A^Ty, AA^Ty = b, and (ii) A^TAx = A^Tb. First it is shown how the SOR- and SSOR-methods for these two systems can be implemented efficiently. Further, the acceleration of the SSOR-method by Chebyshev semi-iteration and the conjugate gradient method is discussed. In particular it is shown that the SSOR-cg method for (i) and (ii) can be implemented in such a way that each step requires only two sweeps through successive rows and columns of A, respectively. In the general rank-deficient and inconsistent case it is shown how the pseudoinverse solution can be computed by a two-step procedure. Some possible applications are mentioned and numerical results are given for some problems from picture reconstruction.
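The paper's SSOR-cg variants exploit the sparsity structure of A; as a simpler member of the same family, conjugate gradient acceleration applied to system (ii) can be sketched as CGLS, which likewise touches A only through matrix-vector products (toy data, not the paper's implementation):

```python
import numpy as np

def cgls(A, b, iters=50, rtol=1e-14):
    """Conjugate gradients on the normal equations A^T A x = A^T b,
    using only products with A and A^T (A^T A is never formed)."""
    x = np.zeros(A.shape[1])
    r = b.astype(float).copy()        # residual b - A x
    s = A.T @ r                       # normal-equations residual A^T r
    p = s.copy()
    gamma = s @ s
    gamma0 = gamma
    for _ in range(iters):
        if gamma <= rtol * gamma0:    # stop once A^T r is negligible
            break
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma, gamma_old = s @ s, gamma
        p = s + (gamma / gamma_old) * p
    return x

# Overdetermined 3x2 system; here it happens to be consistent
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
x = cgls(A, b)
```

Because only products with A and A^T appear, the method keeps its advantage whenever A^TA or AA^T would be much denser than A.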


SIAM Journal on Matrix Analysis and Applications | 2002

Block-Iterative Algorithms with Diagonally Scaled Oblique Projections for the Linear Feasibility Problem

Yair Censor; Tommy Elfving

We formulate a block-iterative algorithmic scheme for the solution of systems of linear inequalities and/or equations and analyze its convergence. This study provides as special cases proofs of convergence of (i) the recently proposed component averaging (CAV) method of Censor, Gordon, and Gordon [Parallel Comput., 27 (2001), pp. 777--808], (ii) the recently proposed block-iterative CAV (BICAV) method of the same authors [IEEE Trans. Medical Imaging, 20 (2001), pp. 1050--1060], and (iii) the simultaneous algebraic reconstruction technique (SART) of Andersen and Kak [Ultrasonic Imaging, 6 (1984), pp. 81--94] and generalizes them to linear inequalities. The first two algorithms are projection algorithms which use certain generalized oblique projections and diagonal weighting matrices which reflect the sparsity of the underlying matrix of the linear system. The previously reported experimental acceleration of the initial behavior of CAV and BICAV is thus complemented here by a mathematical study of the convergence of the algorithms.
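A sketch of the fully simultaneous CAV update covered by this convergence theory, for a consistent system of equations (toy data; the relaxation parameter lam must lie in (0, 2)):

```python
import numpy as np

def cav(A, b, iters=200, lam=1.0):
    """Component averaging (CAV): a simultaneous projection step whose
    diagonal weights reflect the sparsity of the columns of A."""
    s = np.count_nonzero(A, axis=0).astype(float)  # nonzeros per column
    denom = (A * A) @ s                            # sum_l s_l a_il^2, per row
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # generalized oblique projections, weighted and summed
        x = x + lam * (A.T @ ((b - A @ x) / denom))
    return x

# Consistent system with solution (1, 2)
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
x = cav(A, b)
```

For fully dense columns the sparsity counts equal m and the step reduces to a Cimmino-type average; for sparse columns the counts are smaller, which is the source of the reported initial acceleration.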


Numerische Mathematik | 1980

Block-iterative methods for consistent and inconsistent linear equations

Tommy Elfving

We consider in this paper the problem of computing a generalized solution of a given linear system of equations. The matrix is partitioned by blocks of rows or blocks of columns. The generalized inverses of the blocks are then used as data in Jacobi- and SOR-type iterative schemes. It is shown that the methods based on partitioning by rows converge towards the minimum-norm solution of a consistent linear system. The column methods converge towards a least squares solution of a given system. For the case with two blocks, explicit expressions for the optimal values of the iteration parameters are obtained. Finally an application is given to the linear system that arises from reconstruction of a two-dimensional object by its one-dimensional projections.
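The row-partitioned SOR-type scheme can be sketched with block pseudoinverses: each step projects the iterate onto the solution set of one row block (a toy consistent system; starting from zero, the limit is the minimum-norm solution):

```python
import numpy as np

def block_kaczmarz(blocks, x0, sweeps=100):
    """SOR-type (sequential) row-block iteration: each step applies the
    pseudoinverse of one row block, i.e. projects orthogonally onto the
    affine set {x : A_t x = b_t}."""
    x = np.array(x0, dtype=float)
    for _ in range(sweeps):
        for A_t, b_t in blocks:
            x = x + np.linalg.pinv(A_t) @ (b_t - A_t @ x)
    return x

# Consistent 3x3 system split into two row blocks; unique solution (0, 1, 2)
A1, b1 = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]), np.array([2.0, 3.0])
A2, b2 = np.array([[1.0, 1.0, 0.0]]), np.array([1.0])
x = block_kaczmarz([(A1, b1), (A2, b2)], np.zeros(3))
```

With a single block this is a direct pseudoinverse solve; with one row per block it reduces to the classical Kaczmarz sweep.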


SIAM Journal on Scientific Computing | 2007

On Diagonally Relaxed Orthogonal Projection Methods

Yair Censor; Tommy Elfving; Gabor T. Herman; Touraj Nikazad

We propose and study a block-iterative projection method for solving linear equations and/or inequalities. The method allows diagonal componentwise relaxation in conjunction with orthogonal projections onto the individual hyperplanes of the system, and is thus called diagonally relaxed orthogonal projections (DROP). Diagonal relaxation has proven useful in accelerating the initial convergence of simultaneous and block-iterative projection algorithms, but until now it was available only in conjunction with generalized oblique projections in which there is a special relation between the weighting and the oblique projections. DROP has been used by practitioners, and in this paper a contribution to its convergence theory is provided. The mathematical analysis is complemented by some experiments in image reconstruction from projections which illustrate the performance of DROP.
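A sketch of the simultaneous form of the update as described: orthogonal projection terms onto the individual hyperplanes, relaxed componentwise by diagonal sparsity weights (toy data; the choice lam = 1 is a hypothetical value assumed to lie in the convergence range for this system):

```python
import numpy as np

def drop(A, b, iters=200, lam=1.0):
    """Diagonally relaxed orthogonal projections (DROP), simultaneous form:
    sum the orthogonal projection corrections onto the hyperplanes
    <a_i, x> = b_i, then relax component j by 1/s_j, where s_j counts
    the nonzeros in column j of A."""
    s = np.count_nonzero(A, axis=0).astype(float)  # nonzeros per column
    row_norms = np.sum(A * A, axis=1)              # ||a_i||^2 per row
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + (lam / s) * (A.T @ ((b - A @ x) / row_norms))
    return x

# Consistent system with solution (1, 2)
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
x = drop(A, b)
```

Unlike the oblique-projection methods, the projection terms here are plain orthogonal projections; only the componentwise relaxation is diagonal, which is the structural point of DROP.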


Studies in Computational Mathematics | 2001

Averaging Strings of Sequential Iterations for Convex Feasibility Problems

Yair Censor; Tommy Elfving; Gabor T. Herman

An algorithmic scheme for the solution of convex feasibility problems is proposed in which the end-points of strings of sequential projections onto the constraints are averaged. The scheme, employing Bregman projections, is analyzed with the aid of an extended product space formalism. For the case of orthogonal projections we also give a relaxed version. Along with the well-known purely sequential and fully simultaneous cases, the new scheme includes many other inherently parallel algorithmic options depending on the choice of strings. Convergence in the consistent case is proven, and an application to optimization over linear inequalities is given.
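With orthogonal projections onto hyperplanes, the string-averaging scheme can be sketched as follows (toy constraints; one long string recovers the purely sequential case, all-singleton strings the fully simultaneous one):

```python
import numpy as np

def proj_hyperplane(x, a, b):
    """Orthogonal projection of x onto the hyperplane <a, x> = b."""
    return x + ((b - a @ x) / (a @ a)) * a

def string_averaging(x0, strings, iters=100):
    """Run sequential projections along each string, then average the
    strings' end-points; the strings can be processed in parallel."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        ends = []
        for string in strings:
            y = x
            for a, b in string:       # sequential projections along a string
                y = proj_hyperplane(y, a, b)
            ends.append(y)            # keep the string's end-point
        x = np.mean(ends, axis=0)     # average the end-points
    return x

# Two singleton strings: the hyperplanes x1 = 1 and x2 = 2
strings = [[(np.array([1.0, 0.0]), 1.0)],
           [(np.array([0.0, 1.0]), 2.0)]]
x = string_averaging(np.zeros(2), strings)
```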


Linear Algebra and its Applications | 1980

On some methods for entropy maximization and matrix scaling

Tommy Elfving

We describe and survey in this paper iterative algorithms for solving the discrete maximum entropy problem with linear equality constraints. This problem has applications e.g. in image reconstruction from projections, transportation planning, and matrix scaling. In particular we study local convergence and the asymptotic rate of convergence as a function of the iteration parameter. For the trip distribution problem in transportation planning and the equivalent problem of scaling a positive matrix to achieve a priori given row and column sums, it is shown how the iteration parameters can be chosen in an optimal way. We also consider the related problem of finding a matrix X, diagonally similar to a given matrix, such that corresponding row and column norms in X are all equal. Reports of some numerical tests are given.
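The matrix-scaling problem mentioned here, i.e. rescaling a positive matrix to prescribed row and column sums, is classically solved by alternating row and column normalizations (often called RAS or Sinkhorn balancing). A minimal sketch with toy data:

```python
import numpy as np

def scale_to_margins(M, row_sums, col_sums, iters=500):
    """RAS / Sinkhorn balancing: alternately rescale the rows and the
    columns of a positive matrix until it attains the prescribed row
    and column sums (which must have equal totals)."""
    X = np.array(M, dtype=float)
    for _ in range(iters):
        X *= (row_sums / X.sum(axis=1))[:, None]   # fix row sums
        X *= (col_sums / X.sum(axis=0))[None, :]   # fix column sums
    return X

# Scale a positive 2x2 matrix to a doubly stochastic one
M = np.array([[1.0, 2.0], [3.0, 4.0]])
X = scale_to_margins(M, row_sums=np.array([1.0, 1.0]),
                     col_sums=np.array([1.0, 1.0]))
```

Each half-step exactly satisfies one set of margins and slightly disturbs the other; for positive matrices the alternation converges linearly.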


SIAM Journal on Matrix Analysis and Applications | 1998

Stability of Conjugate Gradient and Lanczos Methods for Linear Least Squares Problems

Åke Björck; Tommy Elfving; Zdenek Strakos

The conjugate gradient method applied to the normal equations A^TAx = A^Tb (CGLS) is often used for solving large sparse linear least squares problems. The mathematically equivalent algorithm LSQR, based on the Lanczos bidiagonalization process, is an often recommended alternative. In this paper, the achievable accuracy of different conjugate gradient and Lanczos methods in finite precision is studied. It is shown that an implementation of algorithm CGLS in which the residual s_k = A^T(b - Ax_k) of the normal equations is recurred will not in general achieve accurate solutions. The same conclusion holds for the method based on Lanczos bidiagonalization with starting vector A^Tb. For the preferred implementation of CGLS we bound the error ||r - r_k|| of the computed residual r_k. Numerical tests are given that confirm a conjecture of backward stability. The achievable accuracy of LSQR is shown to be similar. The analysis essentially also covers the preconditioned case.


Numerische Mathematik | 1987

An algorithm for computing constrained smoothing spline functions

Tommy Elfving; Lars-Erik Andersson

The problem of computing constrained spline functions, both for ideal data and noisy data, is considered. Two types of constraints are treated, namely convexity and convexity together with monotonicity. A characterization result for constrained smoothing splines is derived. Based on this result a Newton-type algorithm is defined for computing the constrained spline function. Thereby it is possible to apply the constraints over a whole interval rather than at a discrete set of points. Results from numerical experiments are included.

Collaboration


Dive into Tommy Elfving's collaborations.

Top Co-Authors

Per Christian Hansen
Technical University of Denmark

Gabor T. Herman
City University of New York

G. Iliev
Linköping University