Publication


Featured research published by Charles L. Byrne.


Inverse Problems | 2004

A unified treatment of some iterative algorithms in signal processing and image reconstruction

Charles L. Byrne

Let T be a (possibly nonlinear) continuous operator on a Hilbert space H. If, for some starting vector x, the orbit sequence {T^k x : k = 0, 1, ...} converges, then the limit z is a fixed point of T; that is, Tz = z. An operator N on a Hilbert space H is nonexpansive (ne) if, for each x and y in H, ||Nx − Ny|| ≤ ||x − y||. Even when N has fixed points, the orbit sequence {N^k x} need not converge; consider the example N = −I, where I denotes the identity operator. However, for any α ∈ (0, 1) the iterative procedure defined by x_{k+1} = (1 − α)x_k + αNx_k converges (weakly) to a fixed point of N whenever such points exist. This is the Krasnoselskii–Mann (KM) approach to finding fixed points of ne operators. A wide variety of iterative procedures used in signal processing, image reconstruction, and elsewhere are special cases of the KM iterative procedure, for particular choices of the ne operator N. These include the Gerchberg–Papoulis method for bandlimited extrapolation, the SART algorithm of Anderson and Kak, the Landweber and projected Landweber algorithms, simultaneous and sequential methods for solving the convex feasibility problem, the ART and Cimmino methods for solving linear systems of equations, the CQ algorithm for solving the split feasibility problem, and Dolidze's procedure for the variational inequality problem for monotone operators.
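The KM iteration is simple to state in code. Below is a minimal sketch, assuming NumPy; it uses the abstract's own example N = −I, for which the plain orbit {N^k x} oscillates forever but the relaxed KM iterate converges to the fixed point 0.

```python
import numpy as np

def km_iterate(N, x0, alpha=0.9, n_iters=60):
    """Krasnoselskii-Mann iteration: x <- (1 - alpha) x + alpha N(x)."""
    x = x0
    for _ in range(n_iters):
        x = (1.0 - alpha) * x + alpha * N(x)
    return x

neg_identity = lambda x: -x              # N = -I is nonexpansive; fixed point 0
x = km_iterate(neg_identity, np.array([3.0, 4.0]))
print(np.linalg.norm(x))                 # ~0: the KM orbit converges, N^k x does not
```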


Inverse Problems | 2002

Iterative oblique projection onto convex sets and the split feasibility problem

Charles L. Byrne

Let C and Q be nonempty closed convex sets in R^N and R^M, respectively, and A an M by N real matrix. The split feasibility problem (SFP) is to find x ∈ C with Ax ∈ Q, if such x exist. An iterative method for solving the SFP, called the CQ algorithm, has the following iterative step: x_{k+1} = P_C(x_k + γ A^T (P_Q − I) A x_k), where γ ∈ (0, 2/L) with L the largest eigenvalue of the matrix A^T A, and P_C and P_Q denote the orthogonal projections onto C and Q, respectively; that is, P_C x minimizes ||c − x|| over all c ∈ C. The CQ algorithm converges to a solution of the SFP or, more generally, to a minimizer of ||P_Q Ac − Ac|| over c in C, whenever such minimizers exist. The CQ algorithm involves only the orthogonal projections onto C and Q, which we shall assume are easily calculated, and involves no matrix inverses. If A is normalized so that each row has length one, then L does not exceed the maximum number of nonzero entries in any column of A, which provides a helpful estimate of L for sparse matrices. Particular cases of the CQ algorithm are the Landweber and projected Landweber methods for obtaining exact or approximate solutions of the linear equations Ax = b; the algebraic reconstruction technique of Gordon, Bender and Herman is a particular case of a block-iterative version of the CQ algorithm. One application of the CQ algorithm that is the subject of ongoing work is dynamic emission tomographic image reconstruction, in which the vector x is the concatenation of several images corresponding to successive discrete times. The matrix A and the set Q can then be selected to impose constraints on the behaviour over time of the intensities at fixed voxels, as well as to require consistency (or near consistency) with measured data.
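The iterative step translates directly into code. A minimal sketch, assuming NumPy; the box constraints standing in for C and Q, the random matrix, and the step size γ = 1/L are illustrative choices, not taken from the paper.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, n_iters=500):
    """CQ step: x <- P_C(x + gamma * A^T (P_Q - I) A x)."""
    L = np.linalg.norm(A, 2) ** 2        # largest eigenvalue of A^T A
    gamma = 1.0 / L                      # any step in (0, 2/L) works
    x = x0
    for _ in range(n_iters):
        Ax = A @ x
        x = proj_C(x + gamma * A.T @ (proj_Q(Ax) - Ax))
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))
box = lambda v: np.clip(v, 0.0, 1.0)     # easy projection onto the box [0,1]^n
x = cq_algorithm(A, box, box, np.zeros(8))
print(np.linalg.norm(box(A @ x) - A @ x))  # distance from Ax to Q shrinks
```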


IEEE Transactions on Medical Imaging | 1994

Noniterative compensation for the distance-dependent detector response and photon attenuation in SPECT imaging

Stephen J. Glick; Bill C. Penney; Michael A. King; Charles L. Byrne

A filtering approach is described that accurately compensates for the 2D distance-dependent detector response, as well as for photon attenuation in a uniform attenuating medium. The filtering method is based on the frequency distance principle (FDP), which states that points in the object at a specific source-to-detector distance provide the most significant contribution to specified frequency regions in the discrete Fourier transform (DFT) of the sinogram. By modeling the detector point spread function as a 2D Gaussian function whose width depends on the source-to-detector distance, a spatially variant inverse filter can be computed and applied to the 3D DFT of the set of all sinogram slices. To minimize noise amplification, the inverse filter is rolled off at high frequencies by using a previously published Wiener filter strategy. Attenuation compensation is performed with Bellini's method. It was observed that the tomographic point response, after distance-dependent filtering with the FDP, was approximately isotropic and varied substantially less with position than that obtained with other correction methods. Furthermore, it was shown that processing with this filtering technique provides reconstructions with minimal degradation in image fidelity.
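The Wiener roll-off idea can be illustrated in one dimension. The sketch below, assuming NumPy, shows only the regularized inverse of a Gaussian response; the width sigma and the noise-to-signal ratio are invented parameters, and the full FDP method additionally makes the width depend on source-to-detector distance through the 3D DFT of the sinogram stack.

```python
import numpy as np

def wiener_inverse(freqs, sigma, nsr=0.01):
    """Regularized inverse of a Gaussian response H(f) = exp(-2 pi^2 sigma^2 f^2)."""
    H = np.exp(-2.0 * (np.pi * sigma * freqs) ** 2)
    # Wiener form H / (H^2 + NSR): behaves like 1/H at low frequencies
    # and rolls off where H is small, limiting noise amplification.
    return H / (H ** 2 + nsr)

freqs = np.fft.fftfreq(256)
filt = wiener_inverse(freqs, sigma=2.0)
rng = np.random.default_rng(1)
blurred = rng.standard_normal(256)       # stand-in for one sinogram row
restored = np.fft.ifft(np.fft.fft(blurred) * filt).real
```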


IEEE Transactions on Information Theory | 1990

General entropy criteria for inverse problems, with applications to data compression, pattern classification, and cluster analysis

Lee K. Jones; Charles L. Byrne

Minimum distance approaches are considered for the reconstruction of a real function from finitely many linear functional values. An optimal class of distances satisfying an orthogonality condition analogous to that enjoyed by linear projections in Hilbert space is derived. These optimal distances are related to measures of distances between probability distributions recently introduced by C.R. Rao and T.K. Nayak (1985) and possess the geometric properties of cross entropy useful in speech and image compression, pattern classification, and cluster analysis. Several examples from spectrum estimation and image processing are discussed.
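One member of the cross-entropy family of distances discussed here is the Kullback–Leibler distance for nonnegative vectors. A minimal sketch, assuming NumPy; the test vectors are arbitrary.

```python
import numpy as np

def kl_distance(x, y, eps=1e-12):
    """KL(x, y) = sum x log(x/y) - x + y, for nonnegative vectors x, y."""
    x = np.asarray(x, dtype=float) + eps   # eps guards the log at zero entries
    y = np.asarray(y, dtype=float) + eps
    return float(np.sum(x * np.log(x / y) - x + y))

print(kl_distance([1.0, 2.0, 3.0], [1.5, 1.5, 3.0]))  # >= 0, zero iff x == y
```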


IEEE Transactions on Medical Imaging | 2000

Noise characterization of block-iterative reconstruction algorithms. I. Theory

Edward J. Soares; Charles L. Byrne; Stephen J. Glick

Researchers have shown increasing interest in block-iterative image reconstruction algorithms because of the computational and modeling advantages they provide. Although their convergence properties have been well documented, little is known about how they behave in the presence of noise. In this work, the authors fully characterize the ensemble statistical properties of the rescaled block-iterative expectation-maximization (RBI-EM) reconstruction algorithm and the rescaled block-iterative simultaneous multiplicative algebraic reconstruction technique (RBI-SMART). Also included in the analysis are maximum-likelihood EM (ML-EM) and ordered-subset EM (OS-EM), which are special cases of RBI-EM, and SMART, which is a special case of RBI-SMART. A theoretical formulation strategy similar to that previously outlined for ML-EM is followed for the RBI methods. The theoretical formulations in this paper rely on one approximation, namely, that the noise in the reconstructed image is small compared with the mean image. In a second paper, the approximation will be justified through Monte Carlo simulations covering a range of noise levels, iteration points, and subset orderings. The ensemble statistical parameters could then be used to evaluate objective measures of image quality.
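For readers unfamiliar with the algorithms being analyzed, the sketch below shows the multiplicative ML-EM update and its ordered-subset variant in NumPy; the random system matrix, data, and subset partition are illustrative stand-ins, not the paper's simulation setup.

```python
import numpy as np

def ml_em(A, b, n_iters=50):
    """ML-EM: x <- x * (A^T (b / Ax)) / (A^T 1), elementwise."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iters):
        x *= (A.T @ (b / (A @ x))) / sens
    return x

def os_em(A, b, subsets, n_iters=10):
    """OS-EM: one ML-EM-style update per subset of projection rows."""
    x = np.ones(A.shape[1])
    for _ in range(n_iters):
        for rows in subsets:
            As, bs = A[rows], b[rows]
            x *= (As.T @ (bs / (As @ x))) / (As.T @ np.ones(len(rows)))
    return x

rng = np.random.default_rng(2)
A = rng.random((16, 8)) + 0.1                 # positive system matrix
b = A @ (rng.random(8) + 0.5)                 # consistent noiseless data
subsets = np.array_split(np.arange(16), 4)
print(np.linalg.norm(A @ ml_em(A, b) - b))    # residual shrinks with iterations
```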


IEEE Transactions on Medical Imaging | 2000

Guest Editorial: Recent Developments in Iterative Image Reconstruction for PET and SPECT

Richard M. Leahy; Charles L. Byrne

Three articles that begin this issue of TMI describe distinct regularized approaches to iterative image reconstruction from emission tomography data [24], [27], [39]. Their publication in this issue provides us with the opportunity to explain the background to this work and speculate on the future of such methods. Model-based iterative approaches to image reconstruction in PET and SPECT allow optimal noise handling [37] and accurate system response modeling [38], [34]. Research in model-based image reconstruction methods addresses two key issues: how to select a cost function that produces images with the desired properties, and how to find these images quickly. In the first category we include work addressing statistical and physical models for the data, selection of image smoothing terms or priors that regularize the solution, and the choice of cost function to be optimized over the image space [30]. The second area addresses the issue of rapidly finding a solution once a cost function has been selected. In principle, the solutions to the concave maximization problems typically encountered in image reconstruction are independent of the numerical algorithm selected to find them. In practice, however, fast algorithms are often terminated before convergence, so that the solution becomes a function of the algorithm. Nevertheless, it is useful to maintain the distinction between classes of algorithms that compute, ostensibly, the same solution and those that optimize different cost criteria and, hence, result in different solutions. Here we are primarily concerned with the choice of iterative algorithm rather than issues relating to cost function selection. The early iterative algorithms for image reconstruction, which form the broad class of algebraic reconstruction techniques (ARTs), solve sets of simultaneous, possibly under-determined, linear equations [4], [17], [21]. While the ART methods have much in common with more recently developed statistically based iterative methods, they do not themselves directly model noise in the data. Shepp and Vardi's maximum likelihood (ML) algorithm, based on the EM (expectation maximization) methods of Dempster, Laird, and Rubin, was among the first to explicitly model the Poisson distribution of noise in photon-limited imaging systems such as PET and SPECT [37]. The EM formalism for this problem gives rise to an elegant update equation reminiscent of the earlier multiplicative ART algorithms. The improvements in image quality the EMML produced inspired a tremendous amount of subsequent research. Much of this work has addressed the problem of speeding up EMML's convergence.


Archive | 2007

Applied iterative methods

Charles L. Byrne

Applied Iterative Methods is a self-contained treatise suitable as both a reference and a graduate-level textbook in the area of iterative algorithms. It is the first book to combine subjects such as optimization, convex analysis, and approximation theory and to organize them around a detailed and mathematically sound treatment of iterative algorithms. Such algorithms are used to solve problems in diverse areas of application, most notably in medical imaging, including emission and transmission tomography and magnetic-resonance imaging, as well as in intensity-modulated radiation therapy. Other applications outside medicine include remote sensing and hyperspectral imaging. The book details a great number of different iterative algorithms that are universally applicable.


Annals of Operations Research | 2001

Proximity Function Minimization Using Multiple Bregman Projections, with Applications to Split Feasibility and Kullback–Leibler Distance Minimization

Charles L. Byrne; Yair Censor

Problems in signal detection and image recovery can sometimes be formulated as a convex feasibility problem (CFP) of finding a vector in the intersection of a finite family of closed convex sets. Algorithms for this purpose typically employ orthogonal or generalized projections onto the individual convex sets. The simultaneous multiprojection algorithm of Censor and Elfving for solving the CFP, in which different generalized projections may be used at the same time, has been shown to converge for the case of nonempty intersection; still open is the question of its convergence when the intersection of the closed convex sets is empty. Motivated by the geometric alternating minimization approach of Csiszár and Tusnády and the product space formulation of Pierra, we derive a new simultaneous multiprojection algorithm that employs generalized projections of Bregman to solve the convex feasibility problem or, in the inconsistent case, to minimize a proximity function that measures the average distance from a point to all convex sets. We assume that the Bregman distances involved are jointly convex, so that the proximity function itself is convex. When the intersection of the convex sets is empty, but the closure of the proximity function has a unique global minimizer, the sequence of iterates converges to this unique minimizer. Special cases of this algorithm include the “Expectation Maximization Maximum Likelihood” (EMML) method in emission tomography and a new convergence result for an algorithm that solves the split feasibility problem.
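The simplest instance of the simultaneous multiprojection idea takes every Bregman distance to be the squared Euclidean distance, in which case each iterate is the average of the orthogonal projections onto the individual sets. A minimal sketch, assuming NumPy, with a halfspace and a ball as hypothetical convex sets:

```python
import numpy as np

def simultaneous_projections(projs, x0, n_iters=200):
    """x <- average of the orthogonal projections of x onto each convex set."""
    x = x0
    for _ in range(n_iters):
        x = np.mean([P(x) for P in projs], axis=0)
    return x

def proj_halfspace(a, beta):
    # Projection onto {x : <a, x> <= beta}
    def P(x):
        s = a @ x - beta
        return x - (s / (a @ a)) * a if s > 0 else x
    return P

def proj_ball(center, r):
    # Projection onto the closed ball of radius r about `center`
    def P(x):
        d = x - center
        n = np.linalg.norm(d)
        return x if n <= r else center + (r / n) * d
    return P

projs = [proj_halfspace(np.array([1.0, 1.0]), 1.0),
         proj_ball(np.array([2.0, 0.0]), 1.0)]
x = simultaneous_projections(projs, np.zeros(2))
print(x)  # minimizes the average squared distance to the two sets
```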


Journal of the Optical Society of America | 1983

Image restoration and resolution enhancement

Charles L. Byrne; Raymond M. Fitzgerald; Michael A. Fiddy; Trevor J. Hall; Angela M. Darling

The ill-posed problem of restoring object information from finitely many measurements of its spectrum can be solved by using the best approximation in Hilbert spaces appropriately designed to include a priori information about object extent and shape and noise statistics. The procedures that are derived are noniterative, the linear ones extending the minimum-energy band-limited extrapolation methods (and thus related to Gerchberg–Papoulis iteration) and the nonlinear ones generalizing Burg’s maximum-entropy reconstruction of nonnegative objects.
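For context, the Gerchberg–Papoulis iteration to which the linear procedures are related alternates between reimposing the measured samples and the band limit. A minimal sketch, assuming NumPy; the signal length, band, and observation window are invented for illustration.

```python
import numpy as np

def gerchberg_papoulis(observed, known, band, n_iters=300):
    """Extrapolate a bandlimited signal from samples observed on `known`."""
    x = np.zeros_like(observed)
    for _ in range(n_iters):
        x[known] = observed[known]       # reimpose the measured samples
        X = np.fft.fft(x)
        X[~band] = 0.0                   # reimpose the band limit
        x = np.fft.ifft(X).real
    return x

n = 128
t = np.arange(n)
true = np.cos(2 * np.pi * 8 * t / n)      # exactly bandlimited test signal
band = np.abs(np.fft.fftfreq(n)) <= 0.1   # low-pass band containing it
known = np.zeros(n, dtype=bool)
known[40:90] = True                        # samples observed on a window
est = gerchberg_papoulis(true, known, band)
print(np.max(np.abs(est - true)))          # extrapolation error decreases
```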


Physics in Medicine and Biology | 1998

Reducing the influence of the partial volume effect on SPECT activity quantitation with 3D modelling of spatial resolution in iterative reconstruction

P.H. Pretorius; Michael A. King; Tinsu Pan; Daniel J. de Vries; Stephen J. Glick; Charles L. Byrne

Quantitative parameters such as the maximum and total counts in a volume are influenced by the partial volume effect. The magnitude of this effect varies with the non-stationary and anisotropic spatial resolution in SPECT slices. The objective of this investigation was to determine whether iterative reconstruction that includes modelling of the three-dimensional (3D) spatial resolution of SPECT imaging can reduce the impact of the partial volume effect on activity quantitation, compared with filtered backprojection (FBP) techniques that include low-pass filtering and linear restoration filtering using the frequency distance relationship (FDR). The iterative reconstruction algorithms investigated were maximum-likelihood expectation-maximization (MLEM), MLEM with ordered-subset acceleration (ML-OS), and MLEM with acceleration by the rescaled-block-iterative technique (ML-RBI). The SIMIND Monte Carlo code was used to simulate small hot spherical objects in an elliptical cylinder, with and without uniform background activity, as imaged by a low-energy ultra-high-resolution (LEUHR) collimator. Centre count ratios (CCRs) and total count ratios (TCRs) were determined as the observed counts over the true counts. CCRs were unstable, while TCRs had a bias of approximately 10% for all iterative techniques. The variance in the TCRs for ML-OS and ML-RBI was clearly elevated over that of MLEM, with ML-RBI having the smaller elevation. TCRs obtained with FDR-Wiener filtering had a larger bias (approximately 30%) than any of the iterative reconstruction methods, but near-stationarity was also reached. Butterworth-filtered results varied by 9.7% from the centre to the edge. The addition of background influenced the convergence rate and noise properties of the iterative techniques.

Collaboration


Dive into Charles L. Byrne's collaborations.

Top Co-Authors

Michael A. King
University of Massachusetts Medical School

Michael A. Fiddy
The Catholic University of America

Stephen J. Glick
Food and Drug Administration

Raymond M. Fitzgerald
United States Naval Research Laboratory

Tinsu Pan
University of Texas MD Anderson Cancer Center

B.C. Penney
Worcester Polytechnic Institute