Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jeffrey A. Fessler is active.

Publication


Featured research published by Jeffrey A. Fessler.


IEEE Signal Processing Magazine | 1997

Positron-emission tomography

John M. Ollinger; Jeffrey A. Fessler

We review positron-emission tomography (PET), which has inherent advantages that avoid the shortcomings of other nuclear medicine imaging methods. PET image reconstruction methods with origins in signal and image processing are discussed, including the potential problems of these methods. A summary of statistical image reconstruction methods, which can yield improved image quality, is also presented.


IEEE Transactions on Signal Processing | 1994

Space-alternating generalized expectation-maximization algorithm

Jeffrey A. Fessler; Alfred O. Hero

The expectation-maximization (EM) method can facilitate maximizing likelihood functions that arise in statistical estimation problems. In the classical EM paradigm, one iteratively maximizes the conditional log-likelihood of a single unobservable complete data space, rather than maximizing the intractable likelihood function for the measured or incomplete data. EM algorithms update all parameters simultaneously, which has two drawbacks: 1) slow convergence, and 2) difficult maximization steps due to coupling when smoothness penalties are used. The paper describes the space-alternating generalized EM (SAGE) method, which updates the parameters sequentially by alternating between several small hidden-data spaces defined by the algorithm designer. The authors prove that the sequence of estimates monotonically increases the penalized-likelihood objective, derive asymptotic convergence rates, and provide sufficient conditions for monotone convergence in norm. Two signal processing applications illustrate the method: estimation of superimposed signals in Gaussian noise, and image reconstruction from Poisson measurements. In both applications, the SAGE algorithms easily accommodate smoothness penalties and converge faster than the EM algorithms.
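The sequential-vs-simultaneous contrast drawn in this abstract can be illustrated with a toy analogue (this is not the SAGE algorithm itself): on a coupled quadratic objective, updating one parameter at a time while holding the others at their latest values (Gauss-Seidel, the spirit of SAGE) converges faster per sweep than updating all parameters at once from the previous iterate (Jacobi, the spirit of classical EM). All matrices and values below are made-up illustrative numbers.

```python
import numpy as np

# Coupled quadratic objective 0.5*x'Qx - b'x; off-diagonals couple the parameters.
Q = np.array([[4.0, -1.0, -1.0],
              [-1.0, 3.0, -1.0],
              [-1.0, -1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
x_star = np.linalg.solve(Q, b)             # exact minimizer, for reference

def jacobi_step(x):
    # simultaneous update: every coordinate computed from the previous iterate
    d = np.diag(Q)
    return (b - Q @ x + d * x) / d

def gauss_seidel_step(x):
    # sequential update: each coordinate reuses the freshest values of the others
    x = x.copy()
    for i in range(len(x)):
        x[i] = (b[i] - Q[i] @ x + Q[i, i] * x[i]) / Q[i, i]
    return x

xj, xs = np.zeros(3), np.zeros(3)
for _ in range(20):
    xj, xs = jacobi_step(xj), gauss_seidel_step(xs)

print(np.linalg.norm(xj - x_star), np.linalg.norm(xs - x_star))
```

After the same number of sweeps, the sequential iterate is markedly closer to the minimizer, mirroring the faster convergence the paper reports for SAGE over EM.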


IEEE Transactions on Signal Processing | 2003

Nonuniform fast Fourier transforms using min-max interpolation

Jeffrey A. Fessler; Bradley P. Sutton

The fast Fourier transform (FFT) is used widely in signal processing for efficient computation of the FT of finite-length signals over a set of uniformly spaced frequency locations. However, in many applications, one requires nonuniform sampling in the frequency domain, i.e., a nonuniform FT. Several papers have described fast approximations for the nonuniform FT based on interpolating an oversampled FFT. This paper presents an interpolation method for the nonuniform FT that is optimal in the min-max sense of minimizing the worst-case approximation error over all signals of unit norm. The proposed method easily generalizes to multidimensional signals. Numerical results show that the min-max approach provides substantially lower approximation errors than conventional interpolation methods. The min-max criterion is also useful for optimizing the parameters of interpolation kernels such as the Kaiser-Bessel function.
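The core idea of the paper can be sketched numerically: evaluate the DTFT of a short signal at arbitrary (nonuniform) frequencies by interpolating an oversampled FFT. Plain linear interpolation stands in here for the paper's min-max-optimal interpolator, and the signal length, oversampling factor, and test frequencies are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 32                       # signal length and oversampling factor
x = rng.uniform(-1.0, 1.0, N)      # arbitrary test signal

# Oversampled FFT gives uniform DTFT samples at omega_k = 2*pi*k / (K*N).
X_grid = np.fft.fft(x, K * N)
grid = 2 * np.pi * np.arange(K * N) / (K * N)

def nufft_linear(omegas):
    """Approximate X(omega) = sum_n x[n] e^{-i omega n} by linear interpolation."""
    re = np.interp(omegas, grid, X_grid.real, period=2 * np.pi)
    im = np.interp(omegas, grid, X_grid.imag, period=2 * np.pi)
    return re + 1j * im

# Exact nonuniform DTFT, for comparison.
omegas = rng.uniform(0.0, 2 * np.pi, 50)
exact = np.exp(-1j * np.outer(omegas, np.arange(N))) @ x

err = np.max(np.abs(nufft_linear(omegas) - exact))
print(err)
```

The error is already small with heavy oversampling and a crude kernel; the paper's contribution is choosing the interpolator to minimize the worst-case error over all unit-norm signals, which allows much smaller oversampling factors.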


IEEE Transactions on Image Processing | 1996

Spatial resolution properties of penalized-likelihood image reconstruction: space-invariant tomographs

Jeffrey A. Fessler; W.L. Rogers

This paper examines the spatial resolution properties of penalized-likelihood image reconstruction methods by analyzing the local impulse response. The analysis shows that standard regularization penalties induce space-variant local impulse response functions, even for space-invariant tomographic systems. Paradoxically, for emission image reconstruction, the local resolution is generally poorest in high-count regions. We show that the linearized local impulse response induced by quadratic roughness penalties depends on the object only through its projections. This analysis leads naturally to a modified regularization penalty that yields reconstructed images with nearly uniform resolution. The modified penalty also provides a very practical method for choosing the regularization parameter to obtain a specified resolution in images reconstructed by penalized-likelihood methods.
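The space-variance described in this abstract can be reproduced in a small 1-D toy. For a penalized weighted-least-squares reconstructor, the linearized local impulse response at voxel j is l_j = (A'WA + beta*R)^{-1} A'WA e_j; even with a shift-invariant system A, a nonuniform statistical weighting W makes l_j vary with position. The blur width, weights, and penalty strength below are all illustrative assumptions.

```python
import numpy as np

n, beta = 32, 2.0

# Shift-invariant (circulant) Gaussian blur as the system matrix A.
dist = lambda i, j: min(abs(i - j), n - abs(i - j))
A = np.array([[np.exp(-0.5 * dist(i, j) ** 2) for j in range(n)] for i in range(n)])

# Circulant first-order roughness penalty R.
R = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
R[0, -1] = R[-1, 0] = -1.0

def local_response(W, j):
    F = A.T @ W @ A                 # Fisher-information-like term
    e = np.zeros(n)
    e[j] = 1.0
    return np.linalg.solve(F + beta * R, F @ e)

# Nonuniform weights: left half of the object weighted 10x more.
W = np.diag(np.where(np.arange(n) < n // 2, 10.0, 1.0))
l_left = local_response(W, 8)       # voxel in the heavily weighted region
l_right = local_response(W, 24)     # voxel in the lightly weighted region
print(l_left.max(), l_right.max())
```

The two impulse responses have visibly different peaks, i.e. different local resolution, despite the shift-invariant blur; this is the effect the paper's modified penalty is designed to equalize.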


IEEE Transactions on Medical Imaging | 2002

Statistical image reconstruction for polyenergetic X-ray computed tomography

Idris A. Elbakri; Jeffrey A. Fessler

This paper describes a statistical image reconstruction method for X-ray computed tomography (CT) that is based on a physical model that accounts for the polyenergetic X-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. We assume that the object consists of a given number of nonoverlapping materials, such as soft tissue and bone. The attenuation coefficient of each voxel is the product of its unknown density and a known energy-dependent mass attenuation coefficient. We formulate a penalized-likelihood function for this polyenergetic model and develop an ordered-subsets iterative algorithm for estimating the unknown densities in each voxel. The algorithm monotonically decreases the cost function at each iteration when one subset is used. Applying this method to simulated X-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artifacts.
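The measurement nonlinearity this abstract refers to is easy to demonstrate numerically: the expected transmission through thickness t of a material is y(t) = sum_e S(e) exp(-mu(e) t), so -log y(t) is not linear in t, which is the beam-hardening effect. The spectrum bins and attenuation values below are made-up illustrative numbers, not real X-ray data.

```python
import numpy as np

energies = np.array([40.0, 70.0, 100.0])   # hypothetical spectral bins (keV)
spectrum = np.array([0.3, 0.5, 0.2])       # normalized source spectrum S(e)
mu = np.array([0.6, 0.25, 0.18])           # attenuation per cm at each bin

def transmission(t):
    # polyenergetic Beer-Lambert law: spectrum-weighted sum of exponentials
    return np.sum(spectrum * np.exp(-mu * t))

t = np.array([1.0, 2.0, 4.0])
logs = np.array([-np.log(transmission(ti)) for ti in t])

# A monoenergetic beam would give exactly a factor of 2 from t=1 to t=2
# and from t=2 to t=4; the polyenergetic beam falls short ("hardens").
print(logs[1] / logs[0], logs[2] / logs[1])
```

Both ratios come out below 2 because low-energy photons are preferentially absorbed, leaving a "harder" beam; ignoring this sublinearity is what produces the artifacts the paper's statistical model removes.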


IEEE Transactions on Medical Imaging | 1999

Monotonic algorithms for transmission tomography

Hakan Erdogan; Jeffrey A. Fessler

We present a framework for designing fast and monotonic algorithms for transmission tomography penalized-likelihood image reconstruction. The new algorithms are based on paraboloidal surrogate functions for the log likelihood. Due to the form of the log-likelihood function it is possible to find low curvature surrogate functions that guarantee monotonicity. Unlike previous methods, the proposed surrogate functions lead to monotonic algorithms even for the nonconvex log likelihood that arises due to background events, such as scatter and random coincidences. The gradient and the curvature of the likelihood terms are evaluated only once per iteration. Since the problem is simplified at each iteration, the CPU time is less than that of current algorithms which directly minimize the objective, yet the convergence rate is comparable. The simplicity, monotonicity, and speed of the new algorithms are quite attractive. The convergence rates of the algorithms are demonstrated using real and simulated PET transmission scans.
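The surrogate principle in this abstract can be sketched in one dimension (a toy stand-in for the transmission log likelihood, not the paper's algorithm): replace the objective at each iteration by a parabola that lies above it everywhere and touches it at the current point, then minimize the parabola. Here f(x) = log(cosh(x)) has curvature f'' <= 1, so a parabola with curvature c = 1 majorizes f, and each step is guaranteed not to increase f.

```python
import math

f = lambda x: math.log(math.cosh(x))   # objective, minimized at x = 0
df = lambda x: math.tanh(x)            # its derivative
c = 1.0                                # upper bound on f'': parabola curvature

x, history = 2.0, []
for _ in range(10):
    history.append(f(x))
    x = x - df(x) / c                  # exact minimizer of the parabola surrogate

print(x, history[:3])
```

Every iterate lands at the minimum of a surrogate that upper-bounds f, so the objective sequence is monotone nonincreasing by construction, the same guarantee the paper obtains for its paraboloidal surrogates even in the nonconvex case.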


IEEE Transactions on Image Processing | 1995

Globally convergent algorithms for maximum a posteriori transmission tomography

Kenneth Lange; Jeffrey A. Fessler

This paper reviews and compares three maximum likelihood algorithms for transmission tomography. One of these algorithms is the EM algorithm, one is based on a convexity argument devised by De Pierro (see IEEE Trans. Med. Imaging, vol.12, p.328-333, 1993) in the context of emission tomography, and one is an ad hoc gradient algorithm. The algorithms enjoy desirable local and global convergence properties and combine gracefully with Bayesian smoothing priors. Preliminary numerical testing of the algorithms on simulated data suggests that the convex algorithm and the ad hoc gradient algorithm are computationally superior to the EM algorithm. This superiority stems from the larger number of exponentiations required by the EM algorithm. The convex and gradient algorithms are well adapted to parallel computing.


Circulation | 1996

Simultaneous Transmission/Emission Myocardial Perfusion Tomography: Diagnostic Accuracy of Attenuation-Corrected 99mTc-Sestamibi Single-Photon Emission Computed Tomography

Edward P. Ficaro; Jeffrey A. Fessler; Paul D. Shreve; J.N. Kritzman; Patricia A. Rose; James R. Corbett

BACKGROUND The purpose of the present study was to assess the diagnostic performance of attenuation-corrected (AC) stress 99mTc-sestamibi cardiac single-photon emission computed tomography (SPECT) for the identification of coronary heart disease (CHD). METHODS AND RESULTS With a triple-detector SPECT system with a 241Am transmission line source, simultaneous transmission/emission tomography (TCT/ECT) was performed on 60 patients with angiographic coronary disease and 59 patients with ≤5% likelihood of CHD. Iteratively reconstructed AC stress 99mTc-sestamibi perfusion images were compared with uncorrected (NC) filtered-backprojection images. Normal database polar maps were constructed from AC and NC images for quantitative analyses. From the low-likelihood patients, the visual and quantitative normalcy rates increased from 0.88 and 0.76 for NC to 0.98 and 0.95 for AC (P < .05). For the detection of CHD, the receiver operating characteristic curves for the AC images demonstrated improved discrimination capacity (P < .05), and sensitivity/specificity values increased from 0.78/0.46 (NC) to 0.84/0.82 (AC) with visual analysis and from 0.84/0.46 (NC) to 0.88/0.82 (AC) with quantitative analysis. For localization of stenosed vessels, visual and quantitative sensitivity values were 0.51 and 0.63 for NC and 0.64 and 0.78 for AC images (P < .05), respectively. CONCLUSIONS TCT/ECT myocardial perfusion imaging significantly improves the diagnostic accuracy of cardiac SPECT for the detection and localization of CHD. Clinical use of TCT/ECT imaging deserves serious consideration.


Magnetic Resonance in Medicine | 2006

Spatial Domain Method for the Design of RF Pulses in Multicoil Parallel Excitation

William A. Grissom; Chun-Yu Yip; Zhenghui Zhang; V. Andrew Stenger; Jeffrey A. Fessler; Douglas C. Noll

Parallel excitation has been introduced as a means of accelerating multidimensional, spatially-selective excitation using multiple transmit coils, each driven by a unique RF pulse. Previous approaches to RF pulse design in parallel excitation were either formulated in the frequency domain or restricted to echo-planar trajectories, or both. This paper presents an approach that is formulated as a quadratic optimization problem in the spatial domain and allows the use of arbitrary k-space trajectories. Compared to frequency domain approaches, the new design method has some important advantages. It allows for the specification of a region of interest (ROI), which improves excitation accuracy at high speedup factors. It allows for magnetic field inhomogeneity compensation during excitation. Regularization may be used to control integrated and peak pulse power. The effects of Bloch equation nonlinearity on the large-tip-angle excitation error of RF pulses designed with the method are investigated, and the utility of Tikhonov regularization in mitigating this error is demonstrated.
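The design problem described here is, at its core, a Tikhonov-regularized linear least-squares fit: choose pulse samples b minimizing ||A b - d||^2 + lambda ||b||^2, where A maps pulse samples to the excitation pattern and d is the desired pattern. The small random A and d below are placeholders, not a real system matrix; the sketch only shows how the regularization weight trades excitation accuracy against pulse power.

```python
import numpy as np

rng = np.random.default_rng(2)
# Placeholder complex system matrix (pattern samples x pulse samples) and target.
A = rng.standard_normal((40, 12)) + 1j * rng.standard_normal((40, 12))
d = rng.standard_normal(40) + 1j * rng.standard_normal(40)

def design(lam):
    # Tikhonov-regularized solution via the normal equations:
    # b = (A^H A + lam I)^{-1} A^H d
    AhA = A.conj().T @ A + lam * np.eye(A.shape[1])
    return np.linalg.solve(AhA, A.conj().T @ d)

b_lo, b_hi = design(0.01), design(10.0)
print(np.linalg.norm(b_lo), np.linalg.norm(b_hi))
```

A heavier regularization weight yields a lower-norm (lower-power) pulse at the cost of a larger residual, which is the lever the paper uses to control integrated pulse power.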


IEEE Transactions on Medical Imaging | 2003

Fast, iterative image reconstruction for MRI in the presence of field inhomogeneities

Bradley P. Sutton; Douglas C. Noll; Jeffrey A. Fessler

In magnetic resonance imaging, magnetic field inhomogeneities cause distortions in images that are reconstructed by conventional fast Fourier transform (FFT) methods. Several noniterative image reconstruction methods are used currently to compensate for field inhomogeneities, but these methods assume that the field map that characterizes the off-resonance frequencies is spatially smooth. Recently, iterative methods have been proposed that can circumvent this assumption and provide improved compensation for off-resonance effects. However, straightforward implementations of such iterative methods suffer from inconveniently long computation times. This paper describes a tool for accelerating iterative reconstruction of field-corrected MR images: a novel time-segmented approximation to the MR signal equation. We use a min-max formulation to derive the temporal interpolator. Speedups of around 60 were achieved by combining this temporal interpolator with a nonuniform fast Fourier transform with normalized root mean squared approximation errors of 0.07%. The proposed method provides fast, accurate, field-corrected image reconstruction even when the field map is not smooth.
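The time-segmented approximation mentioned above can be sketched as follows: the field-inhomogeneity phase term exp(-i omega t) is expensive to apply for every (voxel, time) pair, so it is approximated by interpolating between its values at a few time breakpoints t_l. Linear interpolation stands in for the paper's min-max temporal interpolator, and the field-map range, readout length, and segment count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
T, L = 0.020, 8                               # readout length (s), time segments
omega = rng.uniform(-50.0, 50.0, 64)          # off-resonance map (rad/s), made up
t = np.linspace(0.0, T, 200)                  # signal sample times

# Full (voxel x time) phase matrix -- what we want to avoid computing directly.
exact = np.exp(-1j * np.outer(omega, t))

# Evaluate the phase only at L+1 breakpoints, then interpolate across time.
t_l = np.linspace(0.0, T, L + 1)
basis = np.exp(-1j * np.outer(omega, t_l))
approx = np.empty_like(exact)
for v in range(len(omega)):
    approx[v] = np.interp(t, t_l, basis[v].real) + 1j * np.interp(t, t_l, basis[v].imag)

err = np.max(np.abs(approx - exact))
print(err)
```

A handful of segments already reproduces the phase term accurately; in the paper each segment's contribution is applied with a NUFFT, which is where the reported speedups of around 60 come from.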

Collaboration


Dive into Jeffrey A. Fessler's collaborations.

Top Co-Authors


W.L. Rogers

University of Michigan


Se Young Chun

Ulsan National Institute of Science and Technology
