Stanley J. Reeves
Auburn University
Publications
Featured research published by Stanley J. Reeves.
IEEE Transactions on Image Processing | 1992
Stanley J. Reeves; Russell M. Mersereau
The point spread function (PSF) of a blurred image is often unknown a priori; the blur must first be identified from the degraded image data before the image can be restored. Generalized cross-validation (GCV) is introduced to address the blur identification problem. The GCV criterion identifies model parameters for the blur, the image, and the regularization parameter, providing all the information necessary to restore the image. Experiments are presented which show that GCV is capable of yielding good identification results. A comparison of the GCV criterion with maximum-likelihood (ML) estimation shows that GCV often outperforms ML in identifying the blur and image model parameters.
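As an illustration of how a GCV criterion can drive blur identification in the circulant (DFT-diagonal) setting, the sketch below scores candidate blur and regularization parameters and keeps the pair with the lowest GCV value. The Gaussian PSF parameterization, the Laplacian regularizer, and all numerical values are assumptions chosen for the example; this is not the authors' code.

```python
import numpy as np

def gcv(sigma, lam, G, shape):
    """GCV score for a Gaussian blur of width `sigma` and regularization
    weight `lam`, evaluated in the DFT domain (circulant assumption)."""
    M, N = shape
    u = np.fft.fftfreq(M)[:, None]
    v = np.fft.fftfreq(N)[None, :]
    H = np.exp(-2 * np.pi**2 * sigma**2 * (u**2 + v**2))          # Gaussian OTF
    L = (4 - 2 * np.cos(2 * np.pi * u) - 2 * np.cos(2 * np.pi * v)) ** 2  # |Laplacian eigenvalues|^2
    D = lam * L / (np.abs(H) ** 2 + lam * L)      # per-frequency (I - influence matrix)
    num = np.sum(np.abs(D * G) ** 2) / G.size     # (1/N) ||(I - A) g||^2
    den = (np.sum(D) / G.size) ** 2               # ((1/N) trace(I - A))^2
    return num / den

# Toy usage: blur a synthetic image, then identify (sigma, lambda) by grid search.
rng = np.random.default_rng(0)
f = rng.standard_normal((64, 64)).cumsum(0).cumsum(1)             # smooth-ish test image
u = np.fft.fftfreq(64)[:, None]; v = np.fft.fftfreq(64)[None, :]
H_true = np.exp(-2 * np.pi**2 * 1.5**2 * (u**2 + v**2))
g = np.real(np.fft.ifft2(H_true * np.fft.fft2(f))) + 0.5 * rng.standard_normal((64, 64))
G = np.fft.fft2(g)

scores = [(gcv(s, l, G, g.shape), s, l)
          for s in np.linspace(0.5, 3.0, 11) for l in np.logspace(-4, 0, 9)]
best = min(scores)
print("estimated sigma, lambda:", best[1], best[2])
```

Grid search is used only for clarity; any scalar optimizer over (sigma, lambda) would serve the same purpose.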
Optical Engineering | 1990
Stanley J. Reeves; Russell M. Mersereau
Regularization is an effective method for obtaining satisfactory solutions to image restoration problems. The application of regularization necessitates a choice of the regularization parameter as well as the stabilizing functional. For most problems of interest, the best choices are not known a priori. We present a method for obtaining optimal estimates of the regularization parameter and stabilizing functional directly from the degraded image data. The method of generalized cross-validation (GCV) is used to generate the estimates. Implementation of GCV requires the computation of the system eigenvalues. Certain assumptions are made regarding the structure of the degradation so that the GCV criterion can be implemented efficiently. Furthermore, the assumptions on the matrix structure allow the regularization operator eigenvalues to be expressed as simple parametric functions. By choosing an appropriate structure for the regularization operator, we use the GCV criterion to estimate optimal parameters of the regularization operator and thus the stabilizing functional. Experimental results are presented that show the ability of GCV to give extremely reliable estimates for the regularization parameter and operator. By allowing both the degree and the manner of smoothing to be determined from the data, GCV-based regularization yields solutions that would otherwise be unattainable without a priori information.
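In the same circulant setting, the GCV score can also be searched over a parametric family of regularization operators. The sketch below uses a power-law operator |L(u,v)|^2 = (u^2 + v^2)^p as a stand-in for the parametric operators discussed in the paper, selects both lambda and p by GCV, and then restores; the blur, test data, and parameter grids are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gcv_reg(lam, p, G, H):
    """GCV score for regularization weight `lam` and a power-law regularization
    operator |L(u,v)|^2 = (u^2 + v^2)^p, evaluated on the DFT grid."""
    M, N = G.shape
    u = np.fft.fftfreq(M)[:, None]
    v = np.fft.fftfreq(N)[None, :]
    L = (u ** 2 + v ** 2) ** p
    D = lam * L / (np.abs(H) ** 2 + lam * L)      # per-frequency (I - influence matrix)
    return (np.sum(np.abs(D * G) ** 2) / G.size) / (np.sum(D) / G.size) ** 2

def restore(lam, p, G, H):
    """Tikhonov restoration with the GCV-selected parameters."""
    M, N = G.shape
    u = np.fft.fftfreq(M)[:, None]
    v = np.fft.fftfreq(N)[None, :]
    L = (u ** 2 + v ** 2) ** p
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + lam * L)))

# Toy usage: known Gaussian blur; choose (lambda, p) by grid search on the GCV score.
rng = np.random.default_rng(1)
f = rng.standard_normal((64, 64)).cumsum(0).cumsum(1)
u = np.fft.fftfreq(64)[:, None]; v = np.fft.fftfreq(64)[None, :]
H = np.exp(-2 * np.pi**2 * (u**2 + v**2))
g = np.real(np.fft.ifft2(H * np.fft.fft2(f))) + 0.3 * rng.standard_normal((64, 64))
G = np.fft.fft2(g)
best = min((gcv_reg(l, p, G, H), l, p)
           for l in np.logspace(-5, 0, 11) for p in (1.0, 1.5, 2.0, 3.0))
print("GCV-selected lambda, p:", best[1], best[2])
f_hat = restore(best[1], best[2], G, H)
```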
IEEE Transactions on Image Processing | 2006
Ruimin Pan; Stanley J. Reeves
The regularization of the least-squares criterion is an effective approach in image restoration to reduce noise amplification. To avoid the smoothing of edges, edge-preserving regularization using a Gaussian Markov random field (GMRF) model is often used to allow realistic edge modeling and provide stable maximum a posteriori (MAP) solutions. However, this approach is computationally demanding because the introduction of a non-Gaussian image prior makes the restoration problem shift-variant. In this case, a direct solution using fast Fourier transforms (FFTs) is not possible, even when the blurring is shift-invariant. We consider a class of edge-preserving GMRF functions that are convex and have nonquadratic regions that impose less smoothing on edges. We propose a decomposition-enabled edge-preserving image restoration algorithm for maximizing the likelihood function. By decomposing the problem into two subproblems, with one shift-invariant and the other shift-variant, our algorithm exploits the sparsity of edges to define an FFT-based iteration that requires few iterations and is guaranteed to converge to the MAP estimate.
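The following sketch is not the paper's decomposition, but an additive half-quadratic iteration in the same spirit: each pass updates a set of auxiliary "edge" variables that are zero except near edges (a cheap, pixelwise, shift-variant step) and then solves one shift-invariant quadratic problem with FFTs. A Huber penalty on first differences stands in for the convex edge-preserving GMRF potential, and h_otf is assumed to be the DFT of the shift-invariant PSF.

```python
import numpy as np

def huber_grad(t, T):
    """Derivative of the Huber penalty (quadratic near 0, linear beyond T)."""
    return np.clip(t, -T, T)

def edge_preserving_restore(g, h_otf, lam=0.05, T=0.1, n_iter=50):
    """Additive half-quadratic iteration for
        min_f ||Hf - g||^2 + lam * sum Huber(Df),
    alternating a pixelwise auxiliary update (sparse when edges are sparse)
    with one FFT-diagonal quadratic solve per iteration."""
    M, N = g.shape
    # DFT eigenvalues of horizontal/vertical first-difference operators
    dx = np.zeros((M, N)); dx[0, 0] = -1; dx[0, 1] = 1
    dy = np.zeros((M, N)); dy[0, 0] = -1; dy[1, 0] = 1
    Dx, Dy = np.fft.fft2(dx), np.fft.fft2(dy)
    G = np.fft.fft2(g)
    denom = np.abs(h_otf) ** 2 + 0.5 * lam * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    f = g.copy()
    for _ in range(n_iter):
        F = np.fft.fft2(f)
        tx = np.real(np.fft.ifft2(Dx * F))      # horizontal differences
        ty = np.real(np.fft.ifft2(Dy * F))      # vertical differences
        bx = tx - huber_grad(tx, T)             # zero except where |difference| > T (edges)
        by = ty - huber_grad(ty, T)
        rhs = (np.conj(h_otf) * G
               + 0.5 * lam * (np.conj(Dx) * np.fft.fft2(bx)
                              + np.conj(Dy) * np.fft.fft2(by)))
        f = np.real(np.fft.ifft2(rhs / denom))  # shift-invariant quadratic solve
    return f

# Toy usage: piecewise-constant scene, Gaussian blur plus noise (illustrative only).
rng = np.random.default_rng(2)
f_true = np.zeros((64, 64)); f_true[16:48, 16:48] = 1.0
u = np.fft.fftfreq(64)[:, None]; v = np.fft.fftfreq(64)[None, :]
H = np.exp(-2 * np.pi**2 * (u**2 + v**2))
g = np.real(np.fft.ifft2(H * np.fft.fft2(f_true))) + 0.02 * rng.standard_normal((64, 64))
f_hat = edge_preserving_restore(g, H, lam=0.05, T=0.05)
```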
IEEE Transactions on Signal Processing | 1999
Stanley J. Reeves; Zhao Zhe
Some signal reconstruction problems allow for flexibility in the selection of observations and, hence, the signal formation equation. In such cases, we have the opportunity to determine the best combination of observations before acquiring the data. We present and analyze two classes of sequential algorithms to select observations: sequential backward selection (SBS) and sequential forward selection (SFS). Although both are suboptimal, they perform consistently well. We analyze the computational complexity of various forms of SBS and SFS and develop upper bounds on the sum of squared errors (SSE) of the solutions obtained by SBS and SFS.
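A brute-force sketch of sequential backward selection is shown below, using trace((A_S^T A_S)^{-1}) as a stand-in noise-amplification criterion; SFS mirrors the loop with additions instead of removals (a rank-one-update version appears after the 1995 correspondence below). The efficient forms of SBS/SFS and the SSE bounds developed in the paper are not reproduced here.

```python
import numpy as np

def sbs(A, m_keep):
    """Sequential backward selection: start from all rows of A and greedily
    remove, one at a time, the row whose removal least increases
    trace((A_S^T A_S)^{-1}).  Brute force: every candidate removal is scored
    by an explicit matrix inverse."""
    selected = list(range(A.shape[0]))
    while len(selected) > m_keep:
        best_score, best_i = None, None
        for i in selected:
            trial = [j for j in selected if j != i]
            M = A[trial].T @ A[trial]
            if np.linalg.matrix_rank(M) < A.shape[1]:
                continue                      # removal would make the problem singular
            score = np.trace(np.linalg.inv(M))
            if best_score is None or score < best_score:
                best_score, best_i = score, i
        selected.remove(best_i)
    return selected

# Toy usage: keep 20 of 60 candidate observations of a 15-dimensional signal.
rng = np.random.default_rng(3)
A = rng.standard_normal((60, 15))
print(sorted(sbs(A, 20)))
```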
IEEE Transactions on Circuits and Systems for Video Technology | 1993
Stanley J. Reeves; Steven L. Eddins
In the above-titled paper (see ibid., vol. 2, pp. 91-95, Mar. 1992), R. Rosenholtz and A. Zakhor proposed an effective method for reducing blocking in transform coded images. Their method uses two projection operators and the theory of projection onto convex sets (POCS) to guarantee convergence of the iteration. One projection operator is defined from the known quantization levels used to code the transform coefficients. The other projection is based on the set of bandlimited images with a given cutoff frequency. The algorithm they actually implemented is only tenuously related to the theory of POCS. A different basis for justifying their algorithm is offered, one that provides an exact formal basis for establishing convergence and a more flexible theory for elucidating the possibilities of the algorithm.
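For reference, the two projections discussed above can be prototyped as follows, assuming an 8x8 block DCT and a uniform quantizer of step q_step; this illustrates the projection operators only, not the specific algorithm analyzed in the comment.

```python
import numpy as np
from scipy.fft import dctn, idctn

def project_quantization(x, centers, q_step, block=8):
    """Project onto images whose block-DCT coefficients lie in the quantization
    intervals centered at the transmitted (dequantized) coefficients `centers`."""
    y = x.copy()
    for i in range(0, x.shape[0], block):
        for j in range(0, x.shape[1], block):
            c = dctn(y[i:i+block, j:j+block], norm='ortho')
            lo = centers[i:i+block, j:j+block] - q_step / 2
            hi = centers[i:i+block, j:j+block] + q_step / 2
            y[i:i+block, j:j+block] = idctn(np.clip(c, lo, hi), norm='ortho')
    return y

def project_bandlimit(x, cutoff=0.4):
    """Project onto images bandlimited to |frequency| < cutoff (cycles/sample)."""
    X = np.fft.fft2(x)
    u = np.abs(np.fft.fftfreq(x.shape[0]))[:, None]
    v = np.abs(np.fft.fftfreq(x.shape[1]))[None, :]
    X[(u >= cutoff) | (v >= cutoff)] = 0
    return np.real(np.fft.ifft2(X))

def pocs_deblock(decoded, q_step, n_iter=10):
    """Alternate the two projections starting from the decoded image."""
    centers = np.empty_like(decoded)
    for i in range(0, decoded.shape[0], 8):          # block-DCT coefficients of the
        for j in range(0, decoded.shape[1], 8):      # decoded image = quantizer bin centers
            centers[i:i+8, j:j+8] = dctn(decoded[i:i+8, j:j+8], norm='ortho')
    x = decoded.copy()
    for _ in range(n_iter):
        x = project_bandlimit(x)
        x = project_quantization(x, centers, q_step)
    return x
```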
IEEE Transactions on Image Processing | 2005
Stanley J. Reeves
Fast Fourier transform (FFT)-based restorations are fast, but at the expense of assuming that the blurring and deblurring are based on circular convolution. Unfortunately, when the opposite sides of the image do not match up well in intensity, this assumption can create significant artifacts across the image. If the pixels outside the measured image window are modeled as unknown values in the restored image, boundary artifacts are avoided. However, this approach destroys the structure that makes the use of the FFT directly applicable, since the unknown image is no longer the same size as the measured image. Thus, the restoration methods available for this problem no longer have the computational efficiency of the FFT. We propose a new restoration method for the unknown boundary approach that can be implemented in a fast and flexible manner. We decompose the restoration into a sum of two independent restorations. One restoration yields an image that comes directly from a modified FFT-based approach. The other restoration involves a set of unknowns whose number equals that of the unknown boundary values. By summing the two, the artifacts are canceled. Because the second restoration has a significantly reduced set of unknowns, it can be calculated very efficiently even though no circular convolution structure exists.
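The sketch below is not the paper's two-part decomposition; it only sets up the same unknown-boundary measurement model (the restored image is larger than the measured window by the PSF support, so no circular wrap is assumed) and solves the Tikhonov normal equations by conjugate gradients, with H and its adjoint applied through FFT-based convolutions. The regularizer and all parameter values are assumptions for the example.

```python
import numpy as np
from scipy.signal import fftconvolve

def restore_unknown_boundary(g, h, lam=1e-2, n_iter=100):
    """Restore an image whose support extends beyond the measured window:
    g = crop(h * f) + n, with f larger than g by the PSF support.
    Solves (H^T H + lam I) f = H^T g by conjugate gradients, applying
    H (mode='valid') and H^T (flipped PSF, mode='full') with FFT convolutions."""
    hf = h[::-1, ::-1]                                   # flipped PSF for the adjoint
    H  = lambda f: fftconvolve(f, h,  mode='valid')      # H:   P x Q -> M x N
    Ht = lambda y: fftconvolve(y, hf, mode='full')       # H^T: M x N -> P x Q
    A  = lambda f: Ht(H(f)) + lam * f                    # normal-equations operator
    b  = Ht(g)
    f = np.zeros_like(b)
    r = b - A(f); p = r.copy()
    rs = np.sum(r * r); rs0 = rs
    for _ in range(n_iter):
        Ap = A(p)
        alpha = rs / np.sum(p * Ap)
        f += alpha * p
        r -= alpha * Ap
        rs_new = np.sum(r * r)
        if rs_new < 1e-10 * rs0:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return f

# Toy usage: 9x9 uniform blur; the restored image carries a 4-pixel unknown border.
rng = np.random.default_rng(4)
h = np.ones((9, 9)) / 81.0
f_true = rng.standard_normal((72, 72)).cumsum(0).cumsum(1)
g = fftconvolve(f_true, h, mode='valid') + 0.1 * rng.standard_normal((64, 64))
f_hat = restore_unknown_boundary(g, h, lam=0.1)
```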
IEEE Transactions on Signal Processing | 1995
Stanley J. Reeves; Larry P. Heck
In some signal reconstruction problems, the observation equations can be used as a priori information for selecting the best combination of observations before acquiring them. In the present correspondence, the authors define a selection criterion and propose efficient methods for optimizing the criterion with respect to the combination of observations. The examples illustrate the value of optimized sampling using the proposed methods.
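One common way to make such selection loops efficient is to update the criterion with the matrix inversion lemma rather than refactoring at every step. The sketch below does this for a forward-selection loop with the trace((A_S^T A_S)^{-1}) criterion; the criterion, the seeding rule, and all problem sizes are assumptions for illustration, not necessarily those of the correspondence.

```python
import numpy as np

def sfs_rank_one(A, seed_rows, m_total):
    """Sequential forward selection with Sherman-Morrison updates.
    Adding row a reduces trace(M^{-1}) (M = A_S^T A_S) by
    ||M^{-1} a||^2 / (1 + a^T M^{-1} a), so each candidate is scored in O(n^2)
    rather than by refactoring from scratch."""
    selected = list(seed_rows)                      # must already give a full-rank A_S
    Minv = np.linalg.inv(A[selected].T @ A[selected])
    while len(selected) < m_total:
        best_gain, best_i, best_Ma = -np.inf, None, None
        for i in range(A.shape[0]):
            if i in selected:
                continue
            a = A[i]
            Ma = Minv @ a
            gain = (Ma @ Ma) / (1.0 + a @ Ma)       # decrease in trace(M^{-1})
            if gain > best_gain:
                best_gain, best_i, best_Ma = gain, i, Ma
        a = A[best_i]
        Minv -= np.outer(best_Ma, best_Ma) / (1.0 + a @ best_Ma)  # rank-one update
        selected.append(best_i)
    return selected

# Toy usage: seed with the first 10 rows, then grow the set to 25 of 80 candidates.
rng = np.random.default_rng(5)
A = rng.standard_normal((80, 10))
print(sorted(sfs_rank_one(A, range(10), m_total=25)))
```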
IEEE Transactions on Image Processing | 1994
Stanley J. Reeves
It has been shown that space-variant regularization in image restoration provides better results than space-invariant regularization. However, the optimal choice of the regularization parameter is usually unknown a priori. In previous work, the generalized cross-validation (GCV) criterion was shown to provide accurate estimates of the optimal regularization parameter. The author introduces a modified form of the GCV criterion that incorporates space-variant regularization and data error terms. Furthermore, he presents an efficient method for estimating the GCV criterion for the space-variant case using iterative image restoration techniques. This method performs nearly as well as the exact criterion for the image restoration problem. In addition, he proposes a Wiener filter interpretation for choosing the local weighting of the regularization. This interpretation suggests the use of a multistage estimation procedure to estimate the optimal choice of the local regularization weights. Experiments confirm the value of the modified GCV estimation criterion as well as the multistage procedure for estimating the local regularization weights.
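As a rough illustration of space-variant regularization with a Wiener-flavored choice of local weights, the sketch below forms a pilot restoration, sets each weight from the ratio of the noise variance to the local signal activity, and then minimizes the weighted criterion by plain gradient descent. The weight rule, the Laplacian operator, and the optimizer are stand-ins chosen for brevity; the paper instead estimates the local weights and the regularization via a modified GCV criterion and a multistage procedure.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def space_variant_restore(g, h_otf, noise_var, n_iter=200):
    """Minimize ||Hf - g||^2 + sum_i w_i * (Lf)_i^2 with spatially varying
    weights w_i, where L is a circulant discrete Laplacian."""
    M, N = g.shape
    lap = np.zeros((M, N))
    lap[0, 0] = -4; lap[0, 1] = lap[0, -1] = lap[1, 0] = lap[-1, 0] = 1
    Lotf = np.fft.fft2(lap)
    G = np.fft.fft2(g)

    # Pilot (space-invariant) restoration to estimate local signal activity
    pilot = np.real(np.fft.ifft2(np.conj(h_otf) * G / (np.abs(h_otf) ** 2 + 1e-2)))
    local_var = uniform_filter(pilot ** 2, 7) - uniform_filter(pilot, 7) ** 2
    w = noise_var / np.maximum(local_var, noise_var)   # Wiener-like local weights in (0, 1]

    Hf  = lambda x: np.real(np.fft.ifft2(h_otf * np.fft.fft2(x)))
    HTf = lambda x: np.real(np.fft.ifft2(np.conj(h_otf) * np.fft.fft2(x)))
    Lf  = lambda x: np.real(np.fft.ifft2(Lotf * np.fft.fft2(x)))
    LTf = lambda x: np.real(np.fft.ifft2(np.conj(Lotf) * np.fft.fft2(x)))

    # Safe step size from an upper bound on the gradient's Lipschitz constant
    lip = 2 * (np.max(np.abs(h_otf)) ** 2 + np.max(w) * np.max(np.abs(Lotf)) ** 2)
    step = 1.0 / lip
    f = g.copy()
    for _ in range(n_iter):
        grad = 2 * HTf(Hf(f) - g) + 2 * LTf(w * Lf(f))
        f -= step * grad
    return f

# Toy usage: blurred, noisy piecewise-constant scene (illustrative only).
rng = np.random.default_rng(6)
u = np.fft.fftfreq(64)[:, None]; v = np.fft.fftfreq(64)[None, :]
H = np.exp(-2 * np.pi**2 * (u**2 + v**2))
f0 = np.zeros((64, 64)); f0[20:44, 20:44] = 1.0
g = np.real(np.fft.ifft2(H * np.fft.fft2(f0))) + 0.05 * rng.standard_normal((64, 64))
f_hat = space_variant_restore(g, H, noise_var=0.05**2)
```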
IEEE Signal Processing Letters | 1999
Stanley J. Reeves
Recent work in sparse signal reconstruction has shown that the backward greedy algorithm can select the optimal subset of unknowns if the perturbation of the data is sufficiently small. We propose an efficient implementation of the backward greedy algorithm that yields a significant improvement in computational efficiency over the standard implementation. Furthermore, we propose an efficient algorithm for the case in which the transform matrix is too large to be stored. We analyze the computational complexity and compare the algorithms, and we illustrate the improved efficiency with examples.
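A compact version of the backward greedy algorithm is sketched below. It uses the closed-form residual increase x_i^2 / [(A^T A)^{-1}]_{ii} for deleting column i together with a block-inverse downdate, so no least-squares problem is re-solved from scratch; the letter's further efficiency improvements, including the case where A cannot be stored, are not reproduced here.

```python
import numpy as np

def backward_greedy(A, y, k):
    """Backward greedy subset selection: start with all columns of A and
    repeatedly delete the column whose removal increases the residual least,
    until k columns remain.  Requires A to have full column rank."""
    idx = list(range(A.shape[1]))
    P = np.linalg.inv(A.T @ A)                 # inverse Gram matrix
    x = P @ (A.T @ y)                          # current least-squares solution
    while len(idx) > k:
        inc = x ** 2 / np.diag(P)              # SSE increase for deleting each column
        j = int(np.argmin(inc))                # local index of the column to delete
        keep = [m for m in range(len(idx)) if m != j]
        x = x[keep] - P[keep, j] * x[j] / P[j, j]                       # solution downdate
        P = P[np.ix_(keep, keep)] - np.outer(P[keep, j], P[j, keep]) / P[j, j]  # inverse downdate
        idx.pop(j)
    return idx, x

# Toy usage: recover the support of a 5-sparse vector from noisy measurements.
rng = np.random.default_rng(7)
A = rng.standard_normal((50, 30))
x_true = np.zeros(30); x_true[rng.choice(30, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(50)
support, coeffs = backward_greedy(A, y, 5)
print(sorted(support))
```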
IEEE Transactions on Medical Imaging | 2000
Yun Gao; Stanley J. Reeves
Magnetic resonance spectroscopic imaging requires a great deal of time to gather the data necessary to achieve satisfactory resolution. When the image has a limited region of support (ROS), it is possible to reconstruct the image from a subset of k-space samples. Therefore, the authors desire to choose the best possible combination of a small number of k-space samples to guarantee the quality of the reconstructed image. Sequential forward selection (SFS) is appealing as an optimization method because the previously selected sample can be observed while the next sample is selected. However, when the number of selected k-space samples is less than the number of unknowns at the beginning of the selection process, the optimality criterion is undefined and the resulting SFS algorithm cannot be used. Here, the authors present a modified form of the criterion that overcomes this problem and develop an SFS algorithm for the new criterion. Then the authors develop an efficient computational strategy for this algorithm as well as for the standard SFS algorithm. The combined algorithm efficiently selects a reduced set of k-space samples from which the ROS can be reconstructed with minimal noise amplification.
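The sketch below mimics the selection loop with a simple stand-in for the modified criterion: a ridge-stabilized noise-amplification score trace((F_S^H F_S + eps I)^{-1}), which is defined even while fewer samples than ROS unknowns have been chosen. The DFT sampling model, the ridge term, and the brute-force scoring are assumptions for illustration; the paper's actual modified criterion and its efficient computational strategy are not reproduced.

```python
import numpy as np

def select_kspace_samples(ros_mask, n_samples, eps=1e-3):
    """Greedy (SFS-style) selection of k-space samples for an image with a
    limited region of support.  Candidate rows are rows of the 2-D DFT matrix
    restricted to the ROS pixels; each candidate is scored by a
    ridge-stabilized trace((F_S^H F_S + eps*I)^{-1})."""
    M, N = ros_mask.shape
    ros = np.flatnonzero(ros_mask.ravel())
    y, x = np.divmod(ros, N)                          # ROS pixel coordinates
    ky, kx = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')
    # One row per k-space location, one column per ROS pixel
    F = np.exp(-2j * np.pi * (np.outer(ky.ravel(), y) / M +
                              np.outer(kx.ravel(), x) / N))
    selected, G = [], eps * np.eye(len(ros))          # G = F_S^H F_S + eps*I
    for _ in range(n_samples):
        best_score, best_i = np.inf, None
        for i in range(F.shape[0]):
            if i in selected:
                continue
            Gi = G + np.outer(F[i].conj(), F[i])
            score = np.trace(np.linalg.inv(Gi)).real
            if score < best_score:
                best_score, best_i = score, i
        selected.append(best_i)
        G = G + np.outer(F[best_i].conj(), F[best_i])
    return selected

# Toy usage: pick 12 k-space samples for a small circular ROS on an 8x8 grid.
yy, xx = np.mgrid[:8, :8]
mask = (yy - 4) ** 2 + (xx - 4) ** 2 <= 4
print(select_kspace_samples(mask, 12))
```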