Rick Chartrand
Los Alamos National Laboratory
Publications
Featured research published by Rick Chartrand.
IEEE Signal Processing Letters | 2007
Rick Chartrand
Several authors have shown recently that it is possible to reconstruct exactly a sparse signal from fewer linear measurements than would be expected from traditional sampling theory. The methods used involve computing the signal of minimum ℓ1 norm among those having the given measurements. We show that by replacing the ℓ1 norm with the ℓp norm with p < 1, exact reconstruction is possible with substantially fewer measurements. We give a theorem in this direction, and many numerical examples, both in one complex dimension, and larger-scale examples in two real dimensions.
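As a sketch of the comparison the abstract describes (with A the measurement matrix and b the measurement vector, notation assumed here), the two reconstruction problems are:

```latex
\hat{x}_1 = \arg\min_x \|x\|_1 \ \text{s.t.}\ Ax = b
\qquad \text{vs.} \qquad
\hat{x}_p = \arg\min_x \|x\|_p^p = \sum_i |x_i|^p \ \text{s.t.}\ Ax = b, \quad 0 < p < 1.
```

For p < 1 the objective is nonconvex, so in general only local minimization is tractable.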
international conference on acoustics, speech, and signal processing | 2008
Rick Chartrand; Wotao Yin
The theory of compressive sensing has shown that sparse signals can be reconstructed exactly from many fewer measurements than traditionally believed necessary. In [1], it was shown empirically that using ℓp minimization with p < 1 can do so with fewer measurements than with p = 1. In this paper we consider the use of iteratively reweighted algorithms for computing local minima of the nonconvex problem. In particular, a certain regularization strategy is found to greatly improve the ability of a reweighted least-squares algorithm to recover sparse signals, with exact recovery being observed for signals that are much less sparse than required by an unregularized version (such as FOCUSS [2]). Improvements are also observed for the reweighted-ℓ1 approach of [3].
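A minimal numpy sketch of a regularized iteratively reweighted least-squares scheme in the spirit the abstract describes; the smoothing schedule and all parameter values below are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def irls_lp(A, b, p=0.5, n_iter=60, eps=1.0, eps_floor=1e-8):
    """Approximately minimize ||x||_p^p subject to Ax = b by solving a
    sequence of weighted least-squares problems. The smoothing parameter
    eps regularizes the weights and is gradually decreased."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # minimum-norm starting point
    for _ in range(n_iter):
        w = (x**2 + eps) ** (p / 2 - 1)           # smoothed |x_i|^(p-2) weights
        QAt = A.T / w[:, None]                    # diag(1/w) @ A.T
        x = QAt @ np.linalg.solve(A @ QAt, b)     # weighted LS solution with Ax = b
        eps = max(0.5 * eps, eps_floor)           # anneal the smoothing
    return x
```

With p = 1 this reduces to a reweighted scheme for the ℓ1 problem; taking p < 1 targets the nonconvex objective, and keeping eps away from zero early on is what helps the iteration escape poor local minima.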
Inverse Problems | 2007
Rick Chartrand; Valentina Staneva
The recently emerged field known as compressive sensing has produced powerful results showing the ability to recover sparse signals from surprisingly few linear measurements, using ℓ1 minimization. In previous work, numerical experiments showed that ℓp minimization with 0 < p < 1 recovers sparse signals from fewer linear measurements than does ℓ1 minimization. It was also shown that a weaker restricted isometry property is sufficient to guarantee perfect recovery in the ℓp case. In this work, we generalize this result to an ℓp variant of the restricted isometry property, and then determine how many random, Gaussian measurements are sufficient for the condition to hold with high probability. The resulting sufficient condition is met by fewer measurements for smaller p. This adds to the theoretical justification for the methods already being applied to replacing high-dose CT scans with a small number of x-rays and reducing MRI scanning time. The potential benefits extend to any application of compressive sensing.
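For reference, the standard restricted isometry property that the ℓp variant in this abstract generalizes requires that, for some constant δk < 1 and every k-sparse vector x,

```latex
(1 - \delta_k)\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1 + \delta_k)\,\|x\|_2^2 .
```

The ℓp variant replaces the squared ℓ2 norms with p-th powers of suitable norms; smaller p weakens the condition that A must satisfy.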
Journal of Mathematical Imaging and Vision | 2007
Triet M. Le; Rick Chartrand; Thomas J. Asaki
We propose a new variational model to denoise an image corrupted by Poisson noise. Like the ROF model described in [1] and [2], the new model uses total-variation regularization, which preserves edges. Unlike the ROF model, our model uses a data-fidelity term that is suitable for Poisson noise. The result is that the strength of the regularization is signal dependent, precisely like Poisson noise. Noise of varying scales will be removed by our model, while preserving low-contrast features in regions of low intensity.
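A sketch of the contrast between the two variational models (f the noisy image, u the denoised image, λ > 0 a regularization weight; the Poisson fidelity term is, up to constants, the negative log-likelihood of Poisson data):

```latex
\text{ROF (Gaussian noise):}\quad \min_u \int |\nabla u| + \lambda \int (u - f)^2,
\qquad
\text{Poisson model:}\quad \min_u \int |\nabla u| + \lambda \int \bigl( u - f \log u \bigr).
```

Because the Poisson fidelity term grows with the intensity u, the effective regularization strength varies with the signal, matching the signal-dependent scale of Poisson noise.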
international symposium on biomedical imaging | 2009
Rick Chartrand
Compressive sensing is the reconstruction of sparse images or signals from very few samples, by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer k-space samples, thereby reducing scanning time. Previous work has shown that nonconvex optimization reduces still further the number of samples required for reconstruction, while still being tractable. In this work, we extend recent Fourier-based algorithms for convex optimization to the nonconvex setting, and obtain methods that combine the reconstruction abilities of previous nonconvex approaches with the computational speed of state-of-the-art convex methods.
international conference on acoustics, speech, and signal processing | 2008
Rayan Saab; Rick Chartrand; Ozgur Yilmaz
We present theoretical results pertaining to the ability of ℓp minimization to recover sparse and compressible signals from incomplete and noisy measurements. In particular, we extend the results of Candes, Romberg and Tao (2005) to the p < 1 case. Our results indicate that depending on the restricted isometry constants (see, e.g., Candes and Tao (2006; 2005)) and the noise level, ℓp minimization with certain values of p < 1 provides better theoretical guarantees in terms of stability and robustness than ℓ1 minimization does. This is especially true when the restricted isometry constants are relatively large.
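The ℓ1 baseline result being extended here (due to Candes, Romberg and Tao) can be stated as follows: if the relevant restricted isometry constants are small enough, then for some constants C0, C1,

```latex
\hat{x} = \arg\min_z \|z\|_1 \ \text{s.t.}\ \|Az - b\|_2 \le \epsilon
\quad\Longrightarrow\quad
\|\hat{x} - x\|_2 \;\le\; C_0\,\frac{\|x - x_k\|_1}{\sqrt{k}} \;+\; C_1\,\epsilon,
```

where x_k is the best k-term approximation of x. The abstract establishes analogous guarantees with the ℓp quasi-norm, p < 1, in place of ℓ1.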
IEEE Transactions on Signal Processing | 2012
Rick Chartrand
We develop new nonconvex approaches for matrix optimization problems involving sparsity. The heart of the methods is a new, nonconvex penalty function that is designed for efficient minimization by means of a generalized shrinkage operation. We apply this approach to the decomposition of video into low rank and sparse components, which is able to separate moving objects from the stationary background better than in the convex case. In the case of noisy data, we add a nonconvex regularization, and apply a splitting approach to decompose the optimization problem into simple, parallelizable components. The nonconvex regularization ameliorates contrast loss, thereby allowing stronger denoising without losing more signal to the residual.
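One concrete example of a shrinkage operation generalized to a nonconvex penalty is the p-shrinkage mapping used in Chartrand's work; the exact penalty in this paper may differ, so treat the formula below as an illustrative assumption. For p = 1 it reduces to ordinary soft thresholding, while for p < 1 large coefficients are shrunk less, which is what "ameliorates contrast loss":

```python
import numpy as np

def p_shrink(x, lam, p=0.5):
    """Generalized (p-)shrinkage: ordinary soft thresholding when p = 1,
    weaker shrinkage of large entries when p < 1."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    nz = x != 0                      # avoid 0**(p-1) for p < 1
    mag = np.abs(x[nz])
    out[nz] = np.maximum(mag - lam**(2 - p) * mag**(p - 1), 0.0) * np.sign(x[nz])
    return out
```

Applied entrywise (or to singular values, in the low-rank setting), this is the inner step of the splitting approach the abstract mentions.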
international conference on acoustics, speech, and signal processing | 2013
Rick Chartrand; Brendt Wohlberg
We present an efficient algorithm for computing sparse representations whose nonzero coefficients can be divided into groups, few of which are nonzero. In addition to this group sparsity, we further impose that the nonzero groups themselves be sparse. We use a nonconvex optimization approach for this purpose, and use an efficient ADMM algorithm to solve the nonconvex problem. The efficiency comes from using a novel shrinkage operator, one that minimizes nonconvex penalty functions for enforcing sparsity and group sparsity simultaneously. Our numerical experiments show that combining sparsity and group sparsity improves signal reconstruction accuracy compared with either property alone. We also find that using nonconvex optimization significantly improves results in comparison with convex optimization.
international conference on acoustics, speech, and signal processing | 2007
Rick Chartrand
The theory of compressed sensing has shown that sparse signals can be reconstructed exactly from remarkably few measurements. In this paper we consider a nonconvex extension, where the ℓ1 norm of the basis pursuit algorithm is replaced with the ℓp norm, for p < 1. In the context of sparse error correction, we perform numerical experiments that show that for a fixed number of measurements, errors of larger support can be corrected in the nonconvex case. We also provide a theoretical justification for why this should be so.
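The sparse error-correction setting referenced here (following the decoding formulation of Candes and Tao, with B a coding matrix and e a sparse error vector; the notation is an assumption) is:

```latex
y = Bx + e, \qquad
\hat{x} = \arg\min_z \|y - Bz\|_p^p, \quad 0 < p \le 1,
```

with p = 1 giving the convex decoder and p < 1 the nonconvex variant studied here.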
ieee nuclear science symposium | 2007
Emil Y. Sidky; Rick Chartrand; Xiaochuan Pan
Image reconstruction for fan-beam computed tomography (CT) from projection data containing a small number of views is investigated. An iterative algorithm is developed that seeks to minimize the total p-variation of the reconstructed image subject to the constraint that the estimated projection data agree with the available data to within a specified data tolerance ε. A preliminary investigation of the dependence of image quality on p and ε is performed.
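A sketch of the constrained problem described (u the image, A the fan-beam projection operator, g the measured projection data; the precise discretization of the total p-variation is an assumption):

```latex
\min_u \ \sum_j \bigl| (\nabla u)_j \bigr|^p
\qquad \text{subject to} \qquad \|Au - g\|_2 \le \epsilon .
```

Taking p = 1 recovers constrained total-variation minimization; p < 1 is the nonconvex case investigated here.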