Brendt Wohlberg
Los Alamos National Laboratory
Publications
Featured research published by Brendt Wohlberg.
IEEE Transactions on Image Processing | 2009
Paul Rodriguez; Brendt Wohlberg
Replacing the ℓ2 data fidelity term of the standard total variation (TV) functional with an ℓ1 data fidelity term has been found to offer a number of theoretical and practical benefits. Efficient algorithms for minimizing this ℓ1-TV functional have only recently begun to be developed, the fastest of which exploit graph representations and are restricted to the denoising problem. We describe an alternative approach that minimizes a generalized TV functional, including both ℓ2-TV and ℓ1-TV as special cases, and is capable of solving more general inverse problems than denoising (e.g., deconvolution). This algorithm is competitive with the graph-based methods in the denoising case, and is the fastest algorithm of which we are aware for general inverse problems involving a nontrivial forward linear operator.
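The abstract does not state the functional explicitly; as a sketch under assumed (standard) notation, a generalized TV functional of this kind combines an ℓp data fidelity term with an ℓq TV regularization term:

```latex
% Sketch of a generalized TV functional (notation assumed, not quoted from the abstract):
% A is the forward linear operator (identity for denoising, a convolution for
% deconvolution), b the observed data, D_x and D_y horizontal and vertical
% difference operators, and \lambda the regularization parameter.
\[
  T(\mathbf{u}) \;=\; \frac{1}{p}\,\bigl\lVert A\mathbf{u}-\mathbf{b}\bigr\rVert_p^p
  \;+\; \frac{\lambda}{q}\,\Bigl\lVert \sqrt{(D_x\mathbf{u})^2+(D_y\mathbf{u})^2}\,\Bigr\rVert_q^q
\]
```

Choosing p = 2 recovers the standard ℓ2-TV problem and p = 1 the ℓ1-TV problem (with q = 1 in both cases).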
IEEE Signal Processing Letters | 2007
Brendt Wohlberg; Paul Rodriguez
Total variation (TV) regularization has become a popular method for a wide variety of image restoration problems, including denoising and deconvolution. A number of authors have recently noted the advantages of replacing the standard ℓ2 data fidelity term with an ℓ1 norm. We propose a simple but very flexible method for solving a generalized TV functional that includes both the ℓ2-TV and ℓ1-TV problems as special cases. This method offers competitive computational performance for ℓ2-TV and is comparable to or faster than any other ℓ1-TV algorithms of which we are aware.
International Conference on Acoustics, Speech, and Signal Processing | 2013
Rick Chartrand; Brendt Wohlberg
We present an efficient algorithm for computing sparse representations whose nonzero coefficients can be divided into groups, few of which are nonzero. In addition to this group sparsity, we further impose that the nonzero groups themselves be sparse. We use a nonconvex optimization approach for this purpose, and use an efficient ADMM algorithm to solve the nonconvex problem. The efficiency comes from using a novel shrinkage operator, one that minimizes nonconvex penalty functions for enforcing sparsity and group sparsity simultaneously. Our numerical experiments show that combining sparsity and group sparsity improves signal reconstruction accuracy compared with either property alone. We also find that using nonconvex optimization significantly improves results in comparison with convex optimization.
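As a rough illustration of the kind of shrinkage step involved (a sketch only: the operator below is the generic p-shrinkage from the nonconvex sparsity literature, applied first to group norms and then elementwise, and is not necessarily the exact operator of this paper):

```python
import numpy as np

def p_shrink(v, lam, p):
    """Elementwise p-shrinkage; reduces to soft thresholding when p = 1.
    Sketch of the nonconvex shrinkage idea, not the paper's exact operator."""
    mag = np.abs(v)
    with np.errstate(divide="ignore", invalid="ignore"):
        scale = np.maximum(mag - lam ** (2.0 - p) * mag ** (p - 1.0), 0.0)
    return np.where(mag > 0, np.sign(v) * scale, 0.0)

def group_p_shrink(v, groups, lam_group, lam_within, p):
    """Shrink group norms (sparsity across groups), then shrink elementwise
    (sparsity within groups). `groups` is a list of index arrays."""
    out = np.zeros_like(v, dtype=float)
    for idx in groups:
        g = v[idx].astype(float)
        norm = np.linalg.norm(g)
        if norm > 0:
            shrunk_norm = p_shrink(np.array([norm]), lam_group, p)[0]
            g = g * (shrunk_norm / norm)
        out[idx] = p_shrink(g, lam_within, p)
    return out
```

Used inside an ADMM iteration, such an operator plays the role of the proximal step for the combined sparsity and group-sparsity penalty.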
IEEE Global Conference on Signal and Information Processing | 2013
Singanallur Venkatakrishnan; Charles A. Bouman; Brendt Wohlberg
Model-based reconstruction is a powerful framework for solving a variety of inverse problems in imaging. In recent years, enormous progress has been made in the problem of denoising, a special case of an inverse problem where the forward model is an identity operator. Similarly, great progress has been made in improving model-based inversion when the forward model corresponds to complex physical measurements in applications such as X-ray CT, electron microscopy, MRI, and ultrasound, to name just a few. However, combining state-of-the-art denoising algorithms (i.e., prior models) with state-of-the-art inversion methods (i.e., forward models) has been a challenge for many reasons. In this paper, we propose a flexible framework that allows state-of-the-art forward models of imaging systems to be matched with state-of-the-art priors or denoising models. This framework, which we term Plug-and-Play priors, has the advantage that it dramatically simplifies software integration, and moreover, it allows state-of-the-art denoising methods that have no known formulation as an optimization problem to be used. We demonstrate with some simple examples how Plug-and-Play priors can be used to mix and match a wide variety of existing denoising models with a tomographic forward model, thus greatly expanding the range of possible problem solutions.
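The abstract describes the framework at a high level; the following is a minimal sketch of how a Plug-and-Play ADMM iteration is commonly structured, with `data_prox` and `denoise` as hypothetical user-supplied callables (this is an illustration of the general idea, not the paper's reference implementation):

```python
import numpy as np

def plug_and_play_admm(x0, data_prox, denoise, rho=1.0, n_iter=50):
    """Minimal Plug-and-Play ADMM sketch (hypothetical interface).

    data_prox(z, rho): proximal/inversion step for the forward model,
        e.g. a regularized least-squares solve for tomographic data.
    denoise(z): any off-the-shelf denoiser, used in place of the prior's
        proximal operator; it need not correspond to an explicit penalty.
    """
    x = x0.copy()
    v = x0.copy()
    u = np.zeros_like(x0)
    for _ in range(n_iter):
        x = data_prox(v - u, rho)   # enforce agreement with the measurements
        v = denoise(x + u)          # enforce the (implicit) image prior
        u = u + x - v               # scaled dual update
    return v
```

The key point reflected in this structure is that the forward model and the prior interact only through these two steps, so either component can be swapped out independently.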
IEEE Transactions on Geoscience and Remote Sensing | 2006
Brendt Wohlberg; Daniel M. Tartakovsky; Alberto Guadagnini
A typical subsurface environment is heterogeneous, consists of multiple materials (geologic facies), and is often insufficiently characterized by data. The ability to delineate geologic facies and to estimate their properties from sparse data is essential for modeling physical and biochemical processes occurring in the subsurface. We demonstrate that the support vector machine is a viable and efficient tool for lithofacies delineation, and we compare it with a geostatistical approach. To illustrate our approach, and to demonstrate its advantages, we construct a synthetic porous medium consisting of two heterogeneous materials and then estimate boundaries between these materials from a few selected data points. Our analysis shows that the error in facies delineation by means of support vector machines decreases logarithmically with increasing sampling density. We also introduce and analyze the use of regression support vector machines to estimate the parameter values between points where the parameter is sampled.
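A minimal illustration of the two ingredients described in the abstract, using scikit-learn as an assumed, illustrative toolchain (the data, kernel, and parameter settings below are made up and are not the paper's experimental setup):

```python
import numpy as np
from sklearn.svm import SVC, SVR  # illustrative choice, not from the paper

# Sparse sample locations (x, y) with known facies labels (0 or 1)
# and measured parameter values (e.g. log-conductivity) at those points.
coords = np.array([[0.1, 0.2], [0.8, 0.3], [0.4, 0.9], [0.7, 0.7], [0.2, 0.6]])
facies = np.array([0, 1, 0, 1, 0])
values = np.array([-2.1, 1.3, -1.8, 0.9, -2.4])

# Classification SVM delineates the boundary between the two facies.
clf = SVC(kernel="rbf", gamma=5.0).fit(coords, facies)

# Regression SVM interpolates the parameter between sampled points.
reg = SVR(kernel="rbf", gamma=5.0, C=10.0).fit(coords, values)

# Predict the facies map and parameter field on a grid covering the domain.
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
facies_map = clf.predict(grid).reshape(gx.shape)
value_map = reg.predict(grid).reshape(gx.shape)
```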
IEEE Transactions on Signal Processing | 2003
Brendt Wohlberg
Certain sparse signal reconstruction problems have been shown to have unique solutions when the signal is known to have an exact sparse representation. This result is extended to provide bounds on the reconstruction error when the signal has been corrupted by noise or is not exactly sparse for some other reason. Uniqueness is found to be extremely unstable for a number of common dictionaries.
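For context only (this bound is a standard result from the sparse representation literature and is not quoted from the paper), uniqueness of an exact sparse representation is typically guaranteed by a coherence condition of the form:

```latex
% Standard coherence-based uniqueness condition (general literature, not this paper):
% D is the dictionary with columns d_i, and mu(D) its mutual coherence.
\[
  \lVert \mathbf{x} \rVert_0 \;<\; \frac{1}{2}\Bigl(1 + \frac{1}{\mu(D)}\Bigr),
  \qquad
  \mu(D) \;=\; \max_{i \neq j}
  \frac{\lvert \langle \mathbf{d}_i, \mathbf{d}_j \rangle \rvert}
       {\lVert \mathbf{d}_i \rVert_2 \, \lVert \mathbf{d}_j \rVert_2}
\]
```

The paper's contribution concerns how such guarantees degrade when the signal is noisy or only approximately sparse.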
Signal Processing | 2010
Youzuo Lin; Brendt Wohlberg; Hongbin Guo
Total variation (TV) regularization is a popular method for solving a wide variety of inverse problems in image processing. In order to optimize the reconstructed image, it is important to choose a good regularization parameter. The unbiased predictive risk estimator (UPRE) has been shown to give a good estimate of this parameter for Tikhonov regularization. In this paper we propose an extension of the UPRE method to the TV problem. Since direct computation of the extended UPRE is impractical in the case of inverse problems such as deblurring, due to the large scale of the associated linear problem, we also propose a method which provides a good approximation of this large scale problem, while significantly reducing computational requirements.
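For reference, the UPRE functional in the linear (Tikhonov) setting has the following well-known form (notation assumed; the paper's TV extension and its large-scale approximation are not reproduced here):

```latex
% UPRE sketch for the linear/Tikhonov case (notation assumed, not from the abstract):
% x_lambda is the regularized solution, H_lambda the influence matrix mapping the
% data b to the prediction A x_lambda, sigma^2 the noise variance, n the data size.
\[
  \mathrm{UPRE}(\lambda) \;=\; \frac{1}{n}\,\lVert A\mathbf{x}_\lambda - \mathbf{b} \rVert_2^2
  \;+\; \frac{2\sigma^2}{n}\,\operatorname{trace}(H_\lambda) \;-\; \sigma^2
\]
```

The regularization parameter is then chosen as the λ minimizing this estimate of the predictive risk.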
Journal of Mathematical Imaging and Vision | 2016
Paul Rodriguez; Brendt Wohlberg
Video background modeling is an important preprocessing step in many video analysis systems. Principal component pursuit (PCP), which is currently considered to be the state-of-the-art method for this problem, has a high computational cost, and processes a large number of video frames at a time, resulting in high memory usage and constraining the applicability of this method to streaming video. In this paper, we propose a novel fully incremental PCP algorithm for video background modeling. It processes one frame at a time, obtaining similar results to standard batch PCP algorithms, while being able to adapt to changes in the background. It has an extremely low memory footprint, and a computational complexity that allows real-time processing.
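A highly simplified sketch of the frame-by-frame structure such a method can take (this is an illustrative toy, not the paper's incremental PCP algorithm; every name and update rule below is assumed for illustration):

```python
import numpy as np

def incremental_background_model(frames, rank=2, lam=0.1, lr=0.05):
    """Toy frame-by-frame background/foreground separation in the spirit of
    incremental PCP: maintain a low-rank background subspace U, and for each
    frame soft-threshold the residual to obtain the sparse foreground."""
    it = iter(frames)
    first = next(it)
    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.standard_normal((first.size, rank)))  # initial subspace
    results = []
    for f in [first, *it]:
        coef = U.T @ f                      # project the frame onto the subspace
        background = U @ coef
        residual = f - background
        foreground = np.sign(residual) * np.maximum(np.abs(residual) - lam, 0.0)
        # Crude incremental subspace update (gradient step plus re-orthogonalization);
        # a serious implementation would use incremental thin-SVD style updates.
        U += lr * np.outer(f - background - foreground, coef)
        U, _ = np.linalg.qr(U)
        results.append((background, foreground))
    return results
```

The point of the sketch is the memory profile: only the current frame and the small subspace are held at any time, rather than a batch of frames.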
IEEE Transactions on Image Processing | 2016
Brendt Wohlberg
When applying sparse representation techniques to images, the standard approach is to independently compute the representations for a set of overlapping image patches. This method performs very well in a variety of applications, but results in a representation that is multi-valued and not optimized with respect to the entire image. An alternative representation structure is provided by a convolutional sparse representation, in which a sparse representation of an entire image is computed by replacing the linear combination of a set of dictionary vectors by the sum of a set of convolutions with dictionary filters. The resulting representation is both single-valued and jointly optimized over the entire image. While this form of a sparse representation has been applied to a variety of problems in signal and image processing and computer vision, the computational expense of the corresponding optimization problems has restricted application to relatively small signals and images. This paper presents new, efficient algorithms that substantially improve on the performance of other recent methods, contributing to the development of this type of representation as a practical tool for a wider range of problems.
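The convolutional form described here replaces the patch-wise linear combination of dictionary vectors with a sum of convolutions over the whole image; in the usual convolutional BPDN notation (a sketch, with symbols assumed rather than quoted from the abstract):

```latex
% Convolutional sparse coding / convolutional BPDN (standard notation assumed):
% s is the image, d_m the dictionary filters, x_m the coefficient maps, * convolution.
\[
  \operatorname*{arg\,min}_{\{\mathbf{x}_m\}}\;
  \frac{1}{2}\Bigl\lVert \sum_m \mathbf{d}_m \ast \mathbf{x}_m - \mathbf{s} \Bigr\rVert_2^2
  \;+\; \lambda \sum_m \lVert \mathbf{x}_m \rVert_1
\]
```

Because the representation is defined over the entire image, the coefficient maps are single-valued and jointly optimized, at the cost of a much larger optimization problem than independent patch coding.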
IEEE Geoscience and Remote Sensing Letters | 2010
James Theiler; Clint Scovel; Brendt Wohlberg; Bernard R. Foy
We derive a class of algorithms for detecting anomalous changes in hyperspectral image pairs by modeling the data with elliptically contoured (EC) distributions. These algorithms are generalizations of well-known detectors that are obtained when the EC function is Gaussian. The performance of these EC-based anomalous change detectors is assessed on real data using both real and simulated changes. In these experiments, the EC-based detectors substantially outperform their Gaussian counterparts.
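For context, one well-known Gaussian-based detector of this type scores a pixel pair by comparing joint and marginal Mahalanobis distances (stated from the general anomalous change detection literature, not quoted from this paper; the EC framework replaces the Gaussian with a more general elliptically contoured density):

```latex
% A Gaussian-based anomalous change score (general literature, not this paper):
% x and y are mean-subtracted pixel spectra from the two images, z their stacking,
% and K_x, K_y, K_z the corresponding covariance matrices.
\[
  \mathcal{A}(\mathbf{x},\mathbf{y}) \;=\;
  \mathbf{z}^{\mathsf{T}} K_z^{-1} \mathbf{z}
  \;-\; \mathbf{x}^{\mathsf{T}} K_x^{-1} \mathbf{x}
  \;-\; \mathbf{y}^{\mathsf{T}} K_y^{-1} \mathbf{y},
  \qquad
  \mathbf{z} = \begin{bmatrix}\mathbf{x}\\ \mathbf{y}\end{bmatrix}
\]
```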