Publication


Featured research published by Robert D. Nowak.


IEEE Journal of Selected Topics in Signal Processing | 2007

Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems

Mário A. T. Figueiredo; Robert D. Nowak; Stephen J. Wright

Many problems in signal processing and statistical inference involve finding sparse solutions to under-determined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ℓ2) error term combined with a sparseness-inducing regularization term. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution, and compressed sensing are a few well-known examples of this approach. This paper proposes gradient projection (GP) algorithms for the bound-constrained quadratic programming (BCQP) formulation of these problems. We test variants of this approach that select the line search parameters in different ways, including techniques based on the Barzilai-Borwein method. Computational experiments show that these GP approaches perform well in a wide range of applications, often being significantly faster (in terms of computation time) than competing methods. Although the performance of GP methods tends to degrade as the regularization term is de-emphasized, we show how they can be embedded in a continuation scheme to recover their efficient practical performance.
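
The BCQP split and the Barzilai-Borwein (BB) step translate directly into a short Python sketch. This is a minimal illustration of the idea only (no line search, debiasing, or continuation, all of which the paper covers), and the commented example sizes are invented.

import numpy as np

def gpsr_bb(A, y, tau, n_iter=200, alpha_min=1e-30, alpha_max=1e30):
    # Sketch of gradient projection with Barzilai-Borwein steps for
    #   min_x 0.5*||y - A x||^2 + tau*||x||_1,
    # written as the bound-constrained QP over the split x = u - v, u, v >= 0.
    n = A.shape[1]
    u, v = np.zeros(n), np.zeros(n)
    alpha = 1.0
    for _ in range(n_iter):
        r = A @ (u - v) - y                          # residual
        grad_u, grad_v = A.T @ r + tau, -(A.T @ r) + tau
        # projected (full) step onto the nonnegative orthant
        du = np.maximum(u - alpha * grad_u, 0.0) - u
        dv = np.maximum(v - alpha * grad_v, 0.0) - v
        u, v = u + du, v + dv
        step_sq = du @ du + dv @ dv
        if step_sq == 0.0:                           # no progress: stop
            break
        # Barzilai-Borwein step length from the curvature of the BCQP objective
        Ad = A @ (du - dv)
        curv = Ad @ Ad
        alpha = alpha_max if curv == 0.0 else np.clip(step_sq / curv, alpha_min, alpha_max)
    return u - v

# Example (sizes invented): recover an 8-sparse vector from 64 random measurements.
# rng = np.random.default_rng(0)
# A = rng.standard_normal((64, 256)) / 8.0
# x = np.zeros(256); x[rng.choice(256, 8, replace=False)] = 1.0
# x_hat = gpsr_bb(A, A @ x, tau=0.05)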


IEEE Transactions on Signal Processing | 1998

Wavelet-based statistical signal processing using hidden Markov models

Matthew Crouse; Robert D. Nowak; Richard G. Baraniuk

Wavelet-based statistical signal processing techniques such as denoising and detection typically model the wavelet coefficients as independent or jointly Gaussian. These models are unrealistic for many real-world signals. We develop a new framework for statistical signal processing based on wavelet-domain hidden Markov models (HMMs) that concisely models the statistical dependencies and non-Gaussian statistics encountered in real-world signals. Wavelet-domain HMMs are designed with the intrinsic properties of the wavelet transform in mind and provide powerful, yet tractable, probabilistic signal models. Efficient expectation maximization algorithms are developed for fitting the HMMs to observational signal data. The new framework is suitable for a wide range of applications, including signal estimation, detection, classification, prediction, and even synthesis. To demonstrate the utility of wavelet-domain HMMs, we develop novel algorithms for signal denoising, classification, and detection.
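
The per-coefficient building block of these models, a two-state zero-mean Gaussian mixture fitted by EM, can be sketched briefly in Python. The full wavelet-domain HMM additionally couples parent and child hidden states across scales, which this independent-mixture sketch omits, and the initial parameter values below are arbitrary.

import numpy as np

def em_two_state_mixture(w, n_iter=50):
    # EM for a zero-mean, two-state Gaussian mixture on wavelet coefficients w:
    # a low-variance "small" state and a high-variance "large" state.
    # (The full wavelet-domain HMM ties the hidden states of parent and child
    # coefficients together across scales; that coupling is omitted here.)
    p_large = 0.2                                    # initial state probability (arbitrary)
    var_small = 0.1 * np.var(w) + 1e-12
    var_large = 2.0 * np.var(w) + 1e-12
    for _ in range(n_iter):
        # E-step: posterior probability that each coefficient is in the "large" state
        lik_s = np.exp(-0.5 * w**2 / var_small) / np.sqrt(2 * np.pi * var_small)
        lik_l = np.exp(-0.5 * w**2 / var_large) / np.sqrt(2 * np.pi * var_large)
        resp = p_large * lik_l / (p_large * lik_l + (1 - p_large) * lik_s + 1e-300)
        # M-step: re-estimate the mixture weight and the two state variances
        p_large = resp.mean()
        var_large = (resp * w**2).sum() / (resp.sum() + 1e-12)
        var_small = ((1 - resp) * w**2).sum() / ((1 - resp).sum() + 1e-12)
    return p_large, var_small, var_large, resp

# Denoising use: for noise variance s2, shrink each coefficient by the
# posterior-weighted Wiener factor
#   w_hat = (resp*var_large/(var_large+s2) + (1-resp)*var_small/(var_small+s2)) * w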


IEEE Transactions on Signal Processing | 2009

Sparse Reconstruction by Separable Approximation

Stephen J. Wright; Robert D. Nowak; Mário A. T. Figueiredo

Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard ℓ2-ℓ1 case, our framework yields efficient solution techniques for other regularizers, such as an ℓ∞ norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2-ℓ1 problem, as well as being efficient on problems with other separable regularization terms.
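
Because the subproblem with a diagonal Hessian is separable, the ℓ1 case reduces each step to a scaled gradient step followed by soft-thresholding. The Python sketch below shows that iteration with a Barzilai-Borwein choice of the diagonal weight; it omits the acceptance test and continuation strategy described in the paper and is only an illustration of the step.

import numpy as np

def soft(u, t):
    # componentwise soft-thresholding: exact solution of the separable
    # subproblem when the regularizer is t*||.||_1
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def sparsa_l1(A, y, tau, n_iter=200, alpha_min=1e-12, alpha_max=1e12):
    # Minimal sketch of a SpaRSA-style iteration for
    #   min_x 0.5*||y - A x||^2 + tau*||x||_1
    # (no acceptance/backtracking test, no continuation in tau).
    x = np.zeros(A.shape[1])
    alpha = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x_new = soft(x - grad / alpha, tau / alpha)      # separable subproblem
        s = x_new - x
        if not s.any():
            break
        As = A @ s
        # Barzilai-Borwein rule: alpha mimics the curvature of the smooth term along s
        alpha = np.clip((As @ As) / (s @ s), alpha_min, alpha_max)
        x = x_new
    return x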


IEEE Transactions on Image Processing | 2003

An EM algorithm for wavelet-based image restoration

Mário A. T. Figueiredo; Robert D. Nowak

This paper introduces an expectation-maximization (EM) algorithm for image restoration (deconvolution) based on a penalized likelihood formulated in the wavelet domain. Regularization is achieved by promoting a reconstruction with low complexity, expressed in terms of the wavelet coefficients, taking advantage of the well-known sparsity of wavelet representations. Previous works have investigated wavelet-based restoration but, except for certain special cases, the resulting criteria are solved approximately or require demanding optimization methods. The EM algorithm herein proposed combines the efficient image representation offered by the discrete wavelet transform (DWT) with the diagonalization of the convolution operator obtained in the Fourier domain. Thus, it is a general-purpose approach to wavelet-based image restoration with computational complexity comparable to that of standard wavelet denoising schemes or of frequency domain deconvolution methods. The algorithm alternates between an E-step based on the fast Fourier transform (FFT) and a DWT-based M-step, resulting in an efficient iterative process requiring O(N log N) operations per iteration. The convergence behavior of the algorithm is investigated, and it is shown that under mild conditions the algorithm converges to a globally optimal restoration. Moreover, our new approach performs competitively with, and in some cases better than, the best existing methods in benchmark tests.
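
In one dimension the alternation is easy to sketch: an FFT-based E-step (a Landweber-type correction under circular convolution) followed by a wavelet-domain soft-threshold as the M-step. The Python sketch below substitutes a single-level Haar transform for a full DWT and assumes the blur has been normalized to unit spectral norm, so it illustrates the structure of the iteration rather than reproducing the paper's exact algorithm.

import numpy as np

def haar(x):
    # single-level orthonormal Haar analysis (x must have even length)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def ihaar(a, d):
    # single-level orthonormal Haar synthesis
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def em_wavelet_deconv(y, h, tau, n_iter=100):
    # Sketch of the EM-style wavelet deconvolution iteration for y = h * x + noise
    # (circular convolution), assuming h is normalized so |H(f)| <= 1 for all f.
    H = np.fft.fft(h, n=y.size)                      # frequency response of the blur
    x = y.copy()
    for _ in range(n_iter):
        # E-step (FFT domain): z = x + H^T (y - H x)
        Hx = np.real(np.fft.ifft(H * np.fft.fft(x)))
        z = x + np.real(np.fft.ifft(np.conj(H) * np.fft.fft(y - Hx)))
        # M-step (wavelet domain): denoise z by soft-thresholding detail coefficients
        a, d = haar(z)
        d = np.sign(d) * np.maximum(np.abs(d) - tau, 0.0)
        x = ihaar(a, d)
    return x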


Proceedings of the IEEE | 2010

Compressed Channel Sensing: A New Approach to Estimating Sparse Multipath Channels

Waheed U. Bajwa; Jarvis D. Haupt; Akbar M. Sayeed; Robert D. Nowak

High-rate data communication over a multipath wireless channel often requires that the channel response be known at the receiver. Training-based methods, which probe the channel in time, frequency, and space with known signals and reconstruct the channel response from the output signals, are most commonly used to accomplish this task. Traditional training-based channel estimation methods, typically comprising linear reconstruction techniques, are known to be optimal for rich multipath channels. However, physical arguments and growing experimental evidence suggest that many wireless channels encountered in practice tend to exhibit a sparse multipath structure that becomes more pronounced as the signal space dimension grows (e.g., due to large bandwidth or a large number of antennas). In this paper, we formalize the notion of multipath sparsity and present a new approach to estimating sparse (or effectively sparse) multipath channels that is based on some of the recent advances in the theory of compressed sensing. In particular, it is shown that the proposed approach, termed compressed channel sensing (CCS), can potentially achieve a target reconstruction error using far less energy and, in many instances, latency and bandwidth than that dictated by traditional least-squares-based training methods.
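
The basic recipe (probe the channel with a known sequence, then exploit sparsity at reconstruction) can be mocked up in a few lines of Python. The sketch below builds the Toeplitz observation matrix generated by a random binary training sequence and compares a least-squares estimate with a plain ISTA estimate used as a stand-in for the CS reconstruction algorithms the paper surveys; the sizes, noise level, and regularization weight are invented for illustration.

import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def ista(A, y, tau, n_iter=500):
    # plain iterative soft-thresholding for min 0.5*||y - A h||^2 + tau*||h||_1
    L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
    h = np.zeros(A.shape[1])
    for _ in range(n_iter):
        h = soft(h - A.T @ (A @ h - y) / L, tau / L)
    return h

rng = np.random.default_rng(1)
n_taps, n_out = 128, 80                              # channel length, output samples (invented)
probe = rng.choice([-1.0, 1.0], size=n_out)          # known random binary training sequence
# Toeplitz probing matrix: y[t] = sum_k probe[t - k] * h[k]  (causal convolution)
X = np.zeros((n_out, n_taps))
for t in range(n_out):
    for k in range(min(t + 1, n_taps)):
        X[t, k] = probe[t - k]
h_true = np.zeros(n_taps)
h_true[rng.choice(n_taps, 5, replace=False)] = rng.standard_normal(5)   # sparse multipath taps
y = X @ h_true + 0.05 * rng.standard_normal(n_out)

h_ls = np.linalg.lstsq(X, y, rcond=None)[0]          # traditional least-squares training estimate
h_cs = ista(X, y, tau=0.1)                           # sparsity-exploiting estimate
print(np.linalg.norm(h_ls - h_true), np.linalg.norm(h_cs - h_true))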


IEEE Signal Processing Magazine | 2008

Compressed Sensing for Networked Data

Jarvis D. Haupt; Waheed U. Bajwa; Michael G. Rabbat; Robert D. Nowak

This article describes a very different approach to the decentralized compression of networked data. It considers a particularly salient aspect of the challenge, one that revolves around large-scale distributed sources of data and their storage, transmission, and retrieval. The task of transmitting information from one point to another is a common and well-understood exercise. But the problem of efficiently transmitting or sharing information from and among a vast number of distributed nodes remains a great challenge, primarily because we do not yet have well-developed theories and tools for distributed signal processing, communications, and information theory in large-scale networked systems.


IEEE Transactions on Image Processing | 2007

Majorization–Minimization Algorithms for Wavelet-Based Image Restoration

Mário A. T. Figueiredo; José M. Bioucas-Dias; Robert D. Nowak

Standard formulations of image/signal deconvolution under wavelet-based priors/regularizers lead to very high-dimensional optimization problems involving the following difficulties: the non-Gaussian (heavy-tailed) wavelet priors lead to objective functions which are nonquadratic, usually nondifferentiable, and sometimes even nonconvex; the presence of the convolution operator destroys the separability which underlies the simplicity of wavelet-based denoising. This paper presents a unified view of several recently proposed algorithms for handling this class of optimization problems, placing them in a common majorization-minimization (MM) framework. One of the classes of algorithms considered (when using quadratic bounds on nondifferentiable log-priors) shares the infamous "singularity issue" (SI) of "iteratively reweighted least squares" (IRLS) algorithms: the possibility of having to handle infinite weights, which may cause both numerical and convergence issues. In this paper, we prove several new results which strongly support the claim that the SI does not compromise the usefulness of this class of algorithms. Exploiting the unified MM perspective, we introduce a new algorithm, resulting from using bounds for nonconvex regularizers; the experiments confirm the superior performance of this method when compared to the one based on quadratic majorization. Finally, an experimental comparison of the several algorithms reveals their relative merits for different standard types of scenarios.
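
For an ℓ1 regularizer, the quadratic bound |x_i| <= x_i^2 / (2|x_i^k|) + |x_i^k| / 2 turns each MM step into a weighted least-squares solve, and the 1/|x_i^k| weights are exactly where the singularity issue comes from. The Python sketch below shows that iteration, sidestepping the singularity with a crude numerical floor rather than the analysis given in the paper; it is an illustration, not the paper's algorithm.

import numpy as np

def mm_quadratic_l1(A, y, tau, n_iter=50, eps=1e-10):
    # MM / IRLS-type iteration for min_x 0.5*||y - A x||^2 + tau*||x||_1.
    # Each step replaces |x_i| by its quadratic majorizer at the current iterate
    # and solves the resulting weighted least-squares problem.
    AtA, Aty = A.T @ A, A.T @ y
    x = Aty.copy()                                   # simple initialization
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(x), eps)         # weights blow up near zero: the "singularity issue"
        x = np.linalg.solve(AtA + tau * np.diag(w), Aty)
    return x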


IEEE Transactions on Information Theory | 2010

Toeplitz Compressed Sensing Matrices With Applications to Sparse Channel Estimation

Jarvis D. Haupt; Waheed U. Bajwa; Gil M. Raz; Robert D. Nowak

Compressed sensing (CS) has recently emerged as a powerful signal acquisition paradigm. In essence, CS enables the recovery of high-dimensional sparse signals from relatively few linear observations in the form of projections onto a collection of test vectors. Existing results show that if the entries of the test vectors are independent realizations of certain zero-mean random variables, then with high probability the unknown signals can be recovered by solving a tractable convex optimization. This work extends CS theory to settings where the entries of the test vectors exhibit structured statistical dependencies. It follows that CS can be effectively utilized in linear, time-invariant system identification problems provided the impulse response of the system is (approximately or exactly) sparse. An immediate application is in wireless multipath channel estimation. It is shown here that time-domain probing of a multipath channel with a random binary sequence, along with utilization of CS reconstruction techniques, can provide significant improvements in estimation accuracy compared to traditional least-squares based linear channel estimation strategies. Abstract extensions of the main results are also discussed, where the theory of equitable graph coloring is employed to establish the utility of CS in settings where the test vectors exhibit more general statistical dependencies.


information processing in sensor networks | 2006

Compressive wireless sensing

Waheed U. Bajwa; Jarvis D. Haupt; Akbar M. Sayeed; Robert D. Nowak

Compressive sampling is an emerging theory that is based on the fact that a relatively small number of random projections of a signal can contain most of its salient information. In this paper, we introduce the concept of compressive wireless sensing for sensor networks in which a fusion center retrieves signal field information from an ensemble of spatially distributed sensor nodes. Energy and bandwidth are scarce resources in sensor networks, and the relevant metrics of interest in our context are 1) the latency involved in information retrieval and 2) the associated power-distortion trade-off. It is generally recognized that, given sufficient prior knowledge about the sensed data (e.g., statistical characterization, homogeneity, etc.), there exist schemes that have very favorable power-distortion-latency trade-offs. We propose a distributed matched source-channel communication scheme, based in part on recent results in compressive sampling theory, for estimation of sensed data at the fusion center and analyze, as a function of the number of sensor nodes, the trade-offs between power, distortion, and latency. Compressive wireless sensing is a universal scheme in the sense that it requires no prior knowledge about the sensed data. This universality, however, comes at the cost of optimality (in terms of a less favorable power-distortion-latency trade-off), and we quantify this cost relative to the case when sufficient prior information about the sensed data is assumed.
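
The estimation side of the scheme is easy to mock up: the fusion center ends up with a small number of random projections of the sensed field (formed in the paper by phase-coherent analog transmissions, which are not modeled here) and reconstructs by exploiting compressibility in a fixed basis. The Python sketch below uses a DCT basis, a Gaussian projection matrix, and plain ISTA, with the field and all sizes invented for illustration.

import numpy as np

def dct_matrix(N):
    # orthonormal DCT-II matrix; rows are the basis functions
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] /= np.sqrt(2.0)
    return C

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def ista(A, y, tau, n_iter=400):
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - A.T @ (A @ x - y) / L, tau / L)
    return x

rng = np.random.default_rng(2)
n_sensors, k_proj = 256, 60                          # invented sizes
pos = np.arange(n_sensors, dtype=float)
field = np.exp(-((pos - 90.0) / 25.0) ** 2) + 0.5 * np.exp(-((pos - 180.0) / 15.0) ** 2)
C = dct_matrix(n_sensors)                            # smooth field: C @ field decays fast (compressible)
Phi = rng.standard_normal((k_proj, n_sensors)) / np.sqrt(k_proj)
y = Phi @ field + 0.01 * rng.standard_normal(k_proj) # projections available at the fusion center
theta = ista(Phi @ C.T, y, tau=0.02)                 # sparse DCT coefficients of the field
field_hat = C.T @ theta
print(np.linalg.norm(field_hat - field) / np.linalg.norm(field))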


2007 IEEE/SP 14th Workshop on Statistical Signal Processing | 2007

Toeplitz-Structured Compressed Sensing Matrices

Waheed U. Bajwa; Jarvis D. Haupt; Gil M. Raz; Stephen J. Wright; Robert D. Nowak

The problem of recovering a sparse signal x ∈ ℝ^n from a relatively small number of its observations of the form y = Ax ∈ ℝ^k, where A is a known matrix and k ≪ n, has recently received a lot of attention under the rubric of compressed sensing (CS) and has applications in many areas of signal processing such as data compression, image processing, dimensionality reduction, etc. Recent work has established that if A is a random matrix with entries drawn independently from certain probability distributions, then exact recovery of x from these observations can be guaranteed with high probability. In this paper, we show that Toeplitz-structured matrices with entries drawn independently from the same distributions are also sufficient to recover x from y with high probability, and we compare the performance of such matrices with that of fully independent and identically distributed ones. The use of Toeplitz matrices in CS applications has several potential advantages: (i) they require the generation of only O(n) independent random variables; (ii) multiplication with Toeplitz matrices can be efficiently implemented using the fast Fourier transform, resulting in faster acquisition and reconstruction algorithms; and (iii) Toeplitz-structured matrices arise naturally in certain application areas such as system identification.
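
Two of the listed advantages are easy to demonstrate directly: the entire k x n sensing matrix is determined by a single length-(n + k - 1) random sequence, and the product Ax is a slice of a linear convolution, so it can be computed with FFTs instead of an explicit matrix multiply. The Python sketch below checks the two computations against each other; the sizes are arbitrary.

import numpy as np

rng = np.random.default_rng(3)
n, k = 512, 128                                      # signal length, number of measurements (arbitrary)
seq = rng.choice([-1.0, 1.0], size=n + k - 1)        # only O(n) random entries define the whole matrix

# Explicit k x n Toeplitz sensing matrix: row i holds seq[i+n-1], seq[i+n-2], ..., seq[i]
A = np.stack([seq[i:i + n][::-1] for i in range(k)])

x = np.zeros(n)
x[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)   # sparse test signal

# Matrix-free product: A @ x is a slice of the linear convolution seq * x,
# which the FFT evaluates in O((n + k) log(n + k)) time
L = 2 * n + k - 2                                    # length of the full linear convolution
conv = np.real(np.fft.ifft(np.fft.fft(seq, L) * np.fft.fft(x, L)))
y_fft = conv[n - 1:n - 1 + k]

print(np.allclose(A @ x, y_fft))                     # True: the two products agree
# Recovery of x from y_fft then proceeds with any standard CS solver.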

Collaboration


Dive into Robert D. Nowak's collaborations. Top co-authors include:

Rebecca Willett (University of Wisconsin-Madison)
Rui M. Castro (Eindhoven University of Technology)
Laura Balzano (University of Wisconsin-Madison)