
Publication


Featured research published by Rebecca Willett.


Applied Optics | 2008

Single disperser design for coded aperture snapshot spectral imaging

Ashwin A. Wagadarikar; Renu John; Rebecca Willett; David J. Brady

We present a single disperser spectral imager that exploits recent theoretical work in the area of compressed sensing to achieve snapshot spectral imaging. An experimental prototype is used to capture the spatiospectral information of a scene that consists of two balls illuminated by different light sources. An iterative algorithm is used to reconstruct the data cube. The average spectral resolution is 3.6 nm per spectral channel. The accuracy of the instrument is demonstrated by comparison of the spectra acquired with the proposed system with the spectra acquired by a nonimaging reference spectrometer.
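The measurement geometry described above can be sketched schematically: each spectral band of a small data cube is masked by a shared binary aperture code, sheared one detector column per band by the disperser, and summed onto a 2-D detector. All shapes and names below (`cassi_forward`, `code`) are illustrative, not the authors' implementation.

```python
import numpy as np

def cassi_forward(cube, code):
    """Schematic single-disperser forward model (illustrative only):
    every spectral band of an (H, W, L) data cube is masked by the same
    binary aperture code, sheared one detector column per band, and
    summed onto a 2-D detector."""
    H, W, L = cube.shape
    det = np.zeros((H, W + L - 1))
    for l in range(L):
        det[:, l:l + W] += cube[:, :, l] * code  # mask, shift by l, accumulate
    return det
```

A point-like scene makes the shear visible: band l of a pixel lands l columns to the right of where band 0 lands, which is what makes the multiplexed 2-D measurement invertible by sparse reconstruction.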


Optics Express | 2007

Single-shot compressive spectral imaging with a dual-disperser architecture

Michael E. Gehm; Renu John; David J. Brady; Rebecca Willett; Timothy J. Schulz

This paper describes a single-shot spectral imaging approach based on the concept of compressive sensing. The primary features of the system design are two dispersive elements, arranged in opposition and surrounding a binary-valued aperture code. In contrast to thin-film approaches to spectral filtering, this structure results in easily controllable, spatially varying spectral filter functions with narrow features. Measurement of the input scene through these filters is equivalent to projective measurement in the spectral domain, and hence can be treated with the compressive sensing frameworks recently developed by a number of groups. We present a reconstruction framework and demonstrate its application to experimental data.


IEEE Transactions on Medical Imaging | 2003

Platelets: a multiscale approach for recovering edges and surfaces in photon-limited medical imaging

Rebecca Willett; Robert D. Nowak

The nonparametric multiscale platelet algorithms presented in this paper, unlike traditional wavelet-based methods, are both well suited to photon-limited medical imaging applications involving Poisson data and capable of better approximating edge contours. This paper introduces platelets, localized functions at various scales, locations, and orientations that produce piecewise linear image approximations, and a new multiscale image decomposition based on these functions. Platelets are well suited for approximating images consisting of smooth regions separated by smooth boundaries. For smoothness measured in certain Holder classes, it is shown that the error of m-term platelet approximations can decay significantly faster than that of m-term approximations in terms of sinusoids, wavelets, or wedgelets. This suggests that platelets may outperform existing techniques for image denoising and reconstruction. Fast, platelet-based, maximum penalized likelihood methods for photon-limited image denoising, deblurring and tomographic reconstruction problems are developed. Because platelet decompositions of Poisson distributed images are tractable and computationally efficient, existing image reconstruction methods based on expectation-maximization type algorithms can be easily enhanced with platelet techniques. Experimental results suggest that platelet-based methods can outperform standard reconstruction methods currently in use in confocal microscopy, image restoration, and emission tomography.


IEEE Transactions on Image Processing | 2012

This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms—Theory and Practice

Zachary T. Harmany; Roummel F. Marcia; Rebecca Willett

Observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where the number of unknowns may potentially be larger than the number of observations and f* admits sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
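A minimal sketch of the kind of iteration the abstract describes, with a separable quadratic (proximal gradient) step on the negative Poisson log-likelihood followed by l1 soft-thresholding and a nonnegativity projection, might look as follows. Step size, initialization, and names are assumptions for illustration, not the SPIRAL-TAP reference code.

```python
import numpy as np

def poisson_l1_sketch(y, A, tau=0.1, alpha=1.0, n_iter=200):
    """Hedged sketch (not the authors' reference code) of an l1-penalized
    Poisson reconstruction: each iteration takes a separable quadratic
    (proximal gradient) step on the negative Poisson log-likelihood,
    soft-thresholds, and projects onto the nonnegative orthant."""
    n = A.shape[1]
    f = np.full(n, max(y.mean(), 1.0))       # crude positive initialization
    for _ in range(n_iter):
        Af = np.maximum(A @ f, 1e-12)        # keep intensities strictly positive
        grad = A.T @ (1.0 - y / Af)          # gradient of -log-likelihood
        step = f - grad / alpha              # minimizer of the separable quadratic
        f = np.sign(step) * np.maximum(np.abs(step) - tau / alpha, 0.0)
        f = np.maximum(f, 0.0)               # Poisson intensities are nonnegative
    return f
```

The same proximal step accommodates the other penalties the paper mentions (total variation, partition-based multiscale terms) by swapping the soft-threshold for the corresponding proximal operator.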


Optical Engineering | 2011

Compressed sensing for practical optical imaging systems: a tutorial

Rebecca Willett; Roummel F. Marcia; Jonathan M. Nichols

The emerging field of compressed sensing (CS, also referred to as compressive sampling) has potentially powerful implications for the design of optical imaging devices. In particular, compressed sensing theory suggests that one can recover a scene at a higher resolution than is dictated by the pitch of the focal plane array. This rather remarkable result comes with some important caveats, however, especially when practical issues associated with physical implementation are taken into account. This tutorial discusses compressed sensing in the context of optical imaging devices, emphasizing the practical hurdles related to building such devices and offering suggestions for overcoming these hurdles. Examples and analysis specifically related to infrared imaging highlight the challenges associated with large-format focal plane arrays and how these challenges can be mitigated using compressed sensing ideas.
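The recovery claim can be illustrated with a generic sparse-recovery toy (not taken from the tutorial): iterative soft-thresholding (ISTA), a standard CS algorithm, applied to incoherent measurements of a sparse scene. All names, sizes, and parameters are illustrative.

```python
import numpy as np

def ista_recover(y, A, lam=0.05, n_iter=500):
    """Generic iterative soft-thresholding (ISTA) for y = A x with x
    sparse; a standard CS reconstruction used here only for illustration."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink toward sparsity
    return x

# toy demo: 32 incoherent measurements of a 3-sparse, 128-sample scene
rng = np.random.default_rng(1)
A = rng.standard_normal((32, 128)) / np.sqrt(32)
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [1.0, -0.7, 0.5]
x_hat = ista_recover(A @ x_true, A)
```

The demo recovers 128 unknowns from 32 measurements, which is the sense in which resolution can exceed the sampling rate when the scene is sparse.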


IEEE Journal on Selected Areas in Communications | 2004

Estimating inhomogeneous fields using wireless sensor networks

Robert D. Nowak; Urbashi Mitra; Rebecca Willett

Sensor networks have emerged as a fundamentally new tool for monitoring spatial phenomena. This paper describes a theory and methodology for estimating inhomogeneous, two-dimensional fields using wireless sensor networks. Inhomogeneous fields are composed of two or more homogeneous (smoothly varying) regions separated by boundaries. The boundaries, which correspond to abrupt spatial changes in the field, are nonparametric one-dimensional curves. The sensors make noisy measurements of the field, and the goal is to obtain an accurate estimate of the field at some desired destination (typically remote from the sensor network). The presence of boundaries makes this problem especially challenging. There are two key questions: 1) Given n sensors, how accurately can the field be estimated? 2) How much energy will be consumed by the communications required to obtain an accurate estimate at the destination? Theoretical upper and lower bounds on the estimation error and energy consumption are given. A practical strategy for estimation and communication is presented. The strategy, based on a hierarchical data-handling and communication architecture, provides a near-optimal balance of accuracy and energy consumption.


IEEE Transactions on Signal Processing | 2010

Compressed Sensing Performance Bounds Under Poisson Noise

Maxim Raginsky; Rebecca Willett; Zachary T. Harmany; Roummel F. Marcia

This paper describes performance bounds for compressed sensing (CS) where the underlying sparse or compressible (sparsely approximable) signal is a vector of nonnegative intensities whose measurements are corrupted by Poisson noise. In this setting, standard CS techniques cannot be applied directly for several reasons. First, the usual signal-independent and/or bounded noise models do not apply to Poisson noise, which is nonadditive and signal-dependent. Second, the CS matrices typically considered are not feasible in real optical systems because they do not adhere to important constraints, such as nonnegativity and photon flux preservation. Third, the typical l2 - l1 minimization leads to overfitting in the high-intensity regions and oversmoothing in the low-intensity areas. In this paper, we describe how a feasible positivity- and flux-preserving sensing matrix can be constructed, and then analyze the performance of a CS reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log likelihood term and a penalty term which measures signal sparsity. We show that, as the overall intensity of the underlying signal increases, an upper bound on the reconstruction error decays at an appropriate rate (depending on the compressibility of the signal), but that for a fixed signal intensity, the error bound actually grows with the number of measurements or sensors. This surprising fact is both proved theoretically and justified based on physical intuition.
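One simple way to obtain the kind of feasible matrix the abstract calls for is a shift-and-scale of a standard Rademacher design; the exact scaling below is an assumption for illustration, chosen so that every entry is nonnegative and each column sums to at most 1 (no photon gain, i.e. flux-preserving).

```python
import numpy as np

def feasible_sensing_matrix(m, n, seed=0):
    """Shift-and-scale construction (scaling is an illustrative assumption):
    a Rademacher +/-1 CS matrix is mapped to entries in {0, 1/m}, so the
    result is nonnegative and each column sum is at most 1."""
    rng = np.random.default_rng(seed)
    Z = rng.choice([-1.0, 1.0], size=(m, n))  # standard incoherent +/-1 design
    return (Z + 1.0) / (2.0 * m)              # entries in {0, 1/m}
```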


IEEE Transactions on Information Theory | 2007

Multiscale Poisson Intensity and Density Estimation

Rebecca Willett; Robert D. Nowak

The nonparametric Poisson intensity and density estimation methods studied in this paper offer near minimax convergence rates for broad classes of densities and intensities with arbitrary levels of smoothness. The methods and theory presented here share many of the desirable features associated with wavelet-based estimators: computational speed, spatial adaptivity, and the capability of detecting discontinuities and singularities with high resolution. Unlike traditional wavelet-based approaches, which impose an upper bound on the degree of smoothness to which they can adapt, the estimators studied here guarantee nonnegativity and do not require any a priori knowledge of the underlying signal's smoothness to guarantee near-optimal performance. At the heart of these methods lie multiscale decompositions based on free-knot, free-degree piecewise-polynomial functions and penalized likelihood estimation. The degrees as well as the locations of the polynomial pieces can be adapted to the observed data, resulting in near-minimax optimal convergence rates. For piecewise-analytic signals, in particular, the error of this estimator converges at nearly the parametric rate. These methods can be further refined in two dimensions, and it is demonstrated that platelet-based estimators in two dimensions exhibit similar near-optimal error convergence rates for images consisting of smooth surfaces separated by smooth boundaries.


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2008

Compressive coded aperture superresolution image reconstruction

Roummel F. Marcia; Rebecca Willett

Recent work in the emerging field of compressive sensing indicates that, when feasible, judicious selection of the type of distortion induced by measurement systems may dramatically improve our ability to perform reconstruction. The basic idea of this theory is that when the signal of interest is very sparse (i.e., zero-valued at most locations) or compressible, relatively few incoherent observations are necessary to reconstruct the most significant non-zero signal components. However, applying this theory to practical imaging systems is challenging in the face of several measurement system constraints. This paper describes the design of coded aperture masks for super-resolution image reconstruction from a single, low-resolution, noisy observation image. Based upon recent theoretical work on Toeplitz-structured matrices for compressive sensing, the proposed masks are fast and memory-efficient to compute. Simulations demonstrate the effectiveness of these masks in several different settings.
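A 1-D toy of the measurement model can make the Toeplitz structure concrete: circular convolution of the scene with a random binary code is a circulant (hence Toeplitz-structured) operator, and downsampling the result yields a single low-resolution observation. Names and parameters below are illustrative, not the paper's mask designs.

```python
import numpy as np

def coded_aperture_measure(x, factor=4, seed=0):
    """1-D toy of a compressive coded aperture (illustrative only):
    circularly convolve the scene with a random binary code -- a
    circulant, hence Toeplitz-structured, operator -- then downsample
    to one low-resolution observation."""
    rng = np.random.default_rng(seed)
    mask = rng.integers(0, 2, size=len(x)).astype(float)  # binary code
    # circular convolution via the FFT convolution theorem
    blurred = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(mask)))
    return blurred[::factor], mask
```

Because the operator is circulant, applying it (and its adjoint, needed inside any iterative reconstruction) costs only an FFT, which is why such masks are fast and memory-efficient.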


IEEE Signal Processing Magazine | 2014

Sparsity and Structure in Hyperspectral Imaging: Sensing, Reconstruction, and Target Detection

Rebecca Willett; Marco F. Duarte; Mark A. Davenport; Richard G. Baraniuk

Hyperspectral imaging is a powerful technology for remotely inferring the material properties of the objects in a scene of interest. Hyperspectral images consist of spatial maps of light intensity variation across a large number of spectral bands or wavelengths; alternatively, they can be thought of as a measurement of the spectrum of light transmitted or reflected from each spatial location in a scene. Because chemical elements have unique spectral signatures, observing the spectra at a high spatial and spectral resolution provides information about the material properties of the scene with much more accuracy than is possible with conventional three-color images. As a result, hyperspectral imaging is used in a variety of important applications, including remote sensing, astronomical imaging, and fluorescence microscopy.
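As a toy illustration of signature-based target detection (not a method from this article), each pixel's spectrum in a data cube can be scored against a known target signature by normalized correlation. The function and data below are hypothetical.

```python
import numpy as np

def spectral_match(cube, signature):
    """Toy per-pixel detector (illustrative only): score every spatial
    location of an (H, W, L) hyperspectral cube by the normalized
    correlation between its spectrum and a known target signature."""
    spectra = cube.reshape(-1, cube.shape[2])          # one row per pixel
    num = spectra @ signature
    den = np.linalg.norm(spectra, axis=1) * np.linalg.norm(signature) + 1e-12
    return (num / den).reshape(cube.shape[:2])         # score map over (H, W)
```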

Collaboration


Dive into Rebecca Willett's collaborations.

Top Co-Authors

Robert D. Nowak

University of Wisconsin-Madison


Albert Oh

University of Wisconsin-Madison
