David K. Hammond
University of Oregon
Publications
Featured research published by David K. Hammond.
IEEE Transactions on Information Theory | 2011
Laurent Jacques; David K. Hammond; M. Jalal Fadili
In this paper, we study the problem of recovering sparse or compressible signals from uniformly quantized measurements. We present a new class of convex optimization programs, or decoders, coined Basis Pursuit DeQuantizer of moment p (BPDQp), that model the quantization distortion more faithfully than the commonly used Basis Pursuit DeNoise (BPDN) program. Our decoders proceed by minimizing the sparsity of the signal to be reconstructed subject to a data-fidelity constraint expressed in the ℓp-norm of the residual error for 2 ≤ p ≤ ∞. We show theoretically that (i) the reconstruction error of these new decoders is bounded if the sensing matrix satisfies an extended Restricted Isometry Property involving the ℓp-norm, and (ii) for Gaussian random matrices and uniformly quantized measurements, BPDQp performance exceeds that of BPDN by dividing the reconstruction error due to quantization by √(p + 1). This last effect happens with high probability when the number of measurements exceeds a value growing with p, i.e., in an oversampled situation compared to what is commonly required by BPDN = BPDQ2. To demonstrate the theoretical power of BPDQp, we report numerical simulations on signal and image reconstruction problems.
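The decoder described above is a convex program: minimize the ℓ1-norm of the candidate signal subject to an ℓp bound on the measurement residual. Below is a minimal sketch in Python using cvxpy; the quantization setup, the choice of epsilon, and all variable names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a BPDQ_p-style decoder: minimize ||x||_1 subject to an
# l_p-norm bound on the residual, p >= 2. Illustrative only.
import numpy as np
import cvxpy as cp

def bpdq_decode(y, Phi, p, epsilon):
    """Recover a sparse signal from quantized measurements y ~ Q(Phi @ x0)."""
    x = cp.Variable(Phi.shape[1])
    problem = cp.Problem(cp.Minimize(cp.norm1(x)),
                         [cp.norm(y - Phi @ x, p) <= epsilon])
    problem.solve()
    return x.value

# Toy usage: Gaussian sensing matrix, sparse signal, uniform quantization.
rng = np.random.default_rng(0)
m, n, k = 80, 200, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
alpha = 0.05                              # quantization bin width (illustrative)
y = alpha * np.round(Phi @ x0 / alpha)    # uniformly quantized measurements
x_hat = bpdq_decode(y, Phi, p=4, epsilon=alpha * m ** 0.25)  # epsilon is a placeholder choice
```

With p = 2 this reduces to the usual BPDN constraint; larger p penalizes large residual entries more heavily, which is what lets the decoder exploit the bounded, roughly uniform character of quantization error in the oversampled regime described above.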
IEEE Transactions on Image Processing | 2008
David K. Hammond; Eero P. Simoncelli
We develop a statistical model to describe the spatially varying behavior of local neighborhoods of coefficients in a multiscale image representation. Neighborhoods are modeled as samples of a multivariate Gaussian density that are modulated and rotated according to the values of two hidden random variables, thus allowing the model to adapt to the local amplitude and orientation of the signal. A third hidden variable selects between this oriented process and a nonoriented scale mixture of Gaussians process, thus providing adaptability to the local orientedness of the signal. Based on this model, we develop an optimal Bayesian least squares estimator for denoising images and show through simulations that the resulting method exhibits significant improvement over previously published results obtained with Gaussian scale mixtures.
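Schematically, the Bayes least squares estimate used in this approach conditions on the hidden amplitude, orientation, and orientedness variables and then integrates them out; the form below is an illustrative summary, with symbols chosen here rather than taken from the paper.

```latex
\hat{x}(y) \;=\; \mathbb{E}[x \mid y]
\;=\; \sum_{h} \int\!\!\int \mathbb{E}[x \mid y, z, \theta, h]\;
      p(z, \theta, h \mid y)\, dz\, d\theta ,
```

where z is the hidden amplitude, θ the hidden orientation, and h the binary variable selecting the oriented or non-oriented process; each inner expectation is a linear (Wiener-type) estimate, since the coefficients are Gaussian once the hidden variables are fixed.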
Monthly Notices of the Royal Astronomical Society | 2009
David K. Hammond; Yves Wiaux; Pierre Vandergheynst
An algorithm is proposed for denoising the signal induced by cosmic strings in the cosmic microwave background. A Bayesian approach is taken, based on modelling the string signal in the wavelet domain with generalized Gaussian distributions. Good performance of the algorithm is demonstrated by simulated experiments at arcminute resolution under noise conditions including primary and secondary cosmic microwave background anisotropies, as well as instrumental noise.
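The wavelet-domain model referred to above places a generalized Gaussian prior on each coefficient of the string signal; a schematic form, with scale and shape symbols chosen here for illustration, is

```latex
p(w) \;\propto\; \exp\!\bigl( -\,|\,w / s\,|^{\,q} \bigr), \qquad 0 < q \le 2,
```

and the denoised coefficient is then the Bayesian estimate of w given its noisy observation under this prior.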
IEEE Transactions on Information Theory | 2013
Laurent Jacques; David K. Hammond; M. Jalal Fadili
This paper addresses the problem of stably recovering sparse or compressible signals from compressed sensing measurements that have undergone optimal nonuniform scalar quantization, i.e., minimizing the common ℓ2-norm distortion. Generally, this quantized compressed sensing (QCS) problem is solved by minimizing the ℓ1-norm constrained by the ℓ2-norm distortion. In such cases, remeasurement and quantization of the reconstructed signal do not necessarily match the initial observations, showing that the whole QCS model is not consistent. Our approach considers instead that quantization distortion more closely resembles heteroscedastic uniform noise, with variance depending on the observed quantization bin. Generalizing our previous work on uniform quantization, we show that for nonuniform quantizers described by the “compander” formalism, quantization distortion may be better characterized as having bounded weighted ℓp-norm (p ≥ 2), for a particular weighting. We develop a new reconstruction approach, termed Generalized Basis Pursuit DeNoise (GBPDN), which minimizes the ℓ1-norm of the signal to be reconstructed subject to this weighted ℓp-norm fidelity constraint. We prove that, for standard Gaussian sensing matrices and K-sparse or compressible signals in ℝ^N with at least Ω((K log(N/K))^(p/2)) measurements, i.e., under a strongly oversampled QCS scenario, GBPDN is ℓ2-ℓ1 instance optimal and stably recovers all such sparse or compressible signals. The reconstruction error decreases as O(2^(−B)/√(p + 1)) given a budget of B bits per measurement. This yields a reduction of the reconstruction error by a factor of √(p + 1) compared to that produced by ℓ2-norm constrained decoders. We also propose a primal-dual proximal splitting scheme to solve the GBPDN program, which is efficient for large-scale problems. Interestingly, extensive simulations testing the effectiveness of GBPDN confirm the trend predicted by the theory, that the reconstruction error can indeed be reduced by increasing p, but this is achieved at a much less stringent oversampling regime than the one expected by the theoretical bounds. Besides the QCS scenario, we also show that GBPDN applies straightforwardly to the related case of CS measurements corrupted by heteroscedastic generalized Gaussian noise, with provable reconstruction error reduction.
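Schematically, the GBPDN program has the form below, where the diagonal weights are determined by the observed quantization bins through the compander; the notation is an illustrative summary, not the paper's exact statement.

```latex
\hat{x} \;=\; \arg\min_{u \in \mathbb{R}^{N}} \; \|u\|_{1}
\quad \text{subject to} \quad
\bigl\| \operatorname{diag}(w)\,( y - \Phi u ) \bigr\|_{p} \;\le\; \epsilon ,
\qquad p \ge 2 .
```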
International Conference on Image Processing | 2006
David K. Hammond; Eero P. Simoncelli
We develop a statistical model for images that explicitly captures variations in local orientation and contrast. Patches of wavelet coefficients are described as samples of a fixed Gaussian process that are rotated and scaled according to a set of hidden variables representing the local image contrast and orientation. An optimal Bayesian least squares estimator is developed by conditioning upon and integrating over the hidden orientation and scale variables. The resulting denoising procedure gives results that are visually superior to those obtained with a Gaussian scale mixture model that does not explicitly incorporate local image orientation.
IEEE Transactions on Medical Imaging | 2013
David K. Hammond; Benoit Scherrer; Simon K. Warfield
The electroencephalography source estimation problem consists of inferring cortical activation from measurements of electrical potential taken on the scalp surface. This inverse problem is intrinsically ill-posed. In particular, the dimensionality of cortical sources greatly exceeds the number of electrode measurements, and source estimation requires regularization to obtain a unique solution. In this work, we introduce a novel regularization function called cortical graph smoothing, which exploits knowledge of anatomical connectivity available from diffusion-weighted imaging. Given a weighted graph description of the anatomical connectivity of the brain, cortical graph smoothing penalizes the weighted sum of squared differences of cortical activity across the graph edges, thus encouraging solutions with consistent activation across anatomically connected regions. We explore the performance of the cortical graph smoothing source estimates for analysis of event-related potentials for simple motor tasks, and compare against the commonly used minimum norm, weighted minimum norm, LORETA, and sLORETA source estimation methods. Evaluated over a series of 18 subjects, the proposed cortical graph smoothing method shows superior localization accuracy compared to the minimum norm method, and greater relative peak intensity than the other comparison methods.
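Because the penalty is a weighted sum of squared differences across graph edges, it can be written as xᵀLx for the graph Laplacian L, and the regularized least squares estimate has a closed form. The sketch below illustrates this under simplifying assumptions (a plain least-squares data term, a combinatorial Laplacian, made-up variable names); it is not the authors' implementation.

```python
# Hedged sketch of a cortical-graph-smoothing style estimate:
#   minimize ||y - G x||^2 + lam * x^T L x
# where L is the Laplacian of the anatomical connectivity graph.
import numpy as np

def cortical_graph_smoothing(y, G, A, lam=1.0):
    """Source estimate from scalp data y, lead field G, weighted adjacency A."""
    L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian
    return np.linalg.solve(G.T @ G + lam * L, G.T @ y)

# Toy usage: 32 electrodes, 500 cortical sources on a random sparse graph.
rng = np.random.default_rng(1)
n_sensors, n_sources = 32, 500
G = rng.standard_normal((n_sensors, n_sources))
A = (rng.random((n_sources, n_sources)) < 0.01).astype(float)
A = np.triu(A, 1); A = A + A.T              # symmetric weights, no self-loops
y = rng.standard_normal(n_sensors)
x_hat = cortical_graph_smoothing(y, G, A, lam=10.0)
```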
International Conference on Image Processing | 2009
Laurent Jacques; David K. Hammond; Mohamed-Jalal Fadili
In this paper, following the Compressed Sensing (CS) paradigm, we study the problem of recovering sparse or compressible signals from uniformly quantized measurements. We present a new class of convex optimization programs, or decoders, coined Basis Pursuit DeQuantizer of moment p (BPDQp), that model the quantization distortion more faithfully than the commonly used Basis Pursuit DeNoise (BPDN) program. Our decoders proceed by minimizing the sparsity of the signal to be reconstructed while enforcing a data fidelity term of bounded ℓp-norm, for 2 ≤ p ≤ ∞. We show that in oversampled situations, i.e., when the number of measurements is higher than the minimal value required by CS, the BPDQp decoders outperform BPDN, with the reconstruction error due to quantization divided by √(p + 1). This reduction relies on a modified Restricted Isometry Property of the sensing matrix expressed in the ℓp-norm (RIPp), a property satisfied by Gaussian random matrices with high probability. We conclude with numerical experiments comparing BPDQp and BPDN for signal and image reconstruction problems.
IEEE Global Conference on Signal and Information Processing | 2013
David K. Hammond; Yaniv Gur; Christopher R. Johnson
We propose a novel difference metric, called the graph diffusion distance (GDD), for quantifying the difference between two weighted graphs with the same number of vertices. Our approach is based on measuring the average similarity of heat diffusion on each graph. We compute the graph Laplacian exponential kernel matrices, corresponding to repeatedly solving the heat diffusion problem with initial conditions localized to single vertices. The GDD is then given by the Frobenius norm of the difference of the kernels, at the diffusion time yielding the maximum difference. We study properties of the proposed distance on both synthetic examples, and on real-data graphs representing human anatomical brain connectivity.
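A minimal sketch of the graph diffusion distance as described above: form the Laplacian heat kernels of both graphs, take the Frobenius norm of their difference, and maximize over diffusion time. The time grid and function names are illustrative choices, not the authors' code.

```python
# Hedged sketch of the graph diffusion distance (GDD) between two weighted
# graphs on the same vertex set.
import numpy as np
from scipy.linalg import expm

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

def graph_diffusion_distance(A1, A2, ts=np.linspace(0.01, 10.0, 200)):
    L1, L2 = laplacian(A1), laplacian(A2)
    diffs = [np.linalg.norm(expm(-t * L1) - expm(-t * L2), "fro") for t in ts]
    i = int(np.argmax(diffs))
    return diffs[i], ts[i]          # the distance and the maximizing diffusion time

# Toy usage: remove a single edge and measure the resulting distance.
rng = np.random.default_rng(2)
A1 = rng.random((10, 10)); A1 = np.triu(A1, 1); A1 = A1 + A1.T
A2 = A1.copy(); A2[0, 1] = A2[1, 0] = 0.0
gdd, t_star = graph_diffusion_distance(A1, A2)
```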
International Conference on Acoustics, Speech, and Signal Processing | 2012
David K. Hammond; Benoit Scherrer; Allen D. Malony
The source estimation problem for EEG consists of estimating cortical activity from measurements of electrical potential on the scalp surface. This is an underconstrained inverse problem, as the dimensionality of cortical source currents far exceeds the number of sensors. We develop a novel regularization for this inverse problem which incorporates knowledge of the anatomical connectivity of the brain, measured by diffusion tensor imaging. We construct an overcomplete wavelet frame, termed cortical graph wavelets, by applying the recently developed spectral graph wavelet transform to this anatomical connectivity graph. Our signal model is formed by assuming that the desired cortical currents have a sparse representation in these cortical graph wavelets, which leads to a convex ℓ1-regularized least squares problem for the coefficients. On data from a simple motor potential experiment, the proposed method shows improvement over the standard minimum-norm regularization.
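A minimal sketch of the resulting ℓ1-regularized least squares problem, solved here with plain iterative soft-thresholding (ISTA); the random matrix standing in for the cortical graph wavelet synthesis operator, the step size, and the regularization weight are all illustrative assumptions rather than the authors' setup.

```python
# Hedged sketch: estimate cortical currents x = W @ c with sparse wavelet
# coefficients c by minimizing (1/2)||y - G @ W @ c||^2 + lam * ||c||_1.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_source_estimate(y, G, W, lam=0.1, n_iter=500):
    M = G @ W                               # forward operator acting on coefficients
    step = 1.0 / np.linalg.norm(M, 2) ** 2  # reciprocal Lipschitz constant of the gradient
    c = np.zeros(M.shape[1])
    for _ in range(n_iter):
        grad = M.T @ (M @ c - y)
        c = soft_threshold(c - step * grad, step * lam)
    return W @ c                            # cortical currents

# Toy usage with a random "frame" in place of the graph wavelet transform.
rng = np.random.default_rng(3)
G = rng.standard_normal((32, 200))
W = rng.standard_normal((200, 400)) / 20.0
y = rng.standard_normal(32)
x_hat = ista_source_estimate(y, G, W)
```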
SIAM Journal on Matrix Analysis and Applications | 2016
J. J. P. Veerman; David K. Hammond
We describe the spectra of certain tridiagonal matrices arising from differential equations commonly used for modeling flocking behavior. In particular we consider systems resulting from allowing an arbitrary boundary condition for the end of a one-dimensional flock. We apply our results to demonstrate how asymptotic stability for consensus and flocking systems depends on the imposed boundary condition.
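To see how the boundary term moves the spectrum, the toy script below builds a nearest-neighbor coupling matrix for a one-dimensional flock with an adjustable coefficient in the last row and tracks the largest real part of its eigenvalues; the specific matrix and parameterization are illustrative and are not the system analyzed in the paper.

```python
# Hedged sketch: spectrum of a tridiagonal nearest-neighbor coupling matrix
# whose last row encodes an adjustable (illustrative) boundary condition.
import numpy as np

def tridiagonal_flock_matrix(n, boundary=1.0):
    A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    A[0, 0] = -1.0                 # leading agent sees only one neighbor
    A[-1, -2] = boundary           # trailing-end coupling strength
    A[-1, -1] = -(1.0 + boundary)
    return A

# Asymptotic stability requires all eigenvalues to have negative real part;
# the stability margin shifts with the boundary parameter.
for b in (0.5, 1.0, 2.0):
    eig = np.linalg.eigvals(tridiagonal_flock_matrix(50, boundary=b))
    print(f"boundary={b}: max Re(lambda) = {eig.real.max():.4f}")
```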