Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where David L. Donoho is active.

Publication


Featured research published by David L. Donoho.


IEEE Transactions on Information Theory | 1995

De-noising by soft-thresholding

David L. Donoho

Donoho and Johnstone (1994) proposed a method for reconstructing an unknown function f on [0,1] from noisy data d_i = f(t_i) + σ·z_i, i = 0, ..., n−1, t_i = i/n, where the z_i are independent and identically distributed standard Gaussian random variables. The reconstruction f̂*_n is defined in the wavelet domain by translating all the empirical wavelet coefficients of d toward 0 by an amount σ·√(2 log(n)/n). The authors prove two results about this type of estimator. [Smooth]: with high probability, f̂*_n is at least as smooth as f, in any of a wide variety of smoothness measures. [Adapt]: the estimator comes nearly as close in mean square to f as any measurable estimator can come, uniformly over balls in each of two broad scales of smoothness classes. These two properties are unprecedented in several ways. The present proof of these results develops new facts about abstract statistical inference and its connection with an optimal recovery model.
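The shrinkage rule the abstract describes is simple to state in code. A minimal NumPy sketch (the wavelet transform itself is omitted; the function names are illustrative, not from the paper):

```python
import numpy as np

def soft_threshold(x, t):
    """Translate each coefficient toward 0 by t, clipping at 0 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise_coefficients(d_coeffs, sigma, n):
    """Shrink empirical wavelet coefficients by the amount in the abstract,
    sigma * sqrt(2 * log(n) / n), using the paper's function-space
    normalization of the coefficients."""
    t = sigma * np.sqrt(2.0 * np.log(n) / n)
    return soft_threshold(d_coeffs, t)
```

In practice one transforms the noisy samples to the wavelet domain, applies `denoise_coefficients`, and inverts the transform.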


Magnetic Resonance in Medicine | 2007

Sparse MRI: The Application of Compressed Sensing for Rapid MR Imaging

Michael Lustig; David L. Donoho; John M. Pauly

The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images such as angiograms are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain, for example, in terms of spatial finite differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed sensing, images with a sparse representation can be recovered from randomly undersampled k-space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts due to random undersampling add as noise-like interference. In the sparse transform domain the significant coefficients stand out above the interference. A nonlinear thresholding scheme can recover the sparse coefficients, effectively recovering the image itself. In this article, practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference. Incoherence is introduced by pseudo-random variable-density undersampling of phase-encodes. The reconstruction is performed by minimizing the ℓ1 norm of a transformed image, subject to data fidelity constraints. Examples demonstrate improved spatial resolution and accelerated acquisition for multislice fast spin-echo brain imaging and 3D contrast-enhanced angiography.
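The nonlinear thresholded recovery the abstract describes can be sketched in a few lines of NumPy (a 1D toy, not the paper's implementation; the signal length, sampling pattern, fixed threshold, and iteration count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 256, 5, 96              # signal length, sparsity, k-space samples kept

# A k-sparse test "image" (1D for brevity) and randomly undersampled DFT data.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = 2.0 + rng.random(k)          # spikes well above the aliasing level
mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, size=m, replace=False)] = True
y = np.fft.fft(x_true)[mask]

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Alternate data consistency in k-space with soft thresholding in the sparse
# (here: pixel) domain; random undersampling makes the aliasing noise-like,
# so thresholding separates it from the significant coefficients.
x = np.zeros(n)
for _ in range(200):
    X = np.fft.fft(x)
    X[mask] = y                                # enforce the measured samples
    x = soft(np.fft.ifft(X).real, 0.1)
```

The same alternation, with a wavelet or finite-difference transform in place of the identity, is the shape of practical compressed-sensing MRI reconstructions.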


Journal of the American Statistical Association | 1995

Adapting to Unknown Smoothness via Wavelet Shrinkage

David L. Donoho; Iain M. Johnstone

We attempt to recover a function of unknown smoothness from noisy sampled data. We introduce a procedure, SureShrink, that suppresses noise by thresholding the empirical wavelet coefficients. The thresholding is adaptive: a threshold level is assigned to each dyadic resolution level by the principle of minimizing the Stein unbiased estimate of risk (Sure) for threshold estimates. The computational effort of the overall procedure is order N · log(N) as a function of the sample size N. SureShrink is smoothness adaptive: if the unknown function contains jumps, then the reconstruction (essentially) does also; if the unknown function has a smooth piece, then the reconstruction is (essentially) as smooth as the mother wavelet will allow. The procedure is in a sense optimally smoothness adaptive: it is near minimax simultaneously over a whole interval of the Besov scale; the size of this interval depends on the choice of mother wavelet. We know from a previous paper by the authors that traditional smoot...
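The SURE principle behind the level-by-level threshold choice is compact enough to write out. A NumPy sketch for unit noise variance (illustrative names, not the authors' code; in SureShrink this selection is applied separately at each dyadic resolution level):

```python
import numpy as np

def sure_soft(x, t):
    """Stein's unbiased risk estimate for soft thresholding the coefficients
    x at level t, assuming unit noise variance:
    SURE(t) = n - 2 * #{i : |x_i| <= t} + sum_i min(x_i^2, t^2)."""
    n = x.size
    return n - 2.0 * np.sum(np.abs(x) <= t) + np.sum(np.minimum(x**2, t**2))

def sure_threshold(x):
    """Pick the threshold minimizing SURE; it suffices to search over the
    coefficient magnitudes themselves (plus 0)."""
    candidates = np.concatenate(([0.0], np.abs(x)))
    risks = [sure_soft(x, t) for t in candidates]
    return candidates[int(np.argmin(risks))]
```

Because SURE is piecewise quadratic in t with breakpoints at the |x_i|, the restriction to those candidates loses nothing, which is what keeps the whole procedure at order N · log(N).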


Proceedings of the National Academy of Sciences of the United States of America | 2003

Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization

David L. Donoho; Michael Elad

Given a dictionary D = {dk} of vectors dk, we seek to represent a signal S as a linear combination S = ∑k γ(k)dk, with scalar coefficients γ(k). In particular, we aim for the sparsest representation possible. In general, this requires a combinatorial optimization process. Previous work considered the special case where D is an overcomplete system consisting of exactly two orthobases and has shown that, under a condition of mutual incoherence of the two bases, and assuming that S has a sufficiently sparse representation, this representation is unique and can be found by solving a convex optimization problem: specifically, minimizing the ℓ1 norm of the coefficients γ. In this article, we obtain parallel results in a more general setting, where the dictionary D can arise from two or several bases, frames, or even less structured systems. We sketch three applications: separating linear features from planar ones in 3D data, noncooperative multiuser encoding, and identification of overcomplete independent component models.
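The mutual-incoherence condition is easy to compute for a concrete dictionary. A NumPy illustration on a two-orthobasis example (identity plus normalized 4×4 Hadamard; the bound quoted in the comment is the one from this line of work):

```python
import numpy as np

def mutual_coherence(D):
    """Largest |inner product| between distinct normalized columns of D."""
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

# Two orthobases: the identity and the normalized 4x4 Hadamard basis.
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
D = np.hstack([np.eye(4), np.kron(H2, H2) / 2.0])

mu = mutual_coherence(D)          # 1/2 for this pair of bases
# Uniqueness bound: a representation with fewer than (1 + 1/mu)/2 nonzeros
# is the unique sparsest one, and l1 minimization recovers it.
bound = (1.0 + 1.0 / mu) / 2.0
```

Lower coherence (more "spread out" cross products) raises the bound, allowing denser representations to be certified as the sparsest.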


Multiscale Modeling & Simulation | 2006

Fast Discrete Curvelet Transforms

Emmanuel J. Candès; Laurent Demanet; David L. Donoho; Lexing Ying

This paper describes two digital implementations of a new mathematical transform, namely, the second generation curvelet transform in two and three dimensions. The first digital transformation is based on unequally spaced fast Fourier transforms, while the second is based on the wrapping of specially selected Fourier samples. The two implementations essentially differ by the choice of spatial grid used to translate curvelets at each scale and angle. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. And both implementations are fast in the sense that they run in O(n^2 log n) flops for n by n Cartesian arrays; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity. Our digital transformations improve upon earlier implementations—based upon the first generation of curvelets—in the sense that they are conceptually simpler, faster, and far less redundant. The software CurveLab, which implements both transforms presented in this paper, is available at http://www.curvelet.org.


Siam Review | 2009

From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images

Alfred M. Bruckstein; David L. Donoho; Michael Elad

A full-rank matrix A ∈ ℝ^{n×m} with n < m generates an underdetermined system of linear equations.
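The starting point of the survey, an underdetermined full-rank system with infinitely many solutions, is worth seeing concretely. A small NumPy illustration (dimensions and the planted sparse solution are arbitrary choices) contrasting a sparse solution with the dense minimum-ℓ2-norm one:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 20                       # n equations, m unknowns, n < m
A = rng.standard_normal((n, m))    # full rank with probability 1

x_sparse = np.zeros(m)
x_sparse[[2, 11]] = [1.0, -3.0]    # a planted 2-sparse solution
b = A @ x_sparse

# The minimum-l2-norm solution satisfies the same equations but is
# generically fully dense: the l2 criterion does not find the sparse answer,
# which is why sparse modeling turns to l0/l1 objectives instead.
x_l2 = np.linalg.pinv(A) @ b
```

Inspecting `x_l2` shows essentially all m entries nonzero, even though a 2-sparse solution exists.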


Proceedings of the National Academy of Sciences of the United States of America | 2003

Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data

David L. Donoho; Carrie Grimes



IEEE Signal Processing Magazine | 2008

Compressed Sensing MRI

Michael Lustig; David L. Donoho; Juan M. Santos; John M. Pauly



Proceedings of the National Academy of Sciences of the United States of America | 2009

Message-passing algorithms for compressed sensing

David L. Donoho; Arian Maleki; Andrea Montanari



IEEE Transactions on Image Processing | 2005

Image decomposition via the combination of sparse representations and a variational approach

Jean-Luc Starck; Michael Elad; David L. Donoho


Collaboration


Dive into David L. Donoho's collaborations.

Top Co-Authors

Jean-Luc Starck
Centre national de la recherche scientifique

Xiaoming Huo
Georgia Institute of Technology

Michael Elad
Technion – Israel Institute of Technology

Ana Georgina Flesia
National University of Cordoba