Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Ronald A. DeVore is active.

Publication


Featured research published by Ronald A. DeVore.


IEEE Transactions on Information Theory | 1992

Image compression through wavelet transform coding

Ronald A. DeVore; Bjorn D. Jawerth; Bradley J. Lucier

A novel theory is introduced for analyzing image compression methods that are based on compression of wavelet decompositions. This theory precisely relates (a) the rate of decay in the error between the original image and the compressed image as the size of the compressed image representation increases (i.e., as the amount of compression decreases) to (b) the smoothness of the image in certain smoothness classes called Besov spaces. Within this theory, the error incurred by the quantization of wavelet transform coefficients is explained. Several compression algorithms based on piecewise constant approximations are analyzed in some detail. It is shown that, if pictures can be characterized by their membership in the smoothness classes considered, then wavelet-based methods are near-optimal within a larger class of stable transform-based, nonlinear methods of image compression. Based on previous experimental research, it is argued that in most instances the error incurred in image compression should be measured in the integral sense instead of the mean-square sense.
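
As a rough illustration of the coefficient-thresholding idea analyzed in this paper (not the authors' codec), the sketch below applies one level of a 2-D Haar transform, keeps only the largest-magnitude coefficients, and inverts the transform. The test image and the keep fraction are arbitrary choices for demonstration.

```python
import numpy as np

def haar2d(a):
    """One level of the orthonormal 2-D Haar transform: average band + 3 detail bands."""
    a = a.astype(float)
    p, q, r, s = a[0::2, 0::2], a[0::2, 1::2], a[1::2, 0::2], a[1::2, 1::2]
    return (p + q + r + s) / 2, ((p - q + r - s) / 2, (p + q - r - s) / 2, (p - q - r + s) / 2)

def ihaar2d(ll, details):
    """Invert haar2d."""
    lh, hl, hh = details
    a = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    a[0::2, 0::2] = (ll + lh + hl + hh) / 2
    a[0::2, 1::2] = (ll - lh + hl - hh) / 2
    a[1::2, 0::2] = (ll + lh - hl - hh) / 2
    a[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return a

def compress(img, keep_fraction=0.1):
    """Zero out all but the largest-magnitude wavelet coefficients, then reconstruct."""
    ll, (lh, hl, hh) = haar2d(img)
    coeffs = np.concatenate([c.ravel() for c in (ll, lh, hl, hh)])
    k = max(1, int(keep_fraction * coeffs.size))
    thresh = np.sort(np.abs(coeffs))[-k]
    keep = lambda c: np.where(np.abs(c) >= thresh, c, 0.0)
    return ihaar2d(keep(ll), (keep(lh), keep(hl), keep(hh)))

rng = np.random.default_rng(0)
img = np.cumsum(np.cumsum(rng.standard_normal((64, 64)), axis=0), axis=1)  # smooth-ish test "image"
approx = compress(img, keep_fraction=0.1)
print("relative L2 error:", np.linalg.norm(img - approx) / np.linalg.norm(img))
```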


Journal of the American Mathematical Society | 2008

Compressed sensing and best k-term approximation

Albert Cohen; Wolfgang Dahmen; Ronald A. DeVore

The typical paradigm for obtaining a compressed version of a discrete signal represented by a vector x ∈ ℝ^N is to choose an appropriate basis, compute the coefficients of x in this basis, and then retain only the k largest of these with k < N. If we are interested in a bit-stream representation, we also need to quantize these k coefficients. Assuming, without loss of generality, that x already represents the coefficients of the signal in the appropriate basis, this means that we pick an approximation to x in the set Σ_k of k-sparse vectors,

Σ_k := {x ∈ ℝ^N : # supp(x) ≤ k},    (1.1)

where supp(x) is the support of x, i.e., the set of i for which x_i ≠ 0, and #A is the number of elements in the set A. The best performance that we can achieve by such an approximation process in some given norm ‖ · ‖_X of interest is described by the best k-term approximation error σ_k(x)_X := inf_{z ∈ Σ_k} ‖x − z‖_X.
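
For concreteness, here is a small sketch of the best k-term approximation error in an ℓ_p norm: keeping the k largest-magnitude entries of x attains the infimum over Σ_k. The example vector is arbitrary.

```python
import numpy as np

def best_k_term(x, k):
    """Best k-term (k-sparse) approximation of x: keep the k largest-magnitude entries."""
    xk = np.zeros_like(x, dtype=float)
    if k > 0:
        idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest |x_i|
        xk[idx] = x[idx]
    return xk

def sigma_k(x, k, p=2):
    """Best k-term approximation error sigma_k(x) in the l_p norm."""
    return np.linalg.norm(x - best_k_term(x, k), ord=p)

x = np.array([5.0, -0.1, 2.0, 0.01, -3.0, 0.2])
for k in range(len(x) + 1):
    print(k, sigma_k(x, k))
```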


Journal of Complexity | 2007

Deterministic constructions of compressed sensing matrices

Ronald A. DeVore

Compressed sensing is a new area of signal processing. Its goal is to minimize the number of samples that need to be taken from a signal for faithful reconstruction. The performance of compressed sensing on signal classes is directly related to Gelfand widths. Similar to the deeper constructions of optimal subspaces in Gelfand widths, most sampling algorithms are based on randomization. However, for possible circuit implementation, it is important to understand what can be done with purely deterministic sampling. In this note, we show how to construct sampling matrices using finite fields. One such construction gives cyclic matrices which are interesting for circuit implementation. While the guaranteed performance of these deterministic constructions is not comparable to the random constructions, these matrices have the best known performance for purely deterministic constructions.
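
A sketch of one such finite-field construction, under the assumption that the matrix is built from graphs of polynomials of degree at most r over F_p: each column is the 0/1 indicator of {(x, Q(x)) : x ∈ F_p} inside a p × p grid, scaled by 1/√p. Two distinct polynomials of degree ≤ r agree in at most r points, which bounds the coherence by r/p. The parameters p and r below are illustrative.

```python
import itertools
import numpy as np

def deterministic_matrix(p, r):
    """p^2 x p^(r+1) sampling matrix: each column is the (scaled) indicator of the
    graph of a polynomial of degree <= r over the finite field F_p (p prime)."""
    A = np.zeros((p * p, p ** (r + 1)))
    xs = np.arange(p)
    # enumerate all polynomials Q(x) = c_0 + c_1 x + ... + c_r x^r with coefficients in F_p
    for j, coeffs in enumerate(itertools.product(range(p), repeat=r + 1)):
        vals = np.zeros(p, dtype=int)
        for c in reversed(coeffs):           # Horner evaluation of Q(x) mod p
            vals = (vals * xs + c) % p
        rows = xs * p + vals                 # row index of the grid point (x, Q(x))
        A[rows, j] = 1.0
    return A / np.sqrt(p)                    # unit-norm columns

A = deterministic_matrix(p=5, r=2)
print(A.shape)                               # (25, 125)
G = A.T @ A
np.fill_diagonal(G, 0.0)
print("max coherence:", G.max(), "  bound r/p:", 2 / 5)
```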


Advances in Computational Mathematics | 1996

Some remarks on greedy algorithms

Ronald A. DeVore; Vladimir N. Temlyakov

Estimates are given for the rate of approximation of a function by means of greedy algorithms. The estimates apply to approximation from an arbitrary dictionary of functions. Three greedy algorithms are discussed: the Pure Greedy Algorithm, an Orthogonal Greedy Algorithm, and a Relaxed Greedy Algorithm.
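
A minimal numerical sketch of two of the algorithms discussed, for a finite dictionary given as the unit-norm columns of a matrix: the Pure Greedy Algorithm (matching pursuit) and the Orthogonal Greedy Algorithm (orthogonal matching pursuit). The dictionary and target below are synthetic.

```python
import numpy as np

def pure_greedy(f, D, steps):
    """Pure Greedy Algorithm (matching pursuit): at each step subtract the best
    single-element approximation of the current residual. D has unit-norm columns."""
    r, approx = f.copy(), np.zeros_like(f)
    for _ in range(steps):
        j = np.argmax(np.abs(D.T @ r))       # element most correlated with the residual
        c = D[:, j] @ r
        approx += c * D[:, j]
        r -= c * D[:, j]
    return approx

def orthogonal_greedy(f, D, steps):
    """Orthogonal Greedy Algorithm (OMP): reproject f onto the span of all selected elements."""
    r, selected = f.copy(), []
    for _ in range(steps):
        selected.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, selected], f, rcond=None)
        r = f - D[:, selected] @ coef
    return D[:, selected] @ coef

rng = np.random.default_rng(1)
D = rng.standard_normal((50, 200))
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary
f = D[:, [3, 40, 100]] @ np.array([2.0, -1.0, 0.5])
for name, g in [("PGA", pure_greedy), ("OGA", orthogonal_greedy)]:
    print(name, np.linalg.norm(f - g(f, D, 10)))
```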


Mathematics of Computation | 2001

Adaptive wavelet methods for elliptic operator equations: convergence rates

Albert Cohen; Wolfgang Dahmen; Ronald A. DeVore

This paper is concerned with the construction and analysis of wavelet-based adaptive algorithms for the numerical solution of elliptic equations. These algorithms approximate the solution u of the equation by a linear combination of N wavelets. Therefore, a benchmark for their performance is provided by the rate of best approximation to u by an arbitrary linear combination of N wavelets (so-called N-term approximation), which would be obtained by keeping the N largest wavelet coefficients of the real solution (which of course is unknown). The main result of the paper is the construction of an adaptive scheme which produces an approximation to u with error O(N^{-s}) in the energy norm, whenever such a rate is possible by N-term approximation. The range of s > 0 for which this holds is only limited by the approximation properties of the wavelets together with their ability to compress the elliptic operator. Moreover, it is shown that the number of arithmetic operations needed to compute the approximate solution stays proportional to N. The adaptive algorithm applies to a wide class of elliptic problems and wavelet bases. The analysis in this paper puts forward new techniques for treating elliptic problems as well as the linear systems of equations that arise from the wavelet discretization.
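
Stated compactly (with notation that is ours, not quoted from the paper): if ψ_λ are the wavelets, ‖·‖_E the energy norm, and u_N the approximation produced by the adaptive scheme, the benchmark and the main result read roughly as follows.

```latex
% best N-term approximation error in the energy norm (notation ours)
\sigma_N(u) \;:=\; \inf_{\#\Lambda \le N}
  \Bigl\| \, u - \sum_{\lambda \in \Lambda} c_\lambda \psi_\lambda \Bigr\|_E ,
\qquad
\| u - u_N \|_E \;\le\; C\, N^{-s}
\quad\text{whenever}\quad \sigma_N(u) = O(N^{-s}).
```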


Numerische Mathematik | 2004

Adaptive Finite Element Methods with convergence rates

Peter Binev; Wolfgang Dahmen; Ronald A. DeVore

Adaptive Finite Element Methods for numerically solving elliptic equations are used often in practice. Only recently [12], [17] have these methods been shown to converge. However, this convergence analysis says nothing about the rates of convergence of these methods and therefore does not, in principle, yet guarantee any numerical advantages of adaptive strategies versus non-adaptive strategies. The present paper modifies the adaptive method of Morin, Nochetto, and Siebert [17] for solving the Laplace equation with piecewise linear elements on domains in ℝ^2 by adding a coarsening step and proves that this new method has certain optimal convergence rates in the energy norm (which is equivalent to the H^1 norm). Namely, it is shown that whenever s > 0 and the solution u is such that, for each n ≥ 1, it can be approximated to accuracy O(n^{-s}) in the energy norm by a continuous, piecewise linear function on a triangulation with n cells (using complete knowledge of u), then the adaptive algorithm constructs an approximation of the same type with the same asymptotic accuracy while using only information gained during the computational process. Moreover, the number of arithmetic computations in the proposed method is also of order O(n) for each n ≥ 1. The construction and analysis of this adaptive method relies on the theory of nonlinear approximation.
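
The algorithm itself couples a posteriori error estimation, marking, refinement, and a coarsening step for the unknown solution of the PDE. The toy sketch below only illustrates the error-driven refinement idea: it adaptively bisects the interval with the largest interpolation error for a known one-dimensional function whose smoothness degenerates at a point. The function, the error indicator, and the stopping rule are illustrative choices, not the paper's method.

```python
import numpy as np

def local_errors(f, nodes, samples=64):
    """Max error of the piecewise linear interpolant of f on each cell of the mesh."""
    errs = []
    for x0, x1 in zip(nodes[:-1], nodes[1:]):
        t = np.linspace(x0, x1, samples)
        lin = f(x0) + (f(x1) - f(x0)) * (t - x0) / (x1 - x0)
        errs.append(np.max(np.abs(f(t) - lin)))
    return errs

def adaptive_interpolation(f, a, b, n_cells):
    """Toy estimate/mark/refine loop: repeatedly bisect the cell with the largest error."""
    nodes = [a, b]
    while len(nodes) - 1 < n_cells:
        j = int(np.argmax(local_errors(f, nodes)))            # "mark" the worst cell
        nodes.insert(j + 1, 0.5 * (nodes[j] + nodes[j + 1]))  # "refine" by bisection
    return nodes

f = lambda x: np.sqrt(np.abs(x))                              # kink at 0 attracts refinement
for n in (8, 16, 32, 64):
    nodes = adaptive_interpolation(f, -1.0, 1.0, n)
    print(n, max(local_errors(f, nodes)))
```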


Transactions of the American Mathematical Society | 1988

Interpolation of Besov Spaces

Ronald A. DeVore; Vasil A. Popov

We investigate Besov spaces and their connection with dyadic spline approximation in L_p(Q), 0 < p < ∞. Our main results are: the determination of the interpolation spaces between a pair of Besov spaces; an atomic decomposition for functions in a Besov space; the characterization of the class of functions which have a certain prescribed degree of approximation by dyadic splines.
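
For reference (this notation is not taken from the paper), the real interpolation spaces referred to here are the ones generated by Peetre's K-functional:

```latex
K(f, t; X_0, X_1) \;:=\; \inf_{f = f_0 + f_1} \bigl( \|f_0\|_{X_0} + t \,\|f_1\|_{X_1} \bigr),
\qquad
\|f\|_{(X_0, X_1)_{\theta, q}} \;:=\;
\Bigl( \int_0^\infty \bigl[ t^{-\theta} K(f, t; X_0, X_1) \bigr]^q \, \frac{dt}{t} \Bigr)^{1/q},
\quad 0 < \theta < 1,\; 0 < q < \infty .
```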


Constructive Approximation | 1993

On the Construction of Multivariate (pre) Wavelets

Carl de Boor; Ronald A. DeVore; Amos Ron

A new approach for the construction of wavelets and prewavelets on ℝ^d from multiresolution is presented. The method uses only properties of shift-invariant spaces and orthogonal projectors from L_2(ℝ^d) onto these spaces, and requires neither decay nor stability of the scaling function. Furthermore, this approach allows a simple derivation of previous, as well as new, constructions of wavelets, and leads to a complete resolution of questions concerning the nature of the intersection and the union of a scale of spaces to be used in a multiresolution.


Siam Journal on Mathematical Analysis | 2011

Convergence Rates for Greedy Algorithms in Reduced Basis Methods

Peter Binev; Albert Cohen; Wolfgang Dahmen; Ronald A. DeVore; Guergana Petrova; Przemysław Wojtaszczyk

The reduced basis method was introduced for the accurate online evaluation of solutions to a parameter-dependent family of elliptic PDEs. Abstractly, it can be viewed as determining a “good” n-dimensional space \mathcal{H}_n.
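
As a small sketch of the greedy selection studied here, restricted to a finite training set of snapshots and using exact projection errors instead of the surrogate error estimators used in practice (so this is the "strong" greedy variant, and the parametric family below is a made-up example):

```python
import numpy as np

def greedy_basis(snapshots, n):
    """Greedy reduced-basis selection: at each step add the snapshot that is
    worst approximated by the span of the current basis."""
    F = np.asarray(snapshots, dtype=float)    # columns = snapshots u(mu) for training parameters mu
    Q = np.zeros((F.shape[0], 0))             # orthonormal basis of the reduced space
    chosen = []
    for _ in range(n):
        residuals = F - Q @ (Q.T @ F)         # projection errors onto the current space
        errs = np.linalg.norm(residuals, axis=0)
        j = int(np.argmax(errs))              # worst-approximated snapshot
        chosen.append(j)
        q = residuals[:, j] / errs[j]         # Gram-Schmidt: orthonormalize and append
        Q = np.column_stack([Q, q])
    return Q, chosen

# toy parametric family: u(mu)(x) = 1 / (1 + mu * x) sampled on a grid
x = np.linspace(0, 1, 200)
mus = np.linspace(0.1, 10.0, 100)
F = np.array([1.0 / (1.0 + mu * x) for mu in mus]).T
for n in (1, 2, 4, 8):
    Q, _ = greedy_basis(F, n)
    print(n, np.max(np.linalg.norm(F - Q @ (Q.T @ F), axis=0)))
```

For this smooth parametric family the printed worst-case projection errors decay rapidly with n; relating such greedy errors to the Kolmogorov n-width of the solution set is the subject of the paper.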


Foundations of Computational Mathematics | 2002

Adaptive Wavelet Methods II—Beyond the Elliptic Case

Albert Cohen; Wolfgang Dahmen; Ronald A. DeVore


Collaboration


Dive into Ronald A. DeVore's collaborations.

Top Co-Authors

Robert C. Sharpley
University of South Carolina

Peter Binev
University of South Carolina

G. G. Lorentz
University of Texas at Austin

Amos Ron
University of Wisconsin-Madison