Roman Vershynin
University of Michigan
Publications
Featured research published by Roman Vershynin.
Foundations of Computational Mathematics | 2009
Deanna Needell; Roman Vershynin
This paper seeks to bridge the two major algorithmic approaches to sparse signal recovery from an incomplete set of linear measurements: L1-minimization methods and iterative methods (Matching Pursuits). We find a simple regularized version of Orthogonal Matching Pursuit (ROMP) which has the advantages of both approaches: the speed and transparency of OMP and the strong uniform guarantees of L1-minimization. Our algorithm, ROMP, reconstructs a sparse signal in a number of iterations linear in the sparsity, and the reconstruction is exact provided the linear measurements satisfy the uniform uncertainty principle.
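The two-stage structure of ROMP (a greedy identification step, a regularization step that keeps coordinates of comparable magnitude, then a least-squares update) is concrete enough to sketch. The following is a minimal numpy sketch under our reading of the algorithm; the function name `romp`, the iteration cap, and the stopping tolerance are our choices, not the authors' reference code.

```python
import numpy as np

def romp(Phi, x, s):
    """Sketch of Regularized Orthogonal Matching Pursuit (ROMP).
    Phi: N x d measurement matrix, x: measurements, s: sparsity level."""
    N, d = Phi.shape
    support = []                        # indices selected so far
    r = x.copy()                        # current residual
    for _ in range(s):                  # iterations linear in the sparsity
        u = Phi.T @ r                   # correlate residual with the columns
        J = np.argsort(-np.abs(u))[:s]  # identify: s largest coordinates
        vals = np.abs(u[J])             # sorted in decreasing order
        # Regularize: among windows with comparable magnitudes
        # (largest <= 2 * smallest), keep the one of maximal energy.
        best, best_energy = J[:1], -1.0
        for i in range(len(J)):
            j = i + 1
            while j < len(J) and vals[j] >= vals[i] / 2:
                j += 1
            energy = float(np.sum(vals[i:j] ** 2))
            if energy > best_energy:
                best, best_energy = J[i:j], energy
        support = sorted(set(support) | set(best.tolist()))
        # Least-squares fit on the current support, then update the residual.
        z, *_ = np.linalg.lstsq(Phi[:, support], x, rcond=None)
        r = x - Phi[:, support] @ z
        if np.linalg.norm(r) < 1e-10:   # residual vanished: exact recovery
            break
    v_hat = np.zeros(d)
    v_hat[support] = z
    return v_hat
```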
IEEE Journal of Selected Topics in Signal Processing | 2010
Deanna Needell; Roman Vershynin
We demonstrate a simple greedy algorithm that can reliably recover a vector v ∈ R^d from incomplete and inaccurate measurements x = Φv + e. Here, Φ is an N × d measurement matrix with N ≪ d, and e is an error vector. Our algorithm, Regularized Orthogonal Matching Pursuit (ROMP), seeks to provide the benefits of the two major approaches to sparse recovery. It combines the speed and ease of implementation of the greedy methods with the strong guarantees of the convex programming methods. For any measurement matrix Φ that satisfies a quantitative restricted isometry principle, ROMP recovers a signal v with O(n) nonzeros from its inaccurate measurements x in at most n iterations, where each iteration amounts to solving a least squares problem. The noise level of the recovery is proportional to √(log n) · ||e||_2. In particular, if the error term e vanishes, the reconstruction is exact.
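To illustrate the error bound √(log n) · ||e||_2 (where n here denotes the sparsity), the following hedged demo feeds noisy Gaussian measurements to the `romp` sketch from the previous block; the problem sizes and noise scale are arbitrary illustrative choices.

```python
import numpy as np
rng = np.random.default_rng(0)

N, d, n = 128, 512, 8                  # n = sparsity, as in the abstract
Phi = rng.standard_normal((N, d)) / np.sqrt(N)  # Gaussian: satisfies RIP w.h.p.
v = np.zeros(d)
v[rng.choice(d, n, replace=False)] = rng.standard_normal(n)
e = 0.01 * rng.standard_normal(N)      # measurement error
x = Phi @ v + e                        # incomplete, inaccurate measurements

v_hat = romp(Phi, x, n)                # `romp` from the sketch above
print("recovery error      :", np.linalg.norm(v_hat - v))
print("sqrt(log n) * ||e||_2:", np.sqrt(np.log(n)) * np.linalg.norm(e))
```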
Foundations of Computer Science | 2005
Emmanuel J. Candès; Mark Rudelson; Terence Tao; Roman Vershynin
Suppose we wish to transmit a vector f ∈ R^n reliably. A frequently discussed approach consists in encoding f with an m × n coding matrix A. Assume now that a fraction of the entries of Af are corrupted in a completely arbitrary fashion by an error e. We do not know which entries are affected, nor do we know how they are affected. Is it possible to recover f exactly from the corrupted m-dimensional vector y = Af + e?
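The paper's answer is decoding by L1-minimization: under suitable conditions on A, minimizing ||y - Ag||_1 over g recovers f exactly. The sketch below recasts that minimization as a linear program with scipy; the sizes, corruption fraction, and Gaussian coding matrix are our illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.optimize import linprog
rng = np.random.default_rng(1)

n, m = 20, 60
A = rng.standard_normal((m, n))        # coding matrix (illustrative choice)
f = rng.standard_normal(n)
e = np.zeros(m)
corrupted = rng.choice(m, size=m // 10, replace=False)
e[corrupted] = 10 * rng.standard_normal(len(corrupted))  # arbitrary corruptions
y = A @ f + e

# min_{g,t} sum(t)  s.t.  -t <= y - A g <= t, variables stacked as [g, t].
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
f_hat = res.x[:n]
print("max decoding error:", np.max(np.abs(f_hat - f)))
```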
Symposium on the Theory of Computing | 2007
Anna C. Gilbert; Martin J. Strauss; Joel A. Tropp; Roman Vershynin
Compressed Sensing is a new paradigm for acquiring the compressible signals that arise in many applications. These signals can be approximated using an amount of information much smaller than the nominal dimension of the signal. Traditional approaches acquire the entire signal and process it to extract the information. The new approach acquires a small number of nonadaptive linear measurements of the signal and uses sophisticated algorithms to determine its information content. Emerging technologies can compute these general linear measurements of a signal at unit cost per measurement. This paper exhibits a randomized measurement ensemble and a signal reconstruction algorithm that satisfy four requirements:
1. The measurement ensemble succeeds for all signals, with high probability over the random choices in its construction.
2. The number of measurements of the signal is optimal, except for a factor polylogarithmic in the signal length.
3. The running time of the algorithm is polynomial in the amount of information in the signal and polylogarithmic in the signal length.
4. The recovery algorithm offers the strongest possible type of error guarantee. Moreover, it is a fully polynomial approximation scheme with respect to this type of error bound.
Emerging applications demand this level of performance. Yet no other algorithm in the literature simultaneously achieves all four of these desiderata.
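As a hedged illustration of the measurement model in this paragraph (not the paper's actual ensemble, which is more structured so that decoding can run in sublinear time), the sketch below acquires m ≈ s·polylog(d) nonadaptive random-sign measurements of a compressible signal and prints the best s-term approximation error that requirement 4 benchmarks against. All sizes are our assumptions.

```python
import numpy as np
rng = np.random.default_rng(2)

d = 1024                                   # nominal signal length
# A compressible signal: coefficients decay like a power law.
x = np.sign(rng.standard_normal(d)) * np.arange(1, d + 1) ** -1.5
rng.shuffle(x)
s = 10                                     # "information content": ~s big terms
m = int(4 * s * np.log2(d))                # s * polylog(d) measurements
Phi = rng.choice([-1.0, 1.0], size=(m, d)) / np.sqrt(m)  # unit cost per entry
y = Phi @ x                                # nonadaptive linear measurements
print(f"{m} measurements vs nominal dimension {d}")
# The benchmark a decoder competes with: the best s-term approximation.
tail = np.sort(np.abs(x))[:-s]
print("best s-term approximation error:", np.linalg.norm(tail))
```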
Journal of the ACM | 2007
Mark Rudelson; Roman Vershynin
We study random submatrices of a large matrix A. We show how to approximately compute A from its random submatrix of the smallest possible size O(r log r) with a small error in the spectral norm, where r = ||A||_F^2 / ||A||_2^2 is the numerical rank of A. The numerical rank is always bounded by, and is a stable relaxation of, the rank of A. This yields an asymptotically optimal guarantee in an algorithm for computing low-rank approximations of A. We also prove asymptotically optimal estimates on the spectral norm and the cut-norm of random submatrices of A. The result for the cut-norm yields a slight improvement on the best-known sample complexity for an approximation algorithm for MAX-2CSP problems. We use methods of probability in Banach spaces, in particular the law of large numbers for operator-valued random variables.
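The quantities involved are easy to compute, so the O(r log r) sampling size can be tried numerically. Below is a simplified sketch (uniform row sampling and a projection onto the sampled rows as the low-rank proxy; the paper's scheme and error analysis are more careful), with a test matrix of numerical rank about 10.

```python
import numpy as np
rng = np.random.default_rng(3)

m, n = 500, 200
# A matrix whose spectrum decays fast, so the numerical rank is small.
U, _ = np.linalg.qr(rng.standard_normal((m, 50)))
V, _ = np.linalg.qr(rng.standard_normal((n, 50)))
sigma = np.concatenate([np.ones(10), np.full(40, 0.01)])
A = (U * sigma) @ V.T

r = np.linalg.norm(A, "fro") ** 2 / np.linalg.norm(A, 2) ** 2  # numerical rank
k = int(np.ceil(r * np.log(r))) + 1                            # ~ O(r log r) rows
rows = rng.choice(m, size=k, replace=False)

# Low-rank proxy: project A onto the span of the sampled rows.
Q, _ = np.linalg.qr(A[rows].T)          # orthonormal basis for sampled rows
err = np.linalg.norm(A - A @ Q @ Q.T, 2)
print(f"numerical rank r = {r:.1f}, sampled {k} of {m} rows, "
      f"spectral error = {err:.3g} (||A||_2 = 1)")
```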
Conference on Information Sciences and Systems | 2006
Mark Rudelson; Roman Vershynin
This paper proves the best known guarantees for exact reconstruction of a sparse signal f from few non-adaptive universal linear measurements. We consider Fourier measurements (a random sample of frequencies of f) and random Gaussian measurements. The method for reconstruction that has recently gained momentum in sparse approximation theory is to relax this highly non-convex problem to a convex problem, and then solve it as a linear program. Exactly when the reconstruction problem is equivalent to its convex relaxation remains an open question. Recent work shows that the number of measurements k(r,n) needed to exactly reconstruct any r-sparse signal f of length n from its linear measurements with convex relaxation is usually O(r polylog(n)). However, known guarantees involve huge constants, in spite of the very good performance of the algorithms in practice. In an attempt to reconcile theory with practice, we prove the first guarantees for universal measurements (i.e., measurements that work for all sparse functions) with reasonable constants. For Gaussian measurements, k(r,n) ≲ 11.7 r [1.5 + log(n/r)], which is optimal up to constants. For Fourier measurements, we prove the best known bound k(r,n) = O(r log(n) · log^2(r) · log(r log n)), which is optimal within the log log n and log^3 r factors. Our arguments are based on techniques of geometric functional analysis and probability in Banach spaces.
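Since the Gaussian guarantee comes with an explicit constant, it can be tested directly. The sketch below sets the number of measurements by k ≈ 11.7 r [1.5 + log(n/r)] and solves the convex relaxation (basis pursuit, min ||g||_1 subject to Φg = y) as a linear program; the problem sizes and the scipy LP formulation are our illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog
rng = np.random.default_rng(4)

n, r = 2000, 4
k = int(np.ceil(11.7 * r * (1.5 + np.log(n / r))))  # the paper's Gaussian bound
Phi = rng.standard_normal((k, n))
f = np.zeros(n)
f[rng.choice(n, r, replace=False)] = rng.standard_normal(r)
y = Phi @ f

# LP formulation: split g = p - q with p, q >= 0 and minimize sum(p) + sum(q).
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([Phi, -Phi]), b_eq=y,
              bounds=[(0, None)] * (2 * n))
f_hat = res.x[:n] - res.x[n:]
print(f"k = {k} measurements, max reconstruction error = "
      f"{np.max(np.abs(f_hat - f)):.2e}")
```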
IEEE Transactions on Information Theory | 2013
Yaniv Plan; Roman Vershynin
This paper develops theoretical results regarding noisy 1-bit compressed sensing and sparse binomial regression. We demonstrate that a single convex program gives an accurate estimate of the signal, or coefficient vector, for both of these models. We show that an s-sparse signal in R^n can be accurately estimated from m = O(s log(n/s)) single-bit measurements using a simple convex program. This remains true even if each measurement bit is flipped with probability nearly 1/2. Worst-case (adversarial) noise can also be accounted for, and uniform results that hold for all sparse inputs are derived as well. In the terminology of sparse logistic regression, we show that O(s log(2n/s)) Bernoulli trials are sufficient to estimate a coefficient vector in R^n which is approximately s-sparse. Moreover, the same convex program works for virtually all generalized linear models, in which the link function may be unknown. To our knowledge, these are the first results that tie together the theory of sparse logistic regression with 1-bit compressed sensing. Our results apply to general signal structures aside from sparsity; one only needs to know the size of the set K where signals reside. The size is given by the mean width of K, a computable quantity whose square serves as a robust extension of the dimension.
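A hedged sketch of the 1-bit setting: observe only the signs y = sign(Ax), flip a fraction of bits, and maximize agreement with the measurements over a convex relaxation of the sparse unit ball. This is our reading of the paper's convex program (scaling conventions may differ), implemented with cvxpy; sizes and noise level are arbitrary.

```python
import numpy as np
import cvxpy as cp
rng = np.random.default_rng(5)

n, s = 100, 4
m = int(50 * s * np.log(n / s))        # m = O(s log(n/s)) single-bit measurements
A = rng.standard_normal((m, n))
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)                 # 1-bit data only identifies the direction
y = np.sign(A @ x)
y[rng.random(m) < 0.1] *= -1           # flip 10% of the bits (noise)

# Maximize <y, Ax'> over K = {||x'||_1 <= sqrt(s), ||x'||_2 <= 1}.
xh = cp.Variable(n)
prob = cp.Problem(cp.Maximize((A.T @ y) @ xh),
                  [cp.norm1(xh) <= np.sqrt(s), cp.norm(xh, 2) <= 1])
prob.solve()
x_hat = xh.value / np.linalg.norm(xh.value)
print("correlation with the true signal:", float(x_hat @ x))
```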
International Mathematics Research Notices | 2005
Mark Rudelson; Roman Vershynin
The results of this paper can be stated in three equivalent ways: in terms of the sparse recovery problem, the error-correction problem, and the problem of existence of certain extremal (neighborly) polytopes. Error-correcting codes are used in modern technology to protect information from errors. Information is formed by finite words over some alphabet F. The encoder transforms an n-letter word x into an m-letter word y with m > n. The decoder must be able to recover x correctly when up to r letters of y are corrupted in any way. Such an encoder-decoder pair is called an (n, m, r)-error-correcting code. The development of algorithmically efficient error-correcting codes has attracted the attention of engineers, computer scientists, and applied mathematicians for the past five decades. Known constructions involve deep algebraic and combinatorial methods; see [34, 35, 36]. This paper develops an approach to error-correcting codes from the viewpoint of geometric functional analysis (asymptotic convex geometry). It thus belongs to a common ground of coding theory, signal processing, combinatorial geometry, and geometric functional analysis. Our argument, outlined in Section 3, may be of independent interest in geometric functional analysis. Our main focus is on words over the alphabet F = R or C. In applications, these words may be formed of the coefficients of some signal (such as an image or audio recording).
Proceedings of the American Mathematical Society | 2005
Peter G. Casazza; Ole Christensen; Alexander Lindner; Roman Vershynin
We show that the conjectured generalization of the Bourgain-Tzafriri restricted-invertibility theorem is equivalent to the conjecture of Feichtinger, stating that every bounded frame can be written as a finite union of Riesz basic sequences. We prove that any bounded frame can at least be written as a finite union of linearly independent sequences. We further show that the two conjectures are implied by the paving conjecture. Finally, we show that Weyl-Heisenberg frames over rational lattices are finite unions of Riesz basic sequences.
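As a toy numerical illustration of the statement that a bounded frame splits into finitely many linearly independent sequences (our own example, not one from the paper): the three-vector "Mercedes-Benz" frame in R^2 is a tight frame and splits into two linearly independent families.

```python
import numpy as np

# The "Mercedes-Benz" frame: three unit vectors in R^2 at 120-degree angles.
angles = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
frame = np.column_stack([np.cos(angles), np.sin(angles)])  # rows are vectors

# Frame condition: the frame operator S has eigenvalues bounded away from 0.
S = frame.T @ frame
print("frame bounds:", np.linalg.eigvalsh(S))          # tight frame: both 1.5

# Split into linearly independent families: {f0, f1} (rank 2, hence
# independent) and the singleton {f2}, as the theorem guarantees.
print("rank of {f0, f1}:", np.linalg.matrix_rank(frame[:2]))
```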
Probability Theory and Related Fields | 2016
Olivier Guédon; Roman Vershynin
We present a simple and flexible method to prove consistency of semidefinite optimization problems on random graphs. The method is based on Grothendieck's inequality. Unlike previous uses of this inequality, which lead to constant relative accuracy, we achieve any given relative accuracy by leveraging randomness. We illustrate the method with the problem of community detection in sparse networks, those with bounded average degrees. We demonstrate that even in this regime, various simple and natural semidefinite programs can be used to recover the community structure up to an arbitrarily small fraction of misclassified vertices. The method is general; it can be applied to a variety of stochastic models of networks and semidefinite programs.
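A hedged sketch of the kind of semidefinite program the paper analyzes, on a small two-community stochastic block model. The exact constraint set (here a balanced-partition variant), the graph sizes, and the cvxpy tooling (an SDP-capable solver such as SCS is assumed installed) are our illustrative choices; the paper's sparse-regime analysis requires more care than this dense toy example.

```python
import numpy as np
import cvxpy as cp
rng = np.random.default_rng(6)

# Stochastic block model: two balanced communities, denser inside than across.
n, p_in, p_out = 40, 0.5, 0.1
labels = np.repeat([1, -1], n // 2)
P = np.where(np.equal.outer(labels, labels), p_in, p_out)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-loops

# SDP: maximize <A, Z> over PSD Z with unit diagonal and balanced partition.
Z = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Maximize(cp.trace(A @ Z)),
                  [Z >> 0, cp.diag(Z) == 1, cp.sum(Z) == 0])
prob.solve()

# Read communities off the leading eigenvector of the SDP solution.
w, V = np.linalg.eigh(Z.value)
guess = np.sign(V[:, -1])
acc = max(np.mean(guess == labels), np.mean(guess == -labels))
print("fraction correctly classified:", acc)
```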