Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Mark Rudelson is active.

Publication


Featured research published by Mark Rudelson.


Foundations of Computer Science | 2005

Error correction via linear programming

Emmanuel J. Candès; Mark Rudelson; Terence Tao; Roman Vershynin

Suppose we wish to transmit a vector f ∈ R^n reliably. A frequently discussed approach consists in encoding f with an m by n coding matrix A. Assume now that a fraction of the entries of Af are corrupted in a completely arbitrary fashion by an error e. We do not know which entries are affected nor do we know how they are affected. Is it possible to recover f exactly from the corrupted m-dimensional vector y = Af + e?
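The paper's answer is that, under suitable conditions on A, exact recovery is possible by solving the ℓ1-minimization problem min_g ‖y − Ag‖₁, which is a linear program. A minimal sketch of such a decoder, assuming scipy is available (the matrix sizes, random seed, and corruption pattern below are illustrative assumptions, not the paper's experiments):

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(A, y):
    """Estimate f from y = A f + e by solving min_g ||y - A g||_1 as an LP.

    Variables: g (free, length n) and t (>= 0, length m), with the
    constraint |y - A g| <= t componentwise; minimize sum(t).
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])
    I = np.eye(m)
    # A g - t <= y  and  -A g - t <= -y  together encode |y - A g| <= t.
    A_ub = np.block([[A, -I], [-A, -I]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n]

rng = np.random.default_rng(0)
m, n = 20, 4
A = rng.standard_normal((m, n))      # illustrative Gaussian coding matrix
f = rng.standard_normal(n)
e = np.zeros(m)
e[[3, 11]] = [5.0, -2.0]             # two arbitrarily corrupted entries
f_hat = l1_decode(A, A @ f + e)
```

With few enough corruptions relative to m, the LP minimizer typically coincides with f exactly; the paper quantifies when this is guaranteed.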


Journal of the ACM | 2007

Sampling from large matrices: An approach through geometric functional analysis

Mark Rudelson; Roman Vershynin

We study random submatrices of a large matrix A. We show how to approximately compute A from its random submatrix of the smallest possible size O(r log r) with a small error in the spectral norm, where r = ‖A‖_F² / ‖A‖_2² is the numerical rank of A. The numerical rank is always bounded by, and is a stable relaxation of, the rank of A. This yields an asymptotically optimal guarantee in an algorithm for computing low-rank approximations of A. We also prove asymptotically optimal estimates on the spectral norm and the cut-norm of random submatrices of A. The result for the cut-norm yields a slight improvement on the best-known sample complexity for an approximation algorithm for MAX-2CSP problems. We use methods of probability in Banach spaces, in particular the law of large numbers for operator-valued random variables.
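The numerical rank r = ‖A‖_F² / ‖A‖_2² from the abstract is directly computable; a small numpy sketch (the example matrices are assumptions for illustration):

```python
import numpy as np

def numerical_rank(A):
    """Numerical (stable) rank: squared Frobenius norm over squared spectral norm."""
    return np.linalg.norm(A, "fro") ** 2 / np.linalg.norm(A, 2) ** 2

# An orthogonal matrix has all singular values equal, so r equals the rank.
print(numerical_rank(np.eye(5)))                       # approximately 5

# One dominant singular value pulls the numerical rank close to 1,
# even though the ordinary rank is still 5.
print(numerical_rank(np.diag([10.0, 0.1, 0.1, 0.1, 0.1])))  # approximately 1.0004
```

This illustrates why the numerical rank is a stable relaxation: small singular values barely contribute, while the ordinary rank counts them all.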


Conference on Information Sciences and Systems | 2006

Sparse reconstruction by convex relaxation: Fourier and Gaussian measurements

Mark Rudelson; Roman Vershynin

This paper proves the best known guarantees for exact reconstruction of a sparse signal f from few non-adaptive universal linear measurements. We consider Fourier measurements (a random sample of frequencies of f) and random Gaussian measurements. The reconstruction method that has recently gained momentum in sparse approximation theory is to relax this highly non-convex problem to a convex problem and then solve it as a linear program. It is an open question what the best guarantees are for the reconstruction problem to be equivalent to its convex relaxation. Recent work shows that the number of measurements k(r, n) needed to exactly reconstruct any r-sparse signal f of length n from its linear measurements with convex relaxation is usually O(r poly log(n)). However, known guarantees involve huge constants, in spite of the very good performance of the algorithms in practice. In an attempt to reconcile theory with practice, we prove the first guarantees for universal measurements (i.e., which work for all sparse functions) with reasonable constants. For Gaussian measurements, k(r, n) ≲ 11.7 r [1.5 + log(n/r)], which is optimal up to constants. For Fourier measurements, we prove the best known bound k(r, n) = O(r log(n) · log²(r) · log(r log n)), which is optimal within the log log n and log³ r factors. Our arguments are based on the technique of geometric functional analysis and probability in Banach spaces.
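Treating the Gaussian bound k(r, n) ≲ 11.7 r [1.5 + log(n/r)] as a concrete measurement budget (an assumption for illustration — the paper states it up to its own conditions, and ≲ hides lower-order terms), one can estimate how many measurements suffice:

```python
import math

def gaussian_measurements(r, n):
    """Upper bound 11.7 * r * (1.5 + log(n/r)) on the number of Gaussian
    measurements sufficient for convex relaxation to recover any r-sparse
    signal of length n, rounded up to an integer."""
    return math.ceil(11.7 * r * (1.5 + math.log(n / r)))

# e.g. a 10-sparse signal of length 10,000 needs on the order of
# a thousand Gaussian measurements, far fewer than n itself:
print(gaussian_measurements(10, 10_000))
```

Since the bound grows like r log(n/r), the measurement count stays nearly linear in the sparsity r, which is what "optimal up to constants" refers to.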


International Mathematics Research Notices | 2005

Geometric approach to error-correcting codes and reconstruction of signals

Mark Rudelson; Roman Vershynin

The results of this paper can be stated in three equivalent ways—in terms of the sparse recovery problem, the error-correction problem, and the problem of existence of certain extremal (neighborly) polytopes. Error-correcting codes are used in modern technology to protect information from errors. Information is formed by finite words over some alphabet F. The encoder transforms an n-letter word x into an m-letter word y with m > n. The decoder must be able to recover x correctly when up to r letters of y are corrupted in any way. Such an encoder-decoder pair is called an (n, m, r)-error-correcting code. The development of algorithmically efficient error-correcting codes has attracted the attention of engineers, computer scientists, and applied mathematicians for the past five decades. Known constructions involve deep algebraic and combinatorial methods; see [34, 35, 36]. This paper develops an approach to error-correcting codes from the viewpoint of geometric functional analysis (asymptotic convex geometry). It thus belongs to a common ground of coding theory, signal processing, combinatorial geometry, and geometric functional analysis. Our argument, outlined in Section 3, may be of independent interest in geometric functional analysis. Our main focus is on words over the alphabet F = R or C. In applications, these words may be formed of the coefficients of some signal (such as an image or audio).


IEEE Transactions on Information Theory | 2013

Reconstruction From Anisotropic Random Measurements

Mark Rudelson; Shuheng Zhou

Random matrices are widely used in sparse recovery problems, and the relevant properties of matrices with i.i.d. entries are well understood. This paper discusses the recently introduced restricted eigenvalue (RE) condition, which is among the most general assumptions on the matrix guaranteeing recovery. We prove a reduction principle showing that the RE condition can be guaranteed by checking the restricted isometry on a certain family of low-dimensional subspaces. This principle allows us to establish the RE condition for several broad classes of random matrices with dependent entries, including random matrices with sub-Gaussian rows and nontrivial covariance structure, as well as matrices with independent rows and uniformly bounded entries.
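The flavor of "restricted isometry on low-dimensional subspaces" can be illustrated by brute force on a tiny matrix: check how far A is from an isometry on every s-dimensional coordinate subspace. This sketch enumerates column supports, which is an illustrative stand-in for the paper's family of subspaces and is feasible only for very small sizes:

```python
import itertools
import numpy as np

def restricted_singular_values(A, s):
    """Smallest and largest singular values of A restricted to every
    s-dimensional coordinate subspace (i.e. every size-s column support)."""
    n = A.shape[1]
    lo, hi = np.inf, 0.0
    for S in itertools.combinations(range(n), s):
        sv = np.linalg.svd(A[:, list(S)], compute_uv=False)
        lo, hi = min(lo, sv[-1]), max(hi, sv[0])
    return lo, hi

# An orthogonal matrix is a perfect isometry on every subspace,
# so both extremes equal 1.
lo, hi = restricted_singular_values(np.eye(6), 3)
```

For a random matrix with many fewer rows than columns, keeping lo and hi close to 1 over all sparse supports is exactly the restricted-isometry-type property that the reduction principle leverages.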


Positivity | 2000

Distances Between Non-symmetric Convex Bodies and the \(MM^* \)-estimate

Mark Rudelson

Let K, D be n-dimensional convex bodies. Define the distance between K and D as

d(K, D) = \inf \{ \lambda \mid TK \subset D + x \subset \lambda \cdot TK \},

where the infimum is taken over all invertible linear operators T and vectors x.


arXiv: Functional Analysis | 2011

Non-asymptotic Theory of Random Matrices: Extreme Singular Values

Mark Rudelson; Roman Vershynin


Israel Journal of Mathematics | 1997

Contact points of convex bodies

Mark Rudelson


Duke Mathematical Journal | 2015

Delocalization of eigenvectors of random matrices with independent entries

Mark Rudelson; Roman Vershynin


Israel Journal of Mathematics | 1999

Almost orthogonal submatrices of an orthogonal matrix

Mark Rudelson

Collaboration


Dive into Mark Rudelson's collaboration.

Top Co-Authors

Adam D. Smith

Pennsylvania State University


Alex Samorodnitsky

Hebrew University of Jerusalem
