Joel A. Tropp
California Institute of Technology
Publications
Featured research published by Joel A. Tropp.
IEEE Transactions on Information Theory | 2007
Joel A. Tropp; Anna C. Gilbert
This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m²) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
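The greedy procedure described in this abstract can be sketched in a few lines. This is a minimal illustration of generic OMP, not the paper's exact algorithm or analysis; the function name and the least-squares refit via `numpy.linalg.lstsq` are implementation choices of this sketch.

```python
import numpy as np

def omp(A, y, m):
    """Orthogonal matching pursuit: greedily pick m columns of A to
    explain y, re-solving a least-squares fit over the chosen columns
    at every step so the residual stays orthogonal to them."""
    residual = y.copy()
    support = []          # indices of selected columns (atoms)
    coef = np.zeros(0)
    for _ in range(m):
        # Select the column most correlated with the current residual.
        correlations = np.abs(A.T @ residual)
        correlations[support] = 0.0       # never reselect an atom
        support.append(int(np.argmax(correlations)))
        # Refit: least squares over all atoms chosen so far.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

With a Gaussian measurement matrix whose number of rows comfortably exceeds m ln d, this sketch typically recovers an m-sparse signal exactly, matching the regime the abstract describes.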
IEEE Transactions on Information Theory | 2004
Joel A. Tropp
This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries. It provides a sufficient condition under which both OMP and Donoho's basis pursuit (BP) paradigm can recover the optimal representation of an exactly sparse signal. It leverages this theory to show that both OMP and BP succeed for every sparse input signal from a wide class of dictionaries. These quasi-incoherent dictionaries offer a natural generalization of incoherent dictionaries, and the cumulative coherence function is introduced to quantify the level of incoherence. This analysis unifies all the recent results on BP and extends them to OMP. Furthermore, the paper develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal. From there, it argues that OMP is an approximation algorithm for the sparse problem over a quasi-incoherent dictionary. That is, for every input signal, OMP calculates a sparse approximant whose error is only a small factor worse than the minimal error that can be attained with the same number of terms.
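The cumulative coherence function mentioned above measures, for a dictionary of unit-norm atoms, the largest total correlation between any single atom and any m other atoms. A small sketch of that quantity, under the standard definition μ₁(m) = max over atoms ψ of the sum of the m largest |⟨ψ, φ⟩| with φ ≠ ψ (the function name and vectorized computation are this sketch's own choices):

```python
import numpy as np

def cumulative_coherence(Phi, m):
    """Cumulative coherence mu_1(m) of a dictionary Phi whose columns
    are unit-norm atoms: the worst-case sum of absolute inner products
    between one atom and its m most correlated distinct atoms."""
    G = np.abs(Phi.T @ Phi)      # absolute Gram matrix
    np.fill_diagonal(G, 0.0)     # exclude each atom against itself
    # For each atom, sum its m largest correlations; take the maximum.
    top_m = -np.sort(-G, axis=1)[:, :m]
    return float(top_m.sum(axis=1).max())
```

Note that μ₁(1) reduces to the ordinary coherence (the largest off-diagonal Gram entry), and an orthonormal dictionary has μ₁(m) = 0 for every m, which is why incoherent dictionaries appear as a special case of the quasi-incoherent class.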
SIAM Review | 2011
Nathan Halko; Per-Gunnar Martinsson; Joel A. Tropp
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast to O(mnk) for classical algorithms.
IEEE Transactions on Information Theory | 2006
Joel A. Tropp
Signal Processing | 2006
Joel A. Tropp; Anna C. Gilbert; M. Strauss
Proceedings of the IEEE | 2010
Joel A. Tropp; Stephen J. Wright
IEEE Transactions on Information Theory | 2010
Joel A. Tropp; Jason N. Laska; Marco F. Duarte; Justin K. Romberg; Richard G. Baraniuk
Foundations of Computational Mathematics | 2012
Joel A. Tropp
Communications of the ACM | 2010
Deanna Needell; Joel A. Tropp
Signal Processing | 2006
Joel A. Tropp
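The two-stage randomized framework surveyed in the SIAM Review paper above (sample the range, then compress and factor deterministically) can be sketched compactly. This is a minimal illustration under assumed choices, not the paper's full algorithm: the function name, the fixed Gaussian test matrix, and the oversampling parameter p = 5 are all assumptions of this sketch.

```python
import numpy as np

def randomized_low_rank(A, k, p=5, rng=None):
    """Rank-k approximation of A via randomized range finding:
    sample range(A) with a Gaussian test matrix, orthonormalize,
    then compute an exact SVD of the small compressed matrix."""
    rng = np.random.default_rng(0) if rng is None else rng
    m, n = A.shape
    # Stage 1: capture the action of A on k + p random directions.
    Omega = rng.standard_normal((n, k + p))
    Y = A @ Omega
    Q, _ = np.linalg.qr(Y)           # orthonormal basis for the sample
    # Stage 2: compress A to the subspace and factor deterministically.
    B = Q.T @ A                      # small (k+p) x n matrix
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U_b[:, :k], s[:k], Vt[:k, :]
```

When A has numerical rank at most k, the Gaussian sample spans its range almost surely, so the compressed factorization reproduces A up to floating-point error; for general matrices the small oversampling p controls the accuracy, as the paper's error analysis makes precise.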