
Publication


Featured research published by Benjamin Recht.


Foundations of Computational Mathematics | 2009

Exact Matrix Completion via Convex Optimization

Emmanuel J. Candès; Benjamin Recht

We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys $m \ge C\,n^{1.2}\,r\log n$ for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.
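
This convex program is straightforward to prototype with an off-the-shelf solver. The following is a minimal sketch, not code from the paper: it assumes the cvxpy package and uses a synthetic rank-2 matrix with a random sampling mask standing in for the observed entries.

    import numpy as np
    import cvxpy as cp

    # Synthetic instance: a random n x n matrix of rank r, with roughly
    # 60% of its entries observed (the sampling set Omega).
    rng = np.random.default_rng(0)
    n, r = 30, 2
    M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
    mask = (rng.random((n, n)) < 0.6).astype(float)

    # Recover M as the minimum-nuclear-norm matrix that agrees with the
    # observed entries.
    X = cp.Variable((n, n))
    problem = cp.Problem(cp.Minimize(cp.normNuc(X)),
                         [cp.multiply(mask, X) == cp.multiply(mask, M)])
    problem.solve()
    print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))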


SIAM Review | 2010

Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization

Benjamin Recht; Maryam Fazel; Pablo A. Parrilo

The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.
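
The nuclear-norm heuristic itself fits in a few lines. Below is a hedged sketch for a random instance of the affine problem min rank(X) subject to A(X) = b; the use of cvxpy and the Gaussian measurement ensemble are illustrative assumptions, not the authors' tooling.

    import numpy as np
    import cvxpy as cp

    # Random instance: Gaussian linear measurements of an n x n rank-r matrix.
    rng = np.random.default_rng(1)
    n, r, p = 15, 2, 180
    X0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
    A = rng.standard_normal((p, n * n))     # each row is one linear functional
    b = A @ X0.ravel(order="F")             # column-major, to match cp.vec

    # Convex relaxation: minimize the nuclear norm over the affine space
    # A(X) = b instead of minimizing the (NP-hard) rank directly.
    X = cp.Variable((n, n))
    problem = cp.Problem(cp.Minimize(cp.normNuc(X)), [A @ cp.vec(X) == b])
    problem.solve()
    print("relative error:", np.linalg.norm(X.value - X0) / np.linalg.norm(X0))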


Foundations of Computational Mathematics | 2012

The Convex Geometry of Linear Inverse Problems

Venkat Chandrasekaran; Benjamin Recht; Pablo A. Parrilo; Alan S. Willsky

In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered includes those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases from many technical fields such as sparse vectors (signal processing, statistics) and low-rank matrices (control, statistics), as well as several others including sums of a few permutation matrices (ranked elections, multiobject tracking), low-rank tensors (computer vision, neuroscience), orthogonal matrices (machine learning), and atomic measures (system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure the resulting optimization problems can be solved or approximated via semidefinite programming. The quality of these approximations affects the number of measurements required for recovery, and this tradeoff is characterized via some examples. Thus this work extends the catalog of simple models (beyond sparse vectors and low-rank matrices) that can be recovered from limited linear information via tractable convex programming.
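
As a concrete sanity check on the atomic-norm construction, take the atomic set to be the signed standard basis vectors, whose convex hull is the cross-polytope; the induced atomic norm should then coincide with the l1 norm. A small illustrative sketch (the cvxpy formulation is an assumption made for illustration):

    import numpy as np
    import cvxpy as cp

    # Atomic set {+e_i, -e_i}: write x as a nonnegative combination of atoms
    # and minimize the total weight. The optimal value is the atomic norm,
    # which for this atomic set should equal the l1 norm of x.
    rng = np.random.default_rng(2)
    n = 5
    x = rng.standard_normal(n)
    atoms = np.hstack([np.eye(n), -np.eye(n)])   # columns are the atoms

    c = cp.Variable(2 * n, nonneg=True)          # weights on the atoms
    problem = cp.Problem(cp.Minimize(cp.sum(c)), [atoms @ c == x])
    problem.solve()
    print(problem.value, np.abs(x).sum())        # the two values agree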


Communications of the ACM | 2012

Exact matrix completion via convex optimization

Emmanuel J. Candès; Benjamin Recht

Suppose that one observes an incomplete subset of entries selected from a low-rank matrix. When is it possible to complete the matrix and recover the entries that have not been seen? We demonstrate that in very general settings, one can perfectly recover all of the missing entries from most sufficiently large subsets by solving a convex programming problem that finds the matrix with the minimum nuclear norm agreeing with the observed entries. The techniques used in this analysis draw upon parallels in the field of compressed sensing, demonstrating that objects other than signals and images can be perfectly reconstructed from very limited information.


IEEE Transactions on Information Theory | 2013

Compressed Sensing Off the Grid

Gongguo Tang; Badri Narayan Bhaskar; Parikshit Shah; Benjamin Recht

This paper investigates the problem of estimating the frequency components of a mixture of s complex sinusoids from a random subset of n regularly spaced samples. Unlike previous work in compressed sensing, the frequencies are not assumed to lie on a grid, but can assume any values in the normalized frequency domain [0, 1]. An atomic norm minimization approach is proposed to exactly recover the unobserved samples and identify the unknown frequencies, which is then reformulated as an exact semidefinite program. Even with this continuous dictionary, it is shown that O(s log s log n) random samples are sufficient to guarantee exact frequency localization with high probability, provided the frequencies are well separated. Extensive numerical experiments are performed to illustrate the effectiveness of the proposed method.
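
The semidefinite reformulation mentioned above expresses the atomic norm of the sample vector x in C^n through a Hermitian Toeplitz matrix. As a sketch of the standard characterization from this line of work (with Toep(u) denoting the Hermitian Toeplitz matrix whose first column is u):

$$\|x\|_{\mathcal{A}} \;=\; \inf_{u,\,t}\left\{\frac{1}{2n}\operatorname{tr}(\operatorname{Toep}(u)) + \frac{t}{2} \;:\; \begin{bmatrix}\operatorname{Toep}(u) & x\\ x^{*} & t\end{bmatrix}\succeq 0\right\}$$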


Inverse Problems | 2011

Tensor completion and low-n-rank tensor recovery via convex optimization

Silvia Gandy; Benjamin Recht; Isao Yamada

In this paper we consider sparsity on a tensor level, as given by the n-rank of a tensor. In an important sparse-vector approximation problem (compressed sensing) and the low-rank matrix recovery problem, using a convex relaxation technique proved to be a valuable solution strategy. Here, we will adapt these techniques to the tensor setting. We use the n-rank of a tensor as a sparsity measure and consider the low-n-rank tensor recovery problem, i.e. the problem of finding the tensor of the lowest n-rank that fulfills some linear constraints. We introduce a tractable convex relaxation of the n-rank and propose efficient algorithms to solve the low-n-rank tensor recovery problem numerically. The algorithms are based on the Douglas–Rachford splitting technique and its dual variant, the alternating direction method of multipliers.
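
The relaxation referred to here sums the nuclear norms of the mode-n unfoldings of the tensor. A minimal numpy sketch of that surrogate (an illustration, not the authors' code; the helper names are made up):

    import numpy as np

    def unfold(T, mode):
        # Mode-n unfolding: move the chosen axis to the front, then flatten.
        # Column ordering differs across conventions but does not change
        # the singular values.
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def sum_nuclear_norms(T):
        # Convex surrogate for the n-rank: the sum of the nuclear norms
        # of all mode-n unfoldings.
        return sum(np.linalg.norm(unfold(T, mode), "nuc")
                   for mode in range(T.ndim))

    # Hypothetical low-n-rank tensor: a rank-one outer product, so every
    # unfolding has rank 1 and n-rank is (1, 1, 1).
    a, b, c = np.random.default_rng(3).standard_normal((3, 4))
    T = np.einsum("i,j,k->ijk", a, b, c)
    print(sum_nuclear_norms(T))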


Allerton Conference on Communication, Control, and Computing | 2010

Online identification and tracking of subspaces from highly incomplete information

Laura Balzano; Robert D. Nowak; Benjamin Recht


IEEE Transactions on Information Theory | 2014

Blind Deconvolution Using Convex Programming

Ali Ahmed; Benjamin Recht; Justin K. Romberg


IEEE Transactions on Signal Processing | 2013

Atomic Norm Denoising With Applications to Line Spectral Estimation

Badri Narayan Bhaskar; Gongguo Tang; Benjamin Recht


Mathematical Programming | 2011

Null space conditions and thresholds for rank minimization

Benjamin Recht; Weiyu Xu; Babak Hassibi

Collaboration


Dive into Benjamin Recht's collaborations.

Top Co-Authors


Stephen Tu

University of California


Badri Narayan Bhaskar

University of Wisconsin-Madison


Gongguo Tang

Colorado School of Mines


Max Simchowitz

University of California


Horia Mania

University of California


Robert D. Nowak

University of Wisconsin-Madison


Ross Boczar

University of California
