Network

Latest external collaborations at the country level.

Hotspot

Research topics where Michael P. Friedlander is active.

Publication


Featured research published by Michael P. Friedlander.


SIAM Journal on Scientific Computing | 2008

Probing the Pareto Frontier for Basis Pursuit Solutions

Ewout van den Berg; Michael P. Friedlander

The basis pursuit problem seeks a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise (BPDN) fits the least-squares problem only approximately, and a single parameter determines a curve that traces the optimal trade-off between the least-squares fit and the one-norm of the solution. We prove that this curve is convex and continuously differentiable over all points of interest, and show that it gives an explicit relationship to two other optimization problems closely related to BPDN. We describe a root-finding algorithm for finding arbitrary points on this curve; the algorithm is suitable for problems that are large scale and for those that are in the complex domain. At each iteration, a spectral gradient-projection method approximately minimizes a least-squares problem with an explicit one-norm constraint. Only matrix-vector operations are required. The primal-dual solution of this problem gives function and derivative information needed for the root-finding method. Numerical experiments on a comprehensive set of test problems demonstrate that the method scales well to large problems.
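
The root-finding idea can be sketched compactly. The following Python fragment is a minimal caricature, not the authors' implementation: solve_lasso and project_l1_ball are hypothetical helpers standing in for the spectral gradient-projection subproblem solver, and the Newton step uses the Pareto-curve slope φ′(τ) = −‖Aᵀr‖∞ / ‖r‖₂ described in the paper.

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection onto the l1-norm ball of radius tau."""
    if tau <= 0:
        return np.zeros_like(v)
    if np.abs(v).sum() <= tau:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def solve_lasso(A, b, tau, iters=500):
    """Stand-in LASSO subproblem solver (projected gradient):
    minimize ||Ax - b||_2 subject to ||x||_1 <= tau."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L for the least-squares gradient
    for _ in range(iters):
        x = project_l1_ball(x - step * (A.T @ (A @ x - b)), tau)
    return x

def bpdn_root_find(A, b, sigma, tau=0.0, tol=1e-6, max_newton=20):
    """Newton root-finding on the Pareto curve phi(tau) = ||b - A x_tau||_2,
    seeking the tau at which phi(tau) = sigma (the BPDN solution)."""
    for _ in range(max_newton):
        x = solve_lasso(A, b, tau)
        r = b - A @ x
        phi = np.linalg.norm(r)
        if abs(phi - sigma) < tol:
            break
        dphi = -np.linalg.norm(A.T @ r, np.inf) / phi  # slope of the Pareto curve
        tau += (sigma - phi) / dphi                    # Newton update
    return x, tau
```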


IEEE Transactions on Information Theory | 2010

Theoretical and Empirical Results for Recovery From Multiple Measurements

Ewout van den Berg; Michael P. Friedlander

The joint-sparse recovery problem aims to recover, from sets of compressed measurements, unknown sparse matrices with nonzero entries restricted to a subset of rows. This is an extension of the single-measurement-vector (SMV) problem widely studied in compressed sensing. We study the recovery properties of two algorithms for problems with noiseless data and exact-sparse representation. First, we show that recovery using sum-of-norm minimization cannot exceed the uniform-recovery rate of sequential SMV using ℓ1 minimization, and that there are problems that can be solved with one approach, but not the other. Second, we study the performance of the ReMBo algorithm (M. Mishali and Y. Eldar, "Reduce and boost: Recovering arbitrary sets of jointly sparse vectors," IEEE Trans. Signal Process., vol. 56, no. 10, pp. 4692-4702, Oct. 2008) in combination with ℓ1 minimization, and show how recovery improves as more measurements are taken. From this analysis, it follows that having more measurements than the number of linearly independent nonzero rows does not improve the potential theoretical recovery rate.
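
For concreteness, the sum-of-norms recovery the paper analyzes can be written down in a few lines of CVXPY. This is a hedged sketch; the problem dimensions below are made up for the example, not taken from the paper.

```python
import cvxpy as cp
import numpy as np

# Illustrative joint-sparse (MMV) recovery by sum-of-norms minimization.
rng = np.random.default_rng(0)
m, n, k, s = 40, 100, 4, 6
A = rng.standard_normal((m, n))
X_true = np.zeros((n, k))
rows = rng.choice(n, s, replace=False)            # shared row support
X_true[rows] = rng.standard_normal((s, k))
B = A @ X_true                                    # noiseless measurements

# Minimize the sum of row norms of X subject to exact data fit A X = B.
X = cp.Variable((n, k))
problem = cp.Problem(cp.Minimize(cp.sum(cp.norm(X, 2, axis=1))),
                     [A @ X == B])
problem.solve()
print("recovered:", np.allclose(X.value, X_true, atol=1e-5))
```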


IEEE Transactions on Information Theory | 2012

Recovering Compressively Sampled Signals Using Partial Support Information

Michael P. Friedlander; Hassan Mansour; Rayan Saab; Ozgur Yilmaz

We study recovery conditions of weighted ℓ1 minimization for signal reconstruction from compressed sensing measurements when partial support information is available. We show that if at least 50% of the (partial) support information is accurate, then weighted ℓ1 minimization is stable and robust under weaker sufficient conditions than the analogous conditions for standard ℓ1 minimization. Moreover, weighted ℓ1 minimization provides better upper bounds on the reconstruction error in terms of the measurement noise and the compressibility of the signal to be recovered. We illustrate our results with extensive numerical experiments on synthetic data and real audio and video signals.
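
A hedged sketch of the weighted ℓ1 program, written with a generic convex solver (CVXPY): indices believed to be in the support get a smaller weight. The weight ω and tolerance below are illustrative parameters, not values from the paper.

```python
import cvxpy as cp
import numpy as np

def weighted_l1_recover(A, b, support_estimate, omega=0.5, eps=1e-3):
    """Weighted l1 minimization: indices in the (partial) support estimate
    receive the smaller weight omega; all other entries get weight 1."""
    w = np.ones(A.shape[1])
    w[support_estimate] = omega                   # down-weight trusted indices
    x = cp.Variable(A.shape[1])
    prob = cp.Problem(cp.Minimize(cp.norm(cp.multiply(w, x), 1)),
                      [cp.norm(A @ x - b, 2) <= eps])
    prob.solve()
    return x.value
```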


SIAM Journal on Optimization | 2011

Sparse Optimization with Least-Squares Constraints

Ewout van den Berg; Michael P. Friedlander

The use of convex optimization for the recovery of sparse signals from incomplete or compressed data is now common practice. Motivated by the success of basis pursuit in recovering sparse vectors, new formulations have been proposed that take advantage of different types of sparsity. In this paper we propose an efficient algorithm for solving a general class of sparsifying formulations. For several common types of sparsity we provide applications, along with details on how to apply the algorithm, and experimental results.
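
As one concrete instance of the general class of sparsifying formulations, here is a hedged CVXPY sketch that swaps the one-norm for the nuclear norm, promoting low rank rather than entrywise sparsity, with a least-squares constraint on sampled entries. The problem sizes and noise level are hypothetical.

```python
import cvxpy as cp
import numpy as np

# Low-rank recovery from noisy sampled entries: min ||X||_* s.t. data misfit <= sigma.
rng = np.random.default_rng(1)
n, r = 20, 2
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target
mask = (rng.random((n, n)) < 0.6).astype(float)                     # observed entries
noise = 1e-3 * rng.standard_normal((n, n))
B = mask * (X_true + noise)                                         # noisy samples

X = cp.Variable((n, n))
sigma = 1e-3 * np.sqrt(mask.sum())                                  # noise budget
prob = cp.Problem(cp.Minimize(cp.normNuc(X)),
                  [cp.norm(cp.multiply(mask, X) - B, "fro") <= sigma])
prob.solve()
print("relative error:", np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```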


SIAM Journal on Scientific Computing | 2012

Hybrid Deterministic-Stochastic Methods for Data Fitting

Michael P. Friedlander; Mark W. Schmidt

Many structured data-fitting applications require the solution of an optimization problem involving a sum over a potentially large number of measurements. Incremental gradient algorithms offer inexpensive iterations by sampling a subset of the terms in the sum; these methods can make great progress initially, but often slow as they approach a solution. In contrast, full-gradient methods achieve steady convergence at the expense of evaluating the full objective and gradient on each iteration. We explore hybrid methods that exhibit the benefits of both approaches. Rate-of-convergence analysis shows that by controlling the sample size in an incremental-gradient algorithm, it is possible to maintain the steady convergence rates of full-gradient methods. We detail a practical quasi-Newton implementation based on this approach. Numerical experiments illustrate its potential benefits.
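
A minimal numpy sketch of the sample-size-control idea for a least-squares fit: early iterations use cheap sampled gradients, and the sample grows geometrically so later iterations approach full-gradient accuracy. Plain gradient steps stand in for the paper's quasi-Newton implementation, and the growth rate and batch schedule are illustrative assumptions.

```python
import numpy as np

def hybrid_gradient(A, b, growth=1.1, iters=200):
    """Gradient descent on f(x) = (1/2m)||Ax - b||^2 with a geometrically
    growing sample of the m measurements: cheap early iterations, near
    full-gradient accuracy later."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    x = np.zeros(n)
    step = m / np.linalg.norm(A, 2) ** 2          # ~1/L for the averaged objective
    batch = max(1.0, m / 100)
    for _ in range(iters):
        idx = rng.choice(m, min(int(batch), m), replace=False)
        g = A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)   # sampled gradient
        x -= step * g
        batch *= growth                           # increase the sample size
    return x
```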


SIAM Journal on Optimization | 2007

Exact Regularization of Convex Programs

Michael P. Friedlander; Paul Tseng

The regularization of a convex program is exact if all solutions of the regularized problem are also solutions of the original problem for all values of the regularization parameter below some positive threshold. For a general convex program, we show that the regularization is exact if and only if a certain selection problem has a Lagrange multiplier. Moreover, the regularization parameter threshold is inversely related to the Lagrange multiplier. We use this result to generalize an exact regularization result of Ferris and Mangasarian [Appl. Math. Optim., 23 (1991), pp. 266-273] involving a linearized selection problem. We also use it to derive necessary and sufficient conditions for exact penalization, similar to those obtained by Bertsekas [Math. Programming, 9 (1975), pp. 87-99] and by Bertsekas, Nedic, and Ozdaglar [Convex Analysis and Optimization, Athena Scientific, Belmont, MA, 2003]. When the regularization is not exact, we derive error bounds on the distance from the regularized solution to the original solution set. We also show that existence of a “weak sharp minimum” is in some sense close to being necessary for exact regularization. We illustrate the main result with numerical experiments on the ℓ1 regularization of benchmark (degenerate) linear programs and semidefinite/second-order cone programs. The experiments demonstrate the usefulness of ℓ1 regularization in finding sparse solutions.
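
To make the definition concrete, here is a hedged toy illustration using a Tikhonov (ℓ2) regularizer on a small LP whose solution set is a whole segment; the paper's experiments instead use ℓ1 regularization of benchmark problems, and δ = 0.1 is an arbitrary choice that happens to be exact for this toy problem.

```python
import cvxpy as cp
import numpy as np

# The LP min{x1 + x2 : x1 + x2 >= 1, x >= 0} has a segment of solutions;
# exact regularization asks whether a regularized solution stays in that set.
x = cp.Variable(2)
constraints = [cp.sum(x) >= 1, x >= 0]
lp = cp.Problem(cp.Minimize(cp.sum(x)), constraints)
lp.solve()
p_star = lp.value                                    # optimal value of the LP

delta = 0.1                                          # regularization parameter
reg = cp.Problem(cp.Minimize(cp.sum(x) + delta / 2 * cp.sum_squares(x)),
                 constraints)
reg.solve()
print("regularized solution:", x.value)              # selects (0.5, 0.5)
print("still LP-optimal:", abs(np.sum(x.value) - p_star) < 1e-6)
```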


SIAM Journal on Optimization | 2005

A Two-Sided Relaxation Scheme for Mathematical Programs with Equilibrium Constraints

Victor DeMiguel; Michael P. Friedlander; Francisco J. Nogales; Stefan Scholtes


Optimization Methods & Software | 2008

Computing non-negative tensor factorizations

Michael P. Friedlander; Kathrin Hatz


IEEE Signal Processing Magazine | 2012

Fighting the Curse of Dimensionality: Compressive Sensing in Exploration Seismology

Felix J. Herrmann; Michael P. Friedlander; Ozgur Yilmaz


Geophysics | 2008

New insights into one-norm solvers from the Pareto curve

Gilles Hennenfent; Ewout van den Berg; Michael P. Friedlander; Felix J. Herrmann


Collaboration


Dive into Michael P. Friedlander's collaborations.

Top Co-Authors

Ewout van den Berg | University of British Columbia
Mark W. Schmidt | University of British Columbia
Felix J. Herrmann | Georgia Institute of Technology
Ives Macêdo | Instituto Nacional de Matemática Pura e Aplicada
Ting Kei Pong | Hong Kong Polytechnic University
Gabriel Goh | University of British Columbia
Ozgur Yilmaz | University of British Columbia
James V. Burke | University of Washington