
Publication


Featured research published by Samet Oymak.


IEEE Transactions on Information Theory | 2015

Simultaneously Structured Models With Application to Sparse and Low-Rank Matrices

Samet Oymak; Amin Jalali; Maryam Fazel; Yonina C. Eldar; Babak Hassibi

Recovering structured models (e.g., sparse or group-sparse vectors, low-rank matrices) from a few linear observations has been well studied recently. In various applications in signal processing and machine learning, the model of interest is structured in several ways at once, for example, a matrix that is simultaneously sparse and low rank. Often, norms that promote the individual structures are known and allow recovery using an order-wise optimal number of measurements (e.g., the ℓ1 norm for sparsity, the nuclear norm for matrix rank). Hence, it is reasonable to minimize a combination of such norms. We show that, surprisingly, multiobjective optimization with these norms can do no better, order-wise, than exploiting only one of the structures, thus revealing a fundamental limitation in sample complexity. This result suggests that fully exploiting the multiple structures requires an entirely new convex relaxation. Further, specializing our results to the case of sparse and low-rank matrices, we show that a nonconvex formulation recovers the model from very few measurements (on the order of the degrees of freedom), whereas the convex problem combining the ℓ1 and nuclear norms requires many more measurements, illustrating a gap between the performance of the convex and nonconvex recovery problems. Our framework applies to arbitrary structure-inducing norms as well as to a wide range of measurement ensembles. This allows us to give sample complexity bounds for problems such as sparse phase retrieval and low-rank tensor completion.
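The two structure-inducing norms the abstract combines can be illustrated concretely. The sketch below (not from the paper; the matrix, dimensions, and weight `lam` are illustrative assumptions) builds a simultaneously sparse and low-rank matrix and evaluates both norms:

```python
import numpy as np

# Illustrative example: a matrix that is simultaneously sparse and
# low rank, together with the two norms that promote each structure
# (the l1 norm for entrywise sparsity, the nuclear norm for low rank).
rng = np.random.default_rng(0)

u = np.zeros(20); u[:3] = rng.standard_normal(3)   # sparse left factor
v = np.zeros(20); v[:3] = rng.standard_normal(3)   # sparse right factor
X = np.outer(u, v)                                 # rank-1 AND sparse

l1_norm = np.abs(X).sum()                                # promotes sparsity
nuclear_norm = np.linalg.svd(X, compute_uv=False).sum()  # promotes low rank

# A multiobjective convex relaxation would minimize a fixed combination
# such as the one below subject to the linear measurements; the paper
# shows this cannot beat the better single norm, order-wise.
lam = 0.5
combined = lam * l1_norm + (1 - lam) * nuclear_norm
print(round(float(combined), 4))
```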


International Symposium on Information Theory | 2012

Recovery of sparse 1-D signals from the magnitudes of their Fourier transform

Kishore Jaganathan; Samet Oymak; Babak Hassibi

The problem of signal recovery from the autocorrelation, or equivalently, the magnitudes of the Fourier transform, is of paramount importance in various fields of engineering. In this work, for one-dimensional signals, we give conditions which, when satisfied, allow unique recovery from the autocorrelation with very high probability. In particular, for sparse signals, we develop two non-iterative recovery algorithms. One of them is based on combinatorial analysis, which we prove can recover signals up to sparsity o(n<sup>1/3</sup>) with very high probability, and the other is developed using a convex optimization based framework, which numerical simulations suggest can recover signals up to sparsity o(n<sup>1/2</sup>) with very high probability.
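The equivalence between Fourier-transform magnitudes and the autocorrelation that this abstract relies on can be checked numerically. A minimal sketch (not the paper's recovery algorithm; the signal length and sparsity are illustrative):

```python
import numpy as np

# The identity behind the problem statement: |F(x)|^2, the power
# spectrum, is the Fourier transform of the (circular) autocorrelation
# of x, so the two carry the same information about the signal.
rng = np.random.default_rng(1)
n = 16
x = np.zeros(n)
support = rng.choice(n, size=3, replace=False)   # sparse 1-D signal
x[support] = rng.standard_normal(3)

power_spectrum = np.abs(np.fft.fft(x)) ** 2

# Circular autocorrelation computed directly...
autocorr = np.array([np.dot(x, np.roll(x, -k)) for k in range(n)])
# ...matches the inverse FFT of the power spectrum.
autocorr_via_fft = np.fft.ifft(power_spectrum).real

print(np.allclose(autocorr, autocorr_via_fft))  # True
```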


International Symposium on Information Theory | 2011

A simplified approach to recovery conditions for low rank matrices

Samet Oymak; Karthik Mohan; Maryam Fazel; Babak Hassibi

Recovering sparse vectors and low-rank matrices from noisy linear measurements has been the focus of much recent research. Various reconstruction algorithms have been studied, including ℓ1 and nuclear norm minimization as well as ℓp minimization with p < 1. These algorithms are known to succeed if certain conditions on the measurement map are satisfied. Proofs for the recovery of matrices have so far been much more involved than in the vector case. In this paper, we show how several classes of recovery conditions can be extended from vectors to matrices in a simple and transparent way, leading to the best known restricted isometry and nullspace conditions for matrix recovery. Our results rely on the ability to “vectorize” matrices through the use of a key singular value inequality.


Allerton Conference on Communication, Control, and Computing | 2013

The squared-error of generalized LASSO: A precise analysis

Samet Oymak; Christos Thrampoulidis; Babak Hassibi

We consider the problem of estimating an unknown but structured signal x<sub>0</sub> from its noisy linear observations y = Ax<sub>0</sub> + z ∈ ℝ<sup>m</sup>. A structure-inducing convex function f(·) is associated with the structure of x<sub>0</sub>. We assume that the entries of A are i.i.d. standard normal N(0, 1) and z ~ N(0, σ<sup>2</sup>I<sub>m</sub>). As a measure of performance of an estimate x* of x<sub>0</sub> we consider the “Normalized Square Error” (NSE) ∥x* - x<sub>0</sub>∥<sub>2</sub><sup>2</sup>/σ<sup>2</sup>. For sufficiently small σ, we characterize the exact performance of two different versions of the well-known LASSO algorithm. The first estimator is obtained by solving the problem argmin<sub>x</sub> ∥y - Ax∥<sub>2</sub> + λf(x). As a function of λ, we identify three distinct regions of operation; of these, we argue that “R<sub>ON</sub>” is the most interesting one. When λ ∈ R<sub>ON</sub>, we show that the NSE is D<sub>f</sub>(x<sub>0</sub>, λ)/(m - D<sub>f</sub>(x<sub>0</sub>, λ)) for small σ, where D<sub>f</sub>(x<sub>0</sub>, λ) is the expected squared distance of an i.i.d. standard normal vector to the dilated subdifferential λ · ∂f(x<sub>0</sub>). Second, we consider the more popular estimator argmin<sub>x</sub> 1/2∥y - Ax∥<sub>2</sub><sup>2</sup> + στf(x). We propose a formula for the NSE of this estimator by establishing a suitable mapping between this and the previous estimator over the region R<sub>ON</sub>. As a useful side result, we find explicit formulae for the optimal estimation performance and the optimal penalty parameters λ* and τ*.
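For the sparse case f = ∥·∥<sub>1</sub>, the second estimator above can be sketched with plain proximal gradient descent (ISTA). The solver, step size, and penalty value below are illustrative assumptions, not the paper's analysis:

```python
import numpy as np

# Sketch of the estimator  argmin_x (1/2)||y - A x||_2^2 + sigma*tau*f(x)
# with f = l1 norm, solved by ISTA. Dimensions, penalty, and iteration
# count are illustrative, not values from the paper.
rng = np.random.default_rng(2)
m, n = 80, 40
x0 = np.zeros(n); x0[:5] = rng.standard_normal(5)   # sparse ground truth
A = rng.standard_normal((m, n))                     # i.i.d. N(0, 1) entries
sigma = 0.05
y = A @ x0 + sigma * rng.standard_normal(m)         # noisy observations

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

penalty = sigma * 2.0 * np.sqrt(2 * np.log(n))  # illustrative sigma*tau choice
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/||A||^2 guarantees descent
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)                    # gradient of the quadratic term
    x = soft_threshold(x - step * grad, step * penalty)

print(round(float(np.linalg.norm(x - x0)), 3))  # reconstruction error
```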


International Symposium on Information Theory | 2013

Sparse phase retrieval: Convex algorithms and limitations

Kishore Jaganathan; Samet Oymak; Babak Hassibi

We consider the problem of recovering signals from their power spectral densities. This is a classical problem, referred to in the literature as the phase retrieval problem, and is of paramount importance in many fields of applied science. In general, additional prior information about the signal is required to guarantee unique recovery, as the mapping from signals to power spectral densities is not one-to-one. In this work, we assume that the underlying signals are sparse. Recently, semidefinite programming (SDP) based approaches were explored by various researchers. Simulations of these algorithms strongly suggest that signals up to O(n<sup>1/2-ϵ</sup>) sparsity can be recovered by this technique. In this work, we develop a tractable algorithm based on reweighted ℓ1-minimization that recovers a sparse signal from its power spectral density for significantly higher sparsities, which is unprecedented. We also discuss the limitations of the existing SDP algorithms and provide a combinatorial algorithm which requires significantly fewer “phaseless” measurements to guarantee recovery.


Foundations of Computational Mathematics | 2016

Sharp MSE Bounds for Proximal Denoising

Samet Oymak; Babak Hassibi



IEEE Transactions on Information Theory | 2018

Sharp Time–Data Tradeoffs for Linear Inverse Problems

Samet Oymak; Benjamin Recht; Mahdi Soltanolkotabi


International Conference on Acoustics, Speech, and Signal Processing | 2012

Phase retrieval for sparse signals using rank minimization

Kishore Jaganathan; Samet Oymak; Babak Hassibi



Allerton Conference on Communication, Control, and Computing | 2012

On robust phase retrieval for sparse signals

Kishore Jaganathan; Samet Oymak; Babak Hassibi


International Symposium on Information Theory | 2014

Simple error bounds for regularized noisy linear inverse problems

Christos Thrampoulidis; Samet Oymak; Babak Hassibi


Collaboration


Dive into Samet Oymak's collaborations.

Top Co-Authors

Babak Hassibi, California Institute of Technology
Christos Thrampoulidis, California Institute of Technology
Kishore Jaganathan, California Institute of Technology
M. Amin Khajehnejad, California Institute of Technology
Mahdi Soltanolkotabi, University of Southern California
Benjamin Recht, University of California
Maryam Fazel, University of Washington
Amin Jalali, University of Washington
Ramya Korlakai Vinayak, California Institute of Technology