Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Amin Jalali is active.

Publication


Featured research published by Amin Jalali.


IEEE Transactions on Information Theory | 2015

Simultaneously Structured Models With Application to Sparse and Low-Rank Matrices

Samet Oymak; Amin Jalali; Maryam Fazel; Yonina C. Eldar; Babak Hassibi

Recovering structured models (e.g., sparse or group-sparse vectors, low-rank matrices) from a few linear observations has been well studied recently. In various applications in signal processing and machine learning, the model of interest is structured in several ways at once, for example, a matrix that is simultaneously sparse and low rank. Often norms that promote the individual structures are known and allow for recovery using an order-wise optimal number of measurements (e.g., the ℓ1 norm for sparsity, the nuclear norm for matrix rank). Hence, it is reasonable to minimize a combination of such norms. We show that, surprisingly, using multiobjective optimization with these norms can do no better, order-wise, than exploiting only one of the structures, thus revealing a fundamental limitation in sample complexity. This result suggests that to fully exploit the multiple structures, we need an entirely new convex relaxation. Further, specializing our results to the case of sparse and low-rank matrices, we show that a nonconvex formulation recovers the model from very few measurements (on the order of the degrees of freedom), whereas the convex problem combining the ℓ1 and nuclear norms requires many more measurements, illustrating a gap between the performance of the convex and nonconvex recovery problems. Our framework applies to arbitrary structure-inducing norms as well as to a wide range of measurement ensembles. This allows us to give sample complexity bounds for problems such as sparse phase retrieval and low-rank tensor completion.
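As a rough illustration of the combined-norm convex program discussed above, here is a minimal sketch, assuming a CVXPY formulation with a Gaussian measurement operator; the problem sizes, weights, and the noiseless equality constraint are illustrative choices, not taken from the paper.

```python
# Minimal sketch (not the paper's code): recover a matrix that is both sparse
# and low rank from a few linear measurements by minimizing a weighted
# combination of the entrywise l1 norm and the nuclear norm.
# Measurement model, weights, and problem sizes are illustrative assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r, m = 20, 2, 200                  # matrix size, rank, number of measurements

# Ground-truth matrix that is simultaneously low rank and sparse.
U = np.zeros((n, r))
U[:5, :] = rng.standard_normal((5, r))
X0 = U @ U.T                          # rank <= r, supported on a 5x5 block

A = rng.standard_normal((m, n * n))   # Gaussian measurement operator
y = A @ X0.ravel(order="F")           # column-major to match cp.vec below

X = cp.Variable((n, n))
lam1, lam2 = 1.0, 1.0                 # illustrative weights on the two norms
objective = lam1 * cp.sum(cp.abs(X)) + lam2 * cp.norm(X, "nuc")
problem = cp.Problem(cp.Minimize(objective), [A @ cp.vec(X) == y])
problem.solve()

print("relative error:", np.linalg.norm(X.value - X0) / np.linalg.norm(X0))
```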


SIAM Journal on Optimization | 2017

Variational Gram Functions: Convex Analysis and Optimization

Amin Jalali; Maryam Fazel; Lin Xiao

We propose a new class of convex penalty functions, called variational Gram functions (VGFs), that can promote pairwise relations, such as orthogonality, among a set of vectors in a vector space. These functions can serve as regularizers in convex optimization problems arising from hierarchical classification, multitask learning, and estimating vectors with disjoint supports, among other applications. We study convexity of VGFs and give efficient characterizations of their convex conjugates, subdifferentials, and proximal operators. We discuss efficient optimization algorithms for regularized loss minimization problems where the loss admits a common, yet simple, variational representation and the regularizer is a VGF. These algorithms enjoy a simple kernel trick, an efficient line search, as well as computational advantages over first-order methods based on the subdifferential or proximal maps. We also establish a general representer theorem for such learning problems. Lastly, numerical experiments on a hierarchical classification problem are presented to demonstrate the effectiveness of VGFs and the associated optimization algorithms.
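To make the orthogonality-promoting idea concrete, the following is a small NumPy sketch of one simple pairwise penalty of this flavor, the sum of absolute inner products between pairs of columns; treating it as representative of the VGF class is an assumption made here for illustration only.

```python
# Illustrative sketch, not the paper's formulation: an orthogonality-promoting
# penalty on the columns of X, namely the sum of absolute pairwise inner
# products. It is zero exactly when the columns are mutually orthogonal,
# which is the kind of pairwise relation the abstract says VGFs can promote.
import numpy as np

def pairwise_overlap_penalty(X: np.ndarray) -> float:
    """Sum of |<x_i, x_j>| over all pairs i < j of columns of X."""
    G = X.T @ X                         # Gram matrix of the columns
    off_diag = G - np.diag(np.diag(G))  # zero out the diagonal entries
    return 0.5 * np.abs(off_diag).sum()

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 5)))
print(pairwise_overlap_penalty(Q))                              # ~0 for orthogonal columns
print(pairwise_overlap_penalty(rng.standard_normal((50, 5))))   # noticeably larger
```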


Conference on Decision and Control | 2013

Noisy estimation of simultaneously structured models: Limitations of convex relaxation

Samet Oymak; Amin Jalali; Maryam Fazel; Babak Hassibi

Models or signals exhibiting low dimensional behavior (e.g., sparse signals, low rank matrices) play an important role in signal processing and system identification. In this paper, we focus on models that have multiple structures simultaneously; e.g., matrices that are both low rank and sparse, arising in phase retrieval, quadratic compressed sensing, and cluster detection in social networks. We consider the estimation of such models from observations corrupted by additive Gaussian noise. We provide tight upper and lower bounds on the mean squared error (MSE) of a convex denoising program that uses a combination of regularizers to induce multiple structures. In the case of low rank and sparse matrices, we quantify the gap between the MSE of the convex program and the best achievable error, and we present a simple (nonconvex) thresholding algorithm that outperforms its convex counterpart and achieves almost optimal MSE. This paper extends prior work on a different but related problem: recovering simultaneously structured models from noiseless compressed measurements, where bounds on the number of required measurements were given. The present work shows a similar fundamental limitation exists in a statistical denoising setting.
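For concreteness, a denoising program of the kind the abstract refers to might be written as follows; the specific weights and the unconstrained form are assumptions for illustration, not the paper's exact formulation.

```latex
% Illustrative form of a combined-regularizer convex denoiser: observe
% $Y = X_0 + Z$ with additive Gaussian noise $Z$, and estimate
\[
  \hat{X} \;=\; \arg\min_{X} \;\; \tfrac{1}{2}\,\|Y - X\|_F^2
  \;+\; \lambda_1 \|X\|_1 \;+\; \lambda_2 \|X\|_* ,
\]
% where $\|X\|_1$ is the entrywise $\ell_1$ norm and $\|X\|_*$ the nuclear
% norm; the paper bounds the mean squared error
% $\mathbb{E}\,\|\hat{X} - X_0\|_F^2$ of such programs.
```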


IEEE Global Conference on Signal and Information Processing | 2013

A convex method for learning d-valued models

Amin Jalali; Maryam Fazel

Learning structurally constrained models, such as sparse or group-sparse vectors and low-rank matrices, is an important topic in machine learning. In this work, we consider vectors with only a few distinct values, which we call d-valued vectors. This structure is useful when there are relations between the covariates in a regression task, or similarities between features in a classification problem. We introduce the d-variation norm as a penalty to promote this structure, and we provide useful optimization tools for this norm, such as its proximal operator, computed by solving a convex quadratic program. We also present extensions such as matrix norms and illustrate the use of this norm in a classification problem.
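The toy sketch below only illustrates the d-valued structure itself (a vector whose entries take at most d distinct values); it is not the paper's d-variation norm or its proximal operator, and the k-means quantization used here is purely a hypothetical stand-in.

```python
# Toy illustration of the d-valued structure, not the paper's method:
# force a vector's entries to take at most d distinct values by clustering
# the entries (1-D k-means via scikit-learn).
import numpy as np
from sklearn.cluster import KMeans

def nearest_d_valued(x: np.ndarray, d: int, seed: int = 0) -> np.ndarray:
    """Return a vector whose entries take at most d distinct values,
    obtained by clustering the entries of x."""
    km = KMeans(n_clusters=d, n_init=10, random_state=seed).fit(x.reshape(-1, 1))
    return km.cluster_centers_[km.labels_].ravel()

x = np.array([0.10, 0.12, 0.95, 1.02, 0.98, 0.09, 2.00])
x3 = nearest_d_valued(x, d=3)
print(np.unique(np.round(x3, 3)))   # at most 3 distinct values
```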


International Symposium on Information Theory | 2017

Error bounds for Bregman denoising and structured natural parameter estimation

Amin Jalali; James Saunderson; Maryam Fazel; Babak Hassibi

We analyze an estimator based on the Bregman divergence for recovery of structured models from additive noise. The estimator can be seen as a regularized maximum likelihood estimator for an exponential family where the natural parameter is assumed to be structured. For all such Bregman denoising estimators, we provide an error bound for a natural associated error measure. Our error bound makes it possible to analyze a wide range of estimators, such as those in proximal denoising and inverse covariance matrix estimation, in a unified manner. In the case of proximal denoising, we exactly recover the existing tight normalized mean squared error bounds. In sparse precision matrix estimation, our bounds provide optimal scaling with interpretable constants in terms of the associated error measure.
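For reference, the Bregman divergence and its Gaussian special case can be written as follows; the orientation of the divergence and the form of the regularized estimator shown here are assumptions used only to connect to the proximal denoising special case mentioned in the abstract.

```latex
% The Bregman divergence generated by a convex, differentiable function
% $\varphi$ is
\[
  D_\varphi(x, y) \;=\; \varphi(x) - \varphi(y) - \langle \nabla\varphi(y),\, x - y \rangle .
\]
% With $\varphi(x) = \tfrac{1}{2}\|x\|_2^2$ this reduces to
% $D_\varphi(x, y) = \tfrac{1}{2}\|x - y\|_2^2$, so a regularized estimator of
% the (illustrative, assumed) form
\[
  \hat{x} \;=\; \arg\min_{x} \; D_\varphi(x, y) + \lambda f(x)
\]
% recovers ordinary proximal denoising in the Gaussian case, consistent with
% the abstract's remark that proximal denoising is covered by the analysis.
```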


arXiv | 2017

Subspace Clustering with Missing and Corrupted Data

Zachary B. Charles; Amin Jalali; Rebecca Willett


arXiv: Machine Learning | 2015

Relative Density and Exact Recovery in Heterogeneous Stochastic Block Models

Amin Jalali; Qiyang Han; Ioana Dumitriu; Maryam Fazel


arXiv: Machine Learning | 2018

Missing Data in Sparse Transition Matrix Estimation for Sub-Gaussian Vector Autoregressive Processes.

Amin Jalali; Rebecca Willett


Advances in Computing and Communications | 2018

Sparse Transition Matrix Estimation for Sub-Gaussian Autoregressive Processes with Missing Data

Amin Jalali; Rebecca Willett


IEEE Data Science Workshop (DSW) | 2018

Sparse Subspace Clustering with Missing and Corrupted Data

Zachary B. Charles; Amin Jalali; Rebecca Willett

Collaboration


Dive into Amin Jalali's collaborations.

Top Co-Authors

Maryam Fazel
University of Washington

Rebecca Willett
University of Wisconsin-Madison

Babak Hassibi
California Institute of Technology

Samet Oymak
California Institute of Technology

Zachary B. Charles
University of Wisconsin-Madison

James Saunderson
Massachusetts Institute of Technology

Yonina C. Eldar
Technion – Israel Institute of Technology