Sahand Negahban
University of California, Berkeley
Publications
Featured research published by Sahand Negahban.
Neural Information Processing Systems | 2009
Sahand Negahban; Bin Yu; Martin J. Wainwright; Pradeep Ravikumar
High-dimensional statistical inference deals with models in which the number of parameters p is comparable to or larger than the sample size n. Since it is usually impossible to obtain consistent procedures unless p/n → 0, a line of recent work has studied models with various types of structure (e.g., sparse vectors; block-structured matrices; low-rank matrices; Markov assumptions). In such settings, a general approach to estimation is to solve a regularized convex program (known as a regularized M-estimator) which combines a loss function (measuring how well the model fits the data) with some regularization function that encourages the assumed structure. The goal of this paper is to provide a unified framework for establishing consistency and convergence rates for such regularized M-estimators under high-dimensional scaling. We state one main theorem and show how it can be used to re-derive several existing results, and also to obtain several new results on consistency and convergence rates. Our analysis also identifies two key properties of loss and regularization functions, referred to as restricted strong convexity and decomposability, that ensure the corresponding regularized M-estimators have fast convergence rates.
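To make the template concrete, here is a minimal, illustrative Python sketch (not code from the paper) of a regularized M-estimation objective: a least-squares loss measuring data fit plus a decomposable regularizer, with the ℓ1 norm for sparse vectors and the nuclear norm for low-rank matrices as examples. The function names and the choice of squared loss are assumptions made for illustration.

```python
# Illustrative sketch (assumed example, not code from the paper) of a
# regularized M-estimation objective: loss(theta) + lam * R(theta).
import numpy as np

def squared_loss(theta, X, y):
    """Least-squares loss: measures how well the linear model X @ theta fits y."""
    n = X.shape[0]
    return np.sum((y - X @ theta) ** 2) / (2 * n)

def l1_norm(theta):
    """Decomposable regularizer encouraging sparse parameter vectors."""
    return np.sum(np.abs(theta))

def nuclear_norm(Theta):
    """Decomposable regularizer encouraging low-rank matrices (sum of singular values)."""
    return np.sum(np.linalg.svd(Theta, compute_uv=False))

def m_objective(theta, X, y, lam, regularizer=l1_norm):
    """Regularized M-estimator objective: data-fit loss plus lam times a structure-inducing penalty."""
    return squared_loss(theta, X, y) + lam * regularizer(theta)

# Example evaluation on random data with a sparse candidate parameter.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
theta = np.zeros(200); theta[:5] = 1.0
y = X @ theta + 0.1 * rng.standard_normal(50)
value = m_objective(theta, X, y, lam=0.1)
```

Minimizing such an objective with any convex solver yields the estimator; the paper's conditions (restricted strong convexity of the loss and decomposability of the regularizer) are what control how close that minimizer is to the true parameter.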
Annals of Statistics | 2012
Alekh Agarwal; Sahand Negahban; Martin J. Wainwright
Many statistical M-estimators are based on convex optimization problems formed by the combination of a data-dependent loss function with a norm-based regularizer. We analyze the convergence rates of projected gradient and composite gradient methods for solving such problems, working within a high-dimensional framework that allows the data dimension p to grow with (and possibly exceed) the sample size n. This high-dimensional structure precludes the usual global assumptions (namely, strong convexity and smoothness conditions) that underlie much of classical optimization analysis. We define appropriately restricted versions of these conditions, and show that they are satisfied with high probability for various statistical models. Under these conditions, our theory guarantees that projected gradient descent has a globally geometric rate of convergence up to the statistical precision of the model, meaning the typical distance between the true unknown parameter θ* and an optimal solution …
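As an illustration of the kind of method analyzed here, the following Python sketch runs composite gradient descent on a Lasso instance (an assumed special case, not the paper's general algorithm): each iteration takes a gradient step on the smooth loss and then applies the proximal operator of the ℓ1 regularizer, and the distance to the true parameter typically shrinks geometrically until it levels off at the statistical precision.

```python
# Illustrative sketch, assuming a Lasso instance: composite gradient descent
# alternates a gradient step on the smooth loss with soft-thresholding, the
# proximal operator of the l1 regularizer.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def composite_gradient_lasso(X, y, lam, theta_star=None, n_iter=200):
    n, p = X.shape
    step = n / (np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant of the loss gradient
    theta = np.zeros(p)
    errors = []
    for _ in range(n_iter):
        grad = X.T @ (X @ theta - y) / n                       # gradient of the smooth loss
        theta = soft_threshold(theta - step * grad, step * lam)
        if theta_star is not None:
            errors.append(np.linalg.norm(theta - theta_star))  # distance to the true parameter
    return theta, errors

# High-dimensional example: p > n with a 5-sparse true parameter.
rng = np.random.default_rng(0)
n, p = 100, 400
theta_star = np.zeros(p); theta_star[:5] = 1.0
X = rng.standard_normal((n, p))
y = X @ theta_star + 0.1 * rng.standard_normal(n)
theta_hat, errors = composite_gradient_lasso(X, y, lam=0.05, theta_star=theta_star)
```

In this kind of run the recorded errors decrease roughly geometrically for the first iterations and then flatten out, which is the behavior the restricted strong convexity and smoothness conditions are meant to guarantee.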
IEEE Transactions on Information Theory | 2011
Sahand Negahban; Martin J. Wainwright
Monthly Notices of the Royal Astronomical Society | 2013
Henrik Brink; Joseph W. Richards; Dovi Poznanski; Joshua S. Bloom; John A. Rice; Sahand Negahban; Martin J. Wainwright
Neurocomputing | 2018
Uri Shaham; Yutaro Yamada; Sahand Negahban
Operations Research | 2017
Sahand Negahban; Sewoong Oh; Devavrat Shah
Circulation: Cardiovascular Quality and Outcomes | 2016
Bobak Mortazavi; Nicholas S. Downing; Emily M. Bucholz; Kumar Dharmarajan; Ajay Manhapra; Shu-Xia Li; Sahand Negahban; Harlan M. Krumholz
Allerton Conference on Communication, Control, and Computing | 2015
Yu Lu; Sahand Negahban
Conference on Information and Knowledge Management | 2012
Sahand Negahban; Benjamin I. P. Rubinstein; Jim Gemmell
Allerton Conference on Communication, Control, and Computing | 2012
Sahand Negahban; Devavrat Shah