Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sahand Negahban is active.

Publication


Featured research published by Sahand Negahban.


Neural Information Processing Systems | 2009

A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers

Sahand Negahban; Bin Yu; Martin J. Wainwright; Pradeep Ravikumar

High-dimensional statistical inference deals with models in which the number of parameters p is comparable to or larger than the sample size n. Since it is usually impossible to obtain consistent procedures unless p/n → 0, a line of recent work has studied models with various types of structure (e.g., sparse vectors; block-structured matrices; low-rank matrices; Markov assumptions). In such settings, a general approach to estimation is to solve a regularized convex program (known as a regularized M-estimator) which combines a loss function (measuring how well the model fits the data) with some regularization function that encourages the assumed structure. The goal of this paper is to provide a unified framework for establishing consistency and convergence rates for such regularized M-estimators under high-dimensional scaling. We state one main theorem and show how it can be used to re-derive several existing results, and also to obtain several new results on consistency and convergence rates. Our analysis also identifies two key properties of loss and regularization functions, referred to as restricted strong convexity and decomposability, that ensure the corresponding regularized M-estimators have fast convergence rates.
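The decomposability property highlighted in the abstract can be checked numerically for the ℓ1 norm, the regularizer behind the Lasso. The sketch below is an illustration with made-up dimensions, not code from the paper: for a vector supported on a subset S of coordinates and a perturbation supported on the complement of S, the norm splits additively.

```python
import numpy as np

# Decomposability of the l1 norm: if theta is supported on S and gamma on the
# complement of S, then ||theta + gamma||_1 = ||theta||_1 + ||gamma||_1.
rng = np.random.default_rng(0)
p = 10
S = np.array([0, 3, 7])                 # model subspace: coordinates in S

theta = np.zeros(p)
theta[S] = rng.standard_normal(S.size)  # vector inside the model subspace

gamma = rng.standard_normal(p)
gamma[S] = 0.0                          # perturbation entirely off the support

lhs = np.abs(theta + gamma).sum()
rhs = np.abs(theta).sum() + np.abs(gamma).sum()
assert np.isclose(lhs, rhs)             # the l1 norm decomposes exactly
```

Norms without this structure (e.g., the ℓ2 norm) do not split this way, which is why decomposability is singled out as a property of the regularizer rather than a generic fact about norms.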


Annals of Statistics | 2012

Fast global convergence of gradient methods for high-dimensional statistical recovery

Alekh Agarwal; Sahand Negahban; Martin J. Wainwright

Many statistical M-estimators are based on convex optimization problems formed by the combination of a data-dependent loss function with a norm-based regularizer. We analyze the convergence rates of projected gradient and composite gradient methods for solving such problems, working within a high-dimensional framework that allows the data dimension p to grow with (and possibly exceed) the sample size n. This high-dimensional structure precludes the usual global assumptions, namely strong convexity and smoothness conditions, that underlie much of classical optimization analysis. We define appropriately restricted versions of these conditions, and show that they are satisfied with high probability for various statistical models. Under these conditions, our theory guarantees that projected gradient descent has a globally geometric rate of convergence up to the statistical precision of the model, meaning the typical distance between the true unknown parameter θ* and an optimal solution θ̂.
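As a rough illustration of a composite gradient method applied to one such regularized program, the sketch below runs ISTA-style iterations on a Lasso problem with p > n. This is not the paper's code; the problem sizes, noise level, step size 1/L, and regularization weight are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# High-dimensional setup: more parameters p than samples n, sparse truth.
n, p, k = 100, 200, 5
X = rng.standard_normal((n, p))
theta_star = np.zeros(p)
theta_star[:k] = 1.0
y = X @ theta_star + 0.1 * rng.standard_normal(n)

lam = 0.1                                    # regularization weight (not tuned)
L = np.linalg.eigvalsh(X.T @ X / n).max()    # Lipschitz constant of the loss gradient

def objective(theta):
    # Composite objective: smooth least-squares loss + l1 regularizer.
    return 0.5 / n * np.sum((y - X @ theta) ** 2) + lam * np.abs(theta).sum()

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

theta = np.zeros(p)
history = [objective(theta)]
for _ in range(500):
    grad = X.T @ (X @ theta - y) / n         # gradient of the smooth loss
    theta = soft_threshold(theta - grad / L, lam / L)  # composite gradient step
    history.append(objective(theta))
```

With step size 1/L each iteration is guaranteed not to increase the objective, and in well-conditioned regimes the decrease is geometric until the iterates reach the statistical precision of the problem, after which further optimization cannot improve estimation accuracy.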


IEEE Transactions on Information Theory | 2011

Simultaneous Support Recovery in High Dimensions: Benefits and Perils of Block ℓ1/ℓ∞-Regularization

Sahand Negahban; Martin J. Wainwright


Monthly Notices of the Royal Astronomical Society | 2013

Using machine learning for discovery in synoptic survey imaging data

Henrik Brink; Joseph W. Richards; Dovi Poznanski; Joshua S. Bloom; John A. Rice; Sahand Negahban; Martin J. Wainwright


Neurocomputing | 2018

Understanding adversarial training: Increasing local stability of supervised models through robust optimization

Uri Shaham; Yutaro Yamada; Sahand Negahban


Operations Research | 2017

Rank centrality: Ranking from pairwise comparisons

Sahand Negahban; Sewoong Oh; Devavrat Shah


Circulation-cardiovascular Quality and Outcomes | 2016

Analysis of Machine Learning Techniques for Heart Failure Readmissions

Bobak Mortazavi; Nicholas S. Downing; Emily M. Bucholz; Kumar Dharmarajan; Ajay Manhapra; Shu-Xia Li; Sahand Negahban; Harlan M. Krumholz


Allerton Conference on Communication, Control, and Computing | 2015

Individualized rank aggregation using nuclear norm regularization

Yu Lu; Sahand Negahban


Conference on Information and Knowledge Management | 2012

Scaling Multiple-Source Entity Resolution Using Statistically Efficient Transfer Learning

Sahand Negahban; Benjamin I. P. Rubinstein; Jim Gemmell


Allerton Conference on Communication, Control, and Computing | 2012

Learning Sparse Boolean Polynomials

Sahand Negahban; Devavrat Shah

Collaboration


Dive into Sahand Negahban's collaborations.

Top Co-Authors

Alexandros G. Dimakis (University of Texas at Austin)
Devavrat Shah (Massachusetts Institute of Technology)
Ethan R. Elenberg (University of Texas at Austin)
Rajiv Khanna (University of Texas at Austin)
Joydeep Ghosh (University of Texas at Austin)