Paul Grigas
Massachusetts Institute of Technology
Publication
Featured research published by Paul Grigas.
Mathematical Programming | 2016
Robert M. Freund; Paul Grigas
We present new results for the Frank–Wolfe method (also known as the conditional gradient method). We derive computational guarantees for arbitrary step-size sequences, which are then applied to various step-size rules, including simple averaging and constant step-sizes. We also develop step-size rules and computational guarantees that depend naturally on the warm-start quality of the initial (and subsequent) iterates. Our results include computational guarantees for both duality/bound gaps and the so-called FW gaps. Lastly, we present complexity bounds in the presence of approximate computation of gradients and/or linear optimization subproblem solutions.
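As a rough illustration of the method analyzed in this paper, the sketch below runs the Frank–Wolfe (conditional gradient) method on a least-squares objective over the unit simplex, using the standard 2/(k+2) step-size rule and tracking the FW gap. The objective, constraint set, and step-size rule are illustrative assumptions, not the paper's specific setup.

```python
# Minimal sketch of the Frank-Wolfe (conditional gradient) method.
# Illustrative setup: minimize f(x) = 0.5 * ||Ax - b||^2 over the unit simplex,
# with the simple pre-determined step-size rule 2/(k+2). The paper analyzes
# this and other step-size rules in far greater generality.
import numpy as np

def frank_wolfe_simplex(A, b, num_iters=200):
    n = A.shape[1]
    x = np.zeros(n)
    x[0] = 1.0                       # start at a vertex of the simplex
    for k in range(num_iters):
        grad = A.T @ (A @ x - b)     # gradient of 0.5 * ||Ax - b||^2
        # Linear optimization subproblem over the simplex: the minimizer is
        # the vertex (coordinate) with the smallest gradient entry.
        i = int(np.argmin(grad))
        s = np.zeros(n)
        s[i] = 1.0
        fw_gap = grad @ (x - s)      # FW gap: upper bound on the optimality gap
        step = 2.0 / (k + 2.0)       # simple step-size rule
        x = (1.0 - step) * x + step * s
    return x, fw_gap

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    b = rng.standard_normal(50)
    x, gap = frank_wolfe_simplex(A, b)
    print("final FW gap:", gap)
```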
Annals of Statistics | 2017
Robert M. Freund; Paul Grigas; Rahul Mazumder
In this paper we analyze boosting algorithms in linear regression from a new perspective: that of modern first-order methods in convex optimization. We show that classic boosting algorithms in linear regression, namely the incremental forward stagewise algorithm (FSε) and least squares boosting (LS-Boost(ε)), can be viewed as subgradient descent to minimize the loss function defined as the maximum absolute correlation between the features and residuals. We also propose a modification of FSε that yields an algorithm for the Lasso, and that may be easily extended to an algorithm that computes the Lasso path for different values of the regularization parameter. Furthermore, we show that these new algorithms for the Lasso may also be interpreted as the same master algorithm (subgradient descent), applied to a regularized version of the maximum absolute correlation loss function. We derive novel, comprehensive computational guarantees for several boosting algorithms in linear regression (including LS-Boost(ε) and FSε). A minimal sketch of these boosting updates appears after the publication list below.
Knowledge Discovery and Data Mining | 2017
Paul Grigas; Alfonso Lobos; Zheng Wen; Kuang-chih Lee
Archive | 2013
Robert M. Freund; Paul Grigas
SIAM Journal on Optimization | 2017
Robert M. Freund; Paul Grigas; Rahul Mazumder
arXiv: Optimization and Control | 2017
Adam N. Elmachtoub; Paul Grigas
arXiv: Machine Learning | 2014
Robert M. Freund; Paul Grigas; Rahul Mazumder
arXiv: Optimization and Control | 2018
Robert M. Freund; Paul Grigas; Rahul Mazumder
arXiv: Optimization and Control | 2018
Alfonso Lobos; Paul Grigas; Zheng Wen; Kuang-chih Lee
Springer Berlin Heidelberg | 2014
Robert M. Freund; Paul Grigas
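The sketch below illustrates the two boosting updates discussed in the Annals of Statistics abstract above: FSε, which moves the coefficient of the feature most correlated with the residual by a fixed amount ε, and LS-Boost(ε), which shrinks the univariate least-squares fit to the residual by ε. The data generation, feature scaling, and stopping rule here are illustrative assumptions, not the paper's exact specification.

```python
# Minimal sketch of FS_epsilon and LS-Boost(epsilon) for linear regression.
# Both repeatedly select the feature most correlated with the current residual;
# they differ only in how far they move the selected coefficient.
import numpy as np

def boosting_linear_regression(X, y, eps=0.01, num_iters=500, rule="FS"):
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()                             # current residuals
    for _ in range(num_iters):
        corr = X.T @ r                       # correlations between features and residuals
        j = int(np.argmax(np.abs(corr)))     # feature most correlated with the residual
        if rule == "FS":                     # FS_epsilon: fixed small step in the sign direction
            delta = eps * np.sign(corr[j])
        else:                                # LS-Boost(epsilon): shrink the least-squares fit by eps
            delta = eps * corr[j] / (X[:, j] @ X[:, j])
        beta[j] += delta
        r -= delta * X[:, j]                 # update residuals
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 10))
    y = X @ np.array([2.0, -1.0] + [0.0] * 8) + 0.1 * rng.standard_normal(100)
    print(np.round(boosting_linear_regression(X, y, rule="LS"), 2))
```

Viewed through the lens of the abstract, each iteration is a subgradient-type step on the maximum absolute correlation between features and residuals, with ε playing the role of the step-size.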