Publication


Featured research published by Paul Grigas.


Mathematical Programming | 2016

New analysis and results for the Frank–Wolfe method

Robert M. Freund; Paul Grigas

We present new results for the Frank–Wolfe method (also known as the conditional gradient method). We derive computational guarantees for arbitrary step-size sequences, which are then applied to various step-size rules, including simple averaging and constant step-sizes. We also develop step-size rules and computational guarantees that depend naturally on the warm-start quality of the initial (and subsequent) iterates. Our results include computational guarantees for both duality/bound gaps and the so-called FW gaps. Lastly, we present complexity bounds in the presence of approximate computation of gradients and/or linear optimization subproblem solutions.
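
As a reading aid for the abstract above, the following is a minimal Python sketch of a generic Frank–Wolfe (conditional gradient) loop using the classic pre-set step-size rule gamma_k = 2/(k + 2). The objective (least squares over the unit simplex) and the linear-minimization oracle chosen here are illustrative assumptions, not the paper's setting or code.

    import numpy as np

    def frank_wolfe_simplex(grad, x0, num_iters=200):
        # Generic Frank-Wolfe (conditional gradient) loop over the unit simplex.
        # grad: gradient oracle for the objective; x0: feasible starting point.
        x = x0.copy()
        fw_gap = np.inf
        for k in range(num_iters):
            g = grad(x)
            i = int(np.argmin(g))                # LMO over the simplex: best vertex e_i
            v = np.zeros_like(x)
            v[i] = 1.0
            fw_gap = float(g @ (x - v))          # Frank-Wolfe gap, an optimality measure
            gamma = 2.0 / (k + 2.0)              # simple pre-set step-size rule
            x = (1.0 - gamma) * x + gamma * v    # convex combination stays feasible
        return x, fw_gap

    # Illustrative use: least squares over the simplex (assumed toy problem).
    rng = np.random.default_rng(0)
    A = rng.normal(size=(30, 10))
    b = rng.normal(size=30)
    grad = lambda x: A.T @ (A @ x - b)
    x_hat, gap = frank_wolfe_simplex(grad, np.full(10, 0.1))
    print("final Frank-Wolfe gap:", gap)

The only structural requirements are a gradient oracle and a linear-minimization oracle over the feasible set; the quantity fw_gap computed inside the loop is the "FW gap" optimality measure mentioned in the abstract.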


Annals of Statistics | 2017

A New Perspective on Boosting in Linear Regression via Subgradient Optimization and Relatives

Robert M. Freund; Paul Grigas; Rahul Mazumder

In this paper we analyze boosting algorithms in linear regression from a new perspective: that of modern first-order methods in convex optimization. We show that classic boosting algorithms in linear regression, namely the incremental forward stagewise algorithm (FS_ε) and least squares boosting (LS-Boost(ε)), can be viewed as subgradient descent to minimize the loss function defined as the maximum absolute correlation between the features and residuals. We also propose a modification of FS_ε that yields an algorithm for the Lasso, and that may be easily extended to an algorithm that computes the Lasso path for different values of the regularization parameter. Furthermore, we show that these new algorithms for the Lasso may also be interpreted as the same master algorithm (subgradient descent), applied to a regularized version of the maximum absolute correlation loss function. We derive novel, comprehensive computational guarantees for several boosting algorithms in linear regression (including LS-Boost(ε) and FS_ε).
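
To make the update rule concrete, here is a small Python sketch of the incremental forward stagewise algorithm (FS_ε) described above: each iteration finds the feature most correlated with the current residual and moves its coefficient by ±ε. The synthetic data, the value of ε, and the iteration count are illustrative assumptions rather than anything taken from the paper.

    import numpy as np

    def forward_stagewise(X, y, eps=0.01, num_iters=2000):
        # Incremental forward stagewise regression (FS_eps) sketch.
        # Each step nudges, by +/- eps, the coefficient of the feature most
        # correlated with the current residual.
        p = X.shape[1]
        beta = np.zeros(p)
        r = y.astype(float).copy()            # residual for beta = 0
        for _ in range(num_iters):
            corr = X.T @ r                    # correlations between features and residual
            j = int(np.argmax(np.abs(corr)))  # feature of maximum absolute correlation
            step = eps * np.sign(corr[j])
            beta[j] += step
            r -= step * X[:, j]               # keep the residual consistent with beta
        return beta

    # Illustrative use on synthetic data (assumed, not from the paper).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 20))
    beta_true = np.zeros(20)
    beta_true[:3] = [2.0, -1.5, 1.0]
    y = X @ beta_true + 0.1 * rng.normal(size=100)
    print(np.round(forward_stagewise(X, y, eps=0.02)[:5], 2))

The driving quantity max_j |X_j^T r| is the maximum absolute correlation between features and residuals, i.e. the loss function on which the abstract interprets FS_ε as performing subgradient descent.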


Knowledge Discovery and Data Mining | 2017

Profit Maximization for Online Advertising Demand-Side Platforms

Paul Grigas; Alfonso Lobos; Zheng Wen; Kuang-chih Lee


Archive | 2013

New Analysis and Results for the Conditional Gradient Method

Robert M. Freund; Paul Grigas


SIAM Journal on Optimization | 2017

An Extended Frank–Wolfe Method with “In-Face” Directions, and Its Application to Low-Rank Matrix Completion

Robert M. Freund; Paul Grigas; Rahul Mazumder


arXiv: Optimization and Control | 2017

Smart "Predict, then Optimize".

Adam N. Elmachtoub; Paul Grigas


arXiv: Machine Learning | 2014

AdaBoost and Forward Stagewise Regression are First-Order Convex Optimization Methods

Robert M. Freund; Paul Grigas; Rahul Mazumder


arXiv: Optimization and Control | 2018

Condition Number Analysis of Logistic Regression, and its Implications for Standard First-Order Solution Methods

Robert M. Freund; Paul Grigas; Rahul Mazumder


arXiv: Optimization and Control | 2018

Optimal Bidding, Allocation and Budget Spending for a Demand Side Platform Under Many Auction Types

Alfonso Lobos; Paul Grigas; Zheng Wen; Kuang-chih Lee


Springer Berlin Heidelberg | 2014

New analysis and results for the Frank–Wolfe method

Robert M. Freund; Paul Grigas


Collaboration


Dive into Paul Grigas's collaborations.

Top Co-Authors

Robert M. Freund
Massachusetts Institute of Technology

Rahul Mazumder
Massachusetts Institute of Technology

Alfonso Lobos
University of California