Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Philippe Rigollet is active.

Publication


Featured research published by Philippe Rigollet.


Annals of Statistics | 2011

Exponential Screening and optimal rates of sparse estimation

Philippe Rigollet; Alexandre B. Tsybakov

In high-dimensional linear regression, the goal pursued here is to estimate an unknown regression function using linear combinations of a suitable set of covariates. One of the key assumptions for the success of any statistical procedure in this setup is that the linear combination is sparse in some sense, for example, that it involves only a few covariates. We consider a general, not necessarily linear, regression with Gaussian noise and study a related question: finding a linear combination of approximating functions which is at the same time sparse and has small mean squared error (MSE). We introduce a new estimation procedure, called Exponential Screening, that shows remarkable adaptation properties. It adapts to the linear combination that optimally balances MSE and sparsity, whether the latter is measured in terms of the number of non-zero entries in the combination (the $\ell_0$ norm) or in terms of the global weight of the combination (the $\ell_1$ norm). The power of this adaptation result is illustrated by showing that Exponential Screening solves optimally and simultaneously all the problems of aggregation in Gaussian regression that have been discussed in the literature. Moreover, we show that the performance of the Exponential Screening estimator cannot be improved in a minimax sense, even if the optimal sparsity is known in advance. The theoretical and numerical superiority of Exponential Screening compared to state-of-the-art sparse procedures is also discussed.


Annals of Statistics | 2013

Optimal detection of sparse principal components in high dimension

Quentin Berthet; Philippe Rigollet

We perform a finite sample analysis of the detection levels for sparse principal components of a high-dimensional covariance matrix. Our minimax optimal test is based on a sparse eigenvalue statistic. However, computing this test is known to be NP-complete in general, and we describe a computationally efficient alternative test using convex relaxations. Our relaxation is also proved to detect sparse principal components at near optimal detection levels, and it performs well on simulated datasets. Moreover, using polynomial time reductions from theoretical computer science, we bring significant evidence that our results cannot be improved, thus revealing an inherent trade-off between statistical and computational performance.


Annals of Statistics | 2008

Learning by mirror averaging

Anatoli Juditsky; Philippe Rigollet; Alexandre B. Tsybakov

Given a collection of M different estimators or classifiers, we study the problem of model selection type aggregation, i.e., we construct a new estimator or classifier, called aggregate, which is nearly as good as the best among them with respect to a given risk criterion. We define our aggregate by a simple recursive procedure which solves an auxiliary stochastic linear programming problem related to the original non-linear one and constitutes a special case of the mirror averaging algorithm. We show that the aggregate satisfies sharp oracle inequalities under some general assumptions. The results allow one to construct in an easy way sharp adaptive nonparametric estimators for several problems including regression, classification and density estimation.


Mathematical Methods of Statistics | 2007

Linear and convex aggregation of density estimators

Philippe Rigollet; Alexandre B. Tsybakov

We study the problem of finding the best linear and convex combination of M estimators of a density with respect to the mean squared risk. We suggest aggregation procedures and we prove sharp oracle inequalities for their risks, i.e., oracle inequalities with leading constant 1. We also obtain lower bounds showing that these procedures attain optimal rates of aggregation. As an example, we consider aggregation of multivariate kernel density estimators with different bandwidths. We show that linear and convex aggregates mimic the kernel oracles in an asymptotically exact sense. We prove that, for Pinsker's kernel, the proposed aggregates are sharp asymptotically minimax simultaneously over a large scale of Sobolev classes of densities. Finally, we provide simulations demonstrating the performance of the convex aggregation procedure.


Statistical Science | 2012

Sparse estimation by exponential weighting

Philippe Rigollet; Alexandre B. Tsybakov


Bernoulli | 2009

Optimal rates for plug-in estimators of density level sets

Philippe Rigollet; Régis Vert


Annals of Statistics | 2013

The multi-armed bandit problem with covariates

Vianney Perchet; Philippe Rigollet


Annals of Statistics | 2012

Kullback–Leibler aggregation and misspecified generalized linear models

Philippe Rigollet


Annals of Statistics | 2012

Deviation optimal learning using greedy Q-aggregation

Dong Dai; Philippe Rigollet; Tong Zhang


Annals of Statistics | 2014

Optimal learning with Q-aggregation

Guillaume Lecué; Philippe Rigollet
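The abstract contrasts two notions of sparsity of a coefficient vector: the number of non-zero entries (the $\ell_0$ "norm") and the global weight (the $\ell_1$ norm). A minimal sketch of the two measures, purely for illustration:

```python
# Illustrative only: the two sparsity measures contrasted in the abstract,
# computed for the coefficient vector theta of a linear combination.

def l0_sparsity(theta, tol=1e-12):
    """Number of non-zero entries (the l0 "norm")."""
    return sum(1 for t in theta if abs(t) > tol)

def l1_sparsity(theta):
    """Global weight of the combination (the l1 norm)."""
    return sum(abs(t) for t in theta)

theta = [0.0, 1.5, 0.0, -0.5, 0.0]
print(l0_sparsity(theta))  # 2 non-zero coefficients
print(l1_sparsity(theta))  # total weight 2.0
```

Exponential Screening adapts to whichever of the two measures gives the better MSE/sparsity trade-off, without being told which one.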
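The sparse eigenvalue statistic underlying the test maximizes, over supports of a fixed size k, the top eigenvalue of the corresponding principal submatrix of the sample covariance. A brute-force sketch (an illustration of the statistic, not the paper's procedure; the exponential enumeration is exactly the computational obstacle the abstract describes):

```python
# Sketch: lambda_max^k(S) = max over supports of size k of the largest
# eigenvalue of the principal submatrix of S. Brute force is exponential in
# general, which is why the paper turns to convex relaxations.
from itertools import combinations

def top_eigenvalue(mat, iters=500):
    """Power iteration for the top eigenvalue of a small symmetric PSD matrix."""
    n = len(mat)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w) or 1.0
        v = [x / lam for x in w]
    return lam

def sparse_eigenvalue(S, k):
    """Maximize the top eigenvalue over all principal submatrices of size k."""
    n = len(S)
    best = float("-inf")
    for supp in combinations(range(n), k):
        sub = [[S[i][j] for j in supp] for i in supp]
        best = max(best, top_eigenvalue(sub))
    return best
```

On a diagonal covariance the statistic simply picks out the largest diagonal entries, which makes the sketch easy to sanity-check by hand.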
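The core mechanism behind aggregation schemes of this family is exponential weighting: candidates with smaller cumulative loss receive exponentially larger convex weights. A simplified sketch in that spirit (not the paper's exact mirror averaging recursion; the temperature parameter is a placeholder):

```python
# Sketch of exponential-weights aggregation: convex weights proportional to
# exp(-cumulative loss / temperature), then a convex combination of the
# M candidates' predictions. Illustrative, not the paper's exact procedure.
import math

def exp_weights(cum_losses, temperature=1.0):
    m = min(cum_losses)  # subtract the min for numerical stability
    w = [math.exp(-(L - m) / temperature) for L in cum_losses]
    s = sum(w)
    return [x / s for x in w]

def aggregate(predictions, weights):
    """Convex combination of the M candidates' predictions."""
    return sum(p * w for p, w in zip(predictions, weights))

w = exp_weights([0.1, 0.1, 5.0])
print(aggregate([1.0, 2.0, 10.0], w))  # ~1.53: the poor third candidate is nearly ignored
```

Mirror averaging additionally averages such weighted estimates over time, which is what yields the sharp oracle inequalities stated above.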
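A convex aggregate of kernel density estimators is itself a density whenever the weights are non-negative and sum to one. A minimal sketch, assuming a Gaussian kernel; the fixed weights here are placeholders, whereas the paper's procedures select them from data:

```python
# Sketch (illustrative): convex aggregation of kernel density estimators
# with different bandwidths. Weights are hard-coded placeholders.
import math

def gaussian_kde(sample, h):
    """Kernel density estimator with Gaussian kernel and bandwidth h."""
    n = len(sample)
    def f(x):
        return sum(math.exp(-((x - xi) / h) ** 2 / 2) for xi in sample) \
            / (n * h * math.sqrt(2 * math.pi))
    return f

def convex_aggregate(estimators, weights):
    """Convex combination of densities: a density if weights are in the simplex."""
    def f(x):
        return sum(w * g(x) for w, g in zip(weights, estimators))
    return f

sample = [-0.3, 0.1, 0.4, 1.2]
kdes = [gaussian_kde(sample, h) for h in (0.2, 0.5, 1.0)]
agg = convex_aggregate(kdes, [0.2, 0.5, 0.3])
```

Since each component integrates to one and the weights sum to one, the aggregate integrates to one as well, which a quick numerical check confirms.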

Collaboration


Dive into Philippe Rigollet's collaborations.

Top Co-Authors

Jonathan Weed, Massachusetts Institute of Technology

Vianney Perchet, École Normale Supérieure

Ankur Moitra, Massachusetts Institute of Technology

John C. Urschel, Pennsylvania State University

Xin Tong, Princeton University