Guillaume Lecué
University of Marne-la-Vallée
Publication
Featured research published by Guillaume Lecué.
Journal of the European Mathematical Society | 2017
Guillaume Lecué; Shahar Mendelson
We prove that iid random vectors that satisfy a rather weak moment assumption can be used as measurement vectors in Compressed Sensing, and the number of measurements required for exact reconstruction is the same as the best possible estimate, exhibited by a random Gaussian matrix. We also prove that this moment condition is necessary, up to a \log \log factor. Applications to the Compatibility Condition and the Restricted Eigenvalue Condition in the noisy setup and to properties of neighbourly random polytopes are also discussed.
IEEE Transactions on Information Theory | 2011
Stéphane Gaïffas; Guillaume Lecué
Bernoulli | 2013
Guillaume Lecué
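The exact reconstruction claim in the compressed sensing entry above (JEMS 2017, Lecué and Mendelson) refers to the benchmark usually established for ℓ1-minimization (basis pursuit) with a random Gaussian measurement matrix. The short Python sketch below only illustrates that benchmark setting; the matrix, dimensions and solver are illustrative choices, not the paper's construction.

import numpy as np
from scipy.optimize import linprog

# Illustrative sizes: an s-sparse signal in dimension N, observed through m measurements.
rng = np.random.default_rng(0)
N, m, s = 200, 60, 5

x0 = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x0[support] = rng.standard_normal(s)

# Gaussian measurement matrix (the benchmark case mentioned in the abstract).
A = rng.standard_normal((m, N)) / np.sqrt(m)
y = A @ x0

# Basis pursuit: minimize ||x||_1 subject to Ax = y, written as a linear program
# in the variables (x_plus, x_minus) with x = x_plus - x_minus, both nonnegative.
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N), method="highs")
x_hat = res.x[:N] - res.x[N:]

print("max reconstruction error:", np.max(np.abs(x_hat - x0)))

With a Gaussian matrix and m of the order of s log(N/s), the recovered x_hat coincides with x0 up to solver precision; the abstract's point is that much weaker moment assumptions on the measurement vectors suffice for the same number of measurements.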
Annals of Statistics | 2014
Guillaume Lecué; Philippe Rigollet
We observe (X_i, Y_i), i = 1, ..., n, where the Y_i are real-valued outputs and the X_i are m × T matrices. We observe a new entry X and we want to predict the output Y associated with it. We focus on the high-dimensional setting, where mT ≫ n. This includes the matrix completion problem with noise, as well as other problems. We consider linear prediction procedures based on different penalizations, involving a mixture of several norms: the nuclear norm, the Frobenius norm and the ℓ1-norm. For these procedures, we prove sharp oracle inequalities, using a statistical learning theory point of view. A surprising fact in our results is that the rates of convergence do not depend on m and T directly. The analysis is conducted without the usually considered incoherency condition on the unknown matrix or restricted isometry condition on the sampling operator. Moreover, our results are the first to give for this problem an analysis of penalization (such as nuclear norm penalization) as a regularization algorithm: our oracle inequalities prove that these procedures have a prediction accuracy close to that of the deterministic oracle, provided that the regularization parameters are well chosen.
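A schematic way to write the kind of multi-norm penalized least-squares procedure described above (the weights λ1, λ2, λ3 and the exact combination are illustrative, not the paper's precise choice) is

\hat{A} \in \operatorname{argmin}_{A \in \mathbb{R}^{m \times T}} \ \frac{1}{n} \sum_{i=1}^{n} \bigl( Y_i - \langle X_i, A \rangle \bigr)^2 + \lambda_1 \|A\|_{*} + \lambda_2 \|A\|_{F}^2 + \lambda_3 \|A\|_{1},

where ⟨X_i, A⟩ = trace(X_i^T A), ‖·‖_* is the nuclear norm, ‖·‖_F the Frobenius norm and ‖·‖_1 the entrywise ℓ1-norm; this is the shape of objective to which sharp oracle inequalities of the kind mentioned in the abstract apply.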
Bernoulli | 2010
Guillaume Lecué; Shahar Mendelson
when f̂ is a function constructed using the data D. For the sake of simplicity, throughout this article, we restrict ourselves to functions f and random variables (X, Y) for which |Y| ≤ b and |f(X)| ≤ b almost surely, for some fixed b ≥ 0. Note that b does not have to be known by the statistician for the construction of the procedures we are studying in this note. Given a finite set F of real-valued measurable functions defined on X (usually called a
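In this bounded setting, the generic goal of aggregation over a finite class can be stated as follows (a sketch of the standard framework, with notation that may differ from the paper's): writing R(f) = E(f(X) − Y)^2 for the risk and D = (X_i, Y_i)_{i=1}^n for the sample, one looks for a procedure f̃ = f̃(D) satisfying an oracle inequality of the form

E R(\tilde{f}) \le \min_{f \in F} R(f) + C_b \frac{\log |F|}{n},

where |F| is the cardinality of the finite class and C_b depends only on the bound b; the residual term (log |F|)/n is the classical optimal rate for model selection aggregation in expectation.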
Bernoulli | 2016
Guillaume Lecué; Shahar Mendelson
We consider a general supervised learning problem with strongly convex and Lipschitz loss and study the problem of model selection aggregation. In particular, given a finite dictionary of functions (learners) together with a prior, we generalize the results obtained by Dai, Rigollet and Zhang (2012) for Gaussian regression with squared loss and fixed design to this learning setup. Specifically, we prove that the Q-aggregation procedure outputs an estimator that satisfies optimal oracle inequalities both in expectation and with high probability. Our proof techniques somewhat depart from traditional proofs by making the most of the standard arguments on the Laplace transform of the empirical process to be controlled. AMS 2000 subject classifications: Primary 62H25; secondary 62F04, 90C22.
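For reference, the Q-aggregation criterion mentioned here is usually written in a form like the following (a sketch with indicative notation: F = {f_1, ..., f_M} is the dictionary, π a prior on it, R̂_n the empirical risk, ν ∈ (0, 1) a mixing parameter and β a temperature; the exact constants and penalty vary between papers):

\hat{\theta} \in \operatorname{argmin}_{\theta \in \Lambda_M} \ (1 - \nu)\, \hat{R}_n\Bigl( \sum_{j=1}^{M} \theta_j f_j \Bigr) + \nu \sum_{j=1}^{M} \theta_j \hat{R}_n(f_j) + \frac{\beta}{n} \sum_{j=1}^{M} \theta_j \log \frac{1}{\pi_j},

where Λ_M is the simplex of weight vectors; the aggregate is f̃ = Σ_j θ̂_j f_j, and the oracle inequalities referred to in the abstract control its risk relative to the best element of the dictionary.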
Annales De L Institut Henri Poincare-probabilites Et Statistiques | 2013
Guillaume Lecué; Shahar Mendelson
We present an argument based on the multidimensional and the uniform central limit theorems, proving that, under some geometrical assumptions between the target function T and the learning class F,
Archive | 2013
Guillaume Lecué; Shahar Mendelson
Probability Theory and Related Fields | 2009
Guillaume Lecué; Shahar Mendelson
Annals of Statistics | 2018
Pierre C. Bellec; Guillaume Lecué; Alexandre B. Tsybakov