Nicolas Vayatis
École normale supérieure de Cachan
Publications
Featured research published by Nicolas Vayatis.
IEEE Transactions on Information Theory | 2009
Stéphan Clémençon; Nicolas Vayatis
This paper investigates how recursive partitioning methods can be adapted to the bipartite ranking problem. In ranking, the pursued goal is global: based on past data, define an order on the whole input space X so that positive instances occupy the top ranks with maximum probability. The most natural way to order all instances consists of projecting the input data onto the real line through a real-valued scoring function s and using the natural order on R. The accuracy of the ordering induced by a candidate s is classically measured in terms of the ROC curve or the AUC. Here we discuss the design of tree-structured scoring functions obtained by recursively maximizing the AUC criterion. The connection with recursive piecewise linear approximation of the optimal ROC curve, both in the $L_1$ sense and in the $L_\infty$ sense, is highlighted. A novel tree-based algorithm for ranking, called TreeRank, is proposed. Consistency results and generalization bounds of a functional nature are established for this ranking method, when considering either the $L_1$ or $L_\infty$ distance. We also describe committee-based learning procedures using TreeRank as a "base ranker," in order to overcome obvious drawbacks of such a top-down partitioning technique. Simulation results on artificial data are also presented.
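As a rough illustration of the recursive AUC-maximization idea, the sketch below grows a binary tree in which each node greedily picks the axis-aligned split maximizing a TPR-minus-FPR gain (a crude proxy for the local AUC improvement), and the left-to-right order of the leaves induces a piecewise-constant scoring function. This is a simplified stand-in, not the paper's TreeRank procedure (whose node splits solve a weighted classification problem); all names and the split criterion are illustrative.

```python
import numpy as np

def best_split(X, y):
    # Pick the axis-aligned split maximizing TPR - FPR on this node,
    # a simple proxy for the local AUC improvement.
    n_pos, n_neg = max(y.sum(), 1), max(len(y) - y.sum(), 1)
    best = (None, None, 0.0)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            right = X[:, j] > t
            gain = y[right].sum() / n_pos - (right.sum() - y[right].sum()) / n_neg
            if gain > best[2]:
                best = (j, t, gain)
    return best

def grow(X, y, depth):
    # Recursive partitioning: the 'hi' child collects the candidate
    # top-ranked region, the 'lo' child the rest.
    j, t, gain = best_split(X, y)
    if depth == 0 or j is None or len(np.unique(y)) < 2:
        return None  # leaf
    left = X[:, j] <= t
    return {"j": j, "t": t,
            "lo": grow(X[left], y[left], depth - 1),
            "hi": grow(X[~left], y[~left], depth - 1)}

def score(node, x, lo=0.0, hi=1.0):
    # Leaves, read off from lo to hi, induce the scoring function;
    # deeper splits refine the ordering within each cell.
    if node is None:
        return (lo + hi) / 2
    mid = (lo + hi) / 2
    if x[node["j"]] <= node["t"]:
        return score(node["lo"], x, lo, mid)
    return score(node["hi"], x, mid, hi)
```

Ranking new points then amounts to sorting them by `score(tree, x)`, and the empirical ROC curve or AUC of these scores assesses the ordering.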
Problems of Information Transmission | 2005
Anatoli Juditsky; Alexander V. Nazin; Alexandre B. Tsybakov; Nicolas Vayatis
We consider a recursive algorithm to construct an aggregated estimator from a finite number of base decision rules in the classification problem. The estimator approximately minimizes a convex risk functional under the ℓ1-constraint. It is defined by a stochastic version of the mirror descent algorithm which performs descent of the gradient type in the dual space with an additional averaging. The main result of the paper is an upper bound for the expected accuracy of the proposed estimator. This bound is of the order $C\sqrt{(\log M)/t}$ with an explicit and small constant factor C, where M is the dimension of the problem and t stands for the sample size. A similar bound is proved for a more general setting, which covers, in particular, the regression model with squared loss.
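A minimal sketch of the kind of scheme the abstract describes: stochastic mirror descent with an entropic potential on the probability simplex (one standard way to realize the ℓ1-constraint), with step-size-weighted averaging of the iterates. The gradient oracle, step sizes, and averaging weights below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def mirror_descent_aggregate(grad, M, t):
    """grad(theta, i) returns a stochastic subgradient of the risk
    at theta on round i; M base rules, t rounds (sample size)."""
    theta = np.full(M, 1.0 / M)          # uniform start on the simplex
    avg, weight_sum = np.zeros(M), 0.0
    for i in range(1, t + 1):
        eta = np.sqrt(np.log(M) / i)     # decreasing steps, sqrt(log M / t) scale
        # entropic mirror step in the dual space = multiplicative update
        theta = theta * np.exp(-eta * grad(theta, i))
        theta /= theta.sum()
        avg += eta * theta               # the 'additional averaging'
        weight_sum += eta
    return avg / weight_sum              # aggregated weight vector
```

The returned vector weights the M base rules; the multiplicative form keeps every iterate inside the simplex, which is what yields a log M (rather than M) dependence in bounds of this type.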
Conference on Learning Theory | 2005
Stéphan Clémençon; Gábor Lugosi; Nicolas Vayatis
A general model is proposed for studying ranking problems. We investigate learning methods based on empirical minimization of the natural estimates of the ranking risk. The empirical estimates are of the form of a U-statistic. Inequalities from the theory of U-statistics and U-processes are used to obtain performance bounds for the empirical risk minimizers. Convex risk minimization methods are also studied to give a theoretical framework for ranking algorithms based on boosting and support vector machines. Just as in binary classification, fast rates of convergence are achieved under certain noise assumptions. General sufficient conditions that guarantee fast rates of convergence are proposed in several special cases.
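To see what "the natural estimate of the ranking risk is a U-statistic" means in the bipartite case, the snippet below averages, over all positive-negative pairs, the indicator that a scoring function misorders the pair (ties counted as half an error); this equals one minus the empirical AUC. The paper's general model works with arbitrary ranking rules on pairs; this scoring-function special case and all names are illustrative.

```python
import numpy as np

def empirical_ranking_risk(scores, labels):
    """U-statistic estimate of P{s(X) < s(X') | Y = 1, Y' = 0}."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]     # all positive-negative pairs
    # misordered pairs count 1, ties count 1/2
    return float(((diff < 0) + 0.5 * (diff == 0)).mean())
```

Minimizing this quantity over a class of scoring functions is exactly the empirical risk minimization whose performance the U-process inequalities control.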
European Conference on Machine Learning | 2013
Emile Contal; David Buffoni; Alexandre Robicquet; Nicolas Vayatis
In this paper, we consider the challenge of maximizing an unknown function f for which evaluations are noisy and are acquired at high cost. An iterative procedure uses the previous measurements to actively select the next evaluation of f which is predicted to be the most useful. We focus on the case where the function can be evaluated in parallel with batches of fixed size and analyze the benefit compared to the purely sequential procedure in terms of cumulative regret. We introduce the Gaussian Process Upper Confidence Bound and Pure Exploration algorithm (GP-UCB-PE), which combines the UCB strategy and pure exploration in the same batch of evaluations along the parallel iterations. We prove theoretical upper bounds on the regret with batches of size K for this procedure which show an improvement of the order of $\sqrt{K}$ for fixed iteration cost over purely sequential versions. Moreover, the multiplicative constants involved have the property of being dimension-free. We also confirm empirically the efficiency of GP-UCB-PE on real and synthetic problems compared to state-of-the-art competitors.
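A compact sketch of one GP-UCB-PE batch on a finite candidate grid, following the abstract's recipe: the first point maximizes the UCB, and the remaining K-1 points do pure exploration (maximum posterior variance) restricted to the region where the UCB still exceeds the best lower bound. The kernel, the beta constant, and the zero-target refits (the GP posterior variance does not depend on observed values) are our illustrative choices, not the paper's exact setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_ucb_pe_batch(X_obs, y_obs, candidates, K, beta=2.0):
    gp = GaussianProcessRegressor(kernel=RBF(0.2), optimizer=None)
    gp.fit(X_obs, y_obs)
    mu, sd = gp.predict(candidates, return_std=True)
    ucb, lcb = mu + beta * sd, mu - beta * sd
    batch = [candidates[np.argmax(ucb)]]        # step 1: the UCB point
    relevant = ucb >= lcb.max()                 # plausible maximizers only
    for _ in range(K - 1):
        # variance-only update: refit with dummy targets, since the
        # posterior variance ignores the y-values of the pending points
        X_aug = np.vstack([X_obs] + [b.reshape(1, -1) for b in batch])
        gp.fit(X_aug, np.zeros(len(X_aug)))
        _, sd = gp.predict(candidates, return_std=True)
        batch.append(candidates[np.argmax(np.where(relevant, sd, -np.inf))])
    return np.array(batch)
```

The K evaluations of f can then run in parallel, and the posterior is updated with all K outcomes at once before the next batch.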
Machine Learning | 2011
Stéphan Clémençon; Marine Depecker; Nicolas Vayatis
IFAC Proceedings Volumes | 2008
Anatoly Juditsky; Alexander V. Nazin; Alexander Tsybakov; Nicolas Vayatis
arXiv: Fluid Dynamics | 2014
Themistoklis S. Stefanakis; Emile Contal; Nicolas Vayatis; Frédéric Dias; Costas E. Synolakis
Quality and Reliability Engineering International | 2016
Guillaume Merle; Jean-Marc Roussel; Jean-Jacques Lesage; Vianney Perchet; Nicolas Vayatis
European Conference on Machine Learning | 2014
Rémi Lemonnier; Nicolas Vayatis
Machine Learning | 2013
Stéphan Clémençon; Sylvain Robbiano; Nicolas Vayatis