Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Luc Devroye is active.

Publication


Featured research published by Luc Devroye.


Journal of the American Statistical Association | 1987

Nonparametric Density Estimation: The L1 View.

Luc Devroye; László Györfi

Contents: Differentiation of Integrals; Consistency; Lower Bounds for Rates of Convergence; Rates of Convergence in L1; The Automatic Kernel Estimate: L1 and Pointwise Convergence; Estimates Related to the Kernel Estimate and the Histogram Estimate; Simulation, Inequalities, and Random Variate Generation; The Transformed Kernel Estimate; Applications in Discrimination; Operations on Density Estimates; Estimators Based on Orthogonal Series; Index.


Archive | 2001

Combinatorial methods in density estimation

Luc Devroye; Gábor Lugosi

1. Introduction.- 1.1. References.- 2. Concentration Inequalities.- 2.1. Hoeffding's Inequality.- 2.2. An Inequality for the Expected Maximal Deviation.- 2.3. The Bounded Difference Inequality.- 2.4. Examples.- 2.5. Bibliographic Remarks.- 2.6. Exercises.- 2.7. References.- 3. Uniform Deviation Inequalities.- 3.1. The Vapnik-Chervonenkis Inequality.- 3.2. Covering Numbers and Chaining.- 3.3. Example: The Dvoretzky-Kiefer-Wolfowitz Theorem.- 3.4. Bibliographic Remarks.- 3.5. Exercises.- 3.6. References.- 4. Combinatorial Tools.- 4.1. Shatter Coefficients.- 4.2. Vapnik-Chervonenkis Dimension and Shatter Coefficients.- 4.3. Vapnik-Chervonenkis Dimension and Covering Numbers.- 4.4. Examples.- 4.5. Bibliographic Remarks.- 4.6. Exercises.- 4.7. References.- 5. Total Variation.- 5.1. Density Estimation.- 5.2. The Total Variation.- 5.3. Invariance.- 5.4. Mappings.- 5.5. Convolutions.- 5.6. Normalization.- 5.7. The Lebesgue Density Theorem.- 5.8. LeCam's Inequality.- 5.9. Bibliographic Remarks.- 5.10. Exercises.- 5.11. References.- 6. Choosing a Density Estimate.- 6.1. Choosing Between Two Densities.- 6.2. Examples.- 6.3. Is the Factor of Three Necessary?.- 6.4. Maximum Likelihood Does Not Work.- 6.5. L2 Distances Are To Be Avoided.- 6.6. Selection from k Densities.- 6.7. Examples Continued.- 6.8. Selection from an Infinite Class.- 6.9. Bibliographic Remarks.- 6.10. Exercises.- 6.11. References.- 7. Skeleton Estimates.- 7.1. Kolmogorov Entropy.- 7.2. Skeleton Estimates.- 7.3. Robustness.- 7.4. Finite Mixtures.- 7.5. Monotone Densities on the Hypercube.- 7.6. How To Make Gigantic Totally Bounded Classes.- 7.7. Bibliographic Remarks.- 7.8. Exercises.- 7.9. References.- 8. The Minimum Distance Estimate: Examples.- 8.1. Problem Formulation.- 8.2. Series Estimates.- 8.3. Parametric Estimates: Exponential Families.- 8.4. Neural Network Estimates.- 8.5. Mixture Classes, Radial Basis Function Networks.- 8.6. Bibliographic Remarks.- 8.7. Exercises.- 8.8. References.- 9. The Kernel Density Estimate.- 9.1. Approximating Functions by Convolutions.- 9.2. Definition of the Kernel Estimate.- 9.3. Consistency of the Kernel Estimate.- 9.4. Concentration.- 9.5. Choosing the Bandwidth.- 9.6. Choosing the Kernel.- 9.7. Rates of Convergence.- 9.8. Uniform Rate of Convergence.- 9.9. Shrinkage, and the Combination of Density Estimates.- 9.10. Bibliographic Remarks.- 9.11. Exercises.- 9.12. References.- 10. Additive Estimates and Data Splitting.- 10.1. Data Splitting.- 10.2. Additive Estimates.- 10.3. Histogram Estimates.- 10.4. Bibliographic Remarks.- 10.5. Exercises.- 10.6. References.- 11. Bandwidth Selection for Kernel Estimates.- 11.1. The Kernel Estimate with Riemann Kernel.- 11.2. General Kernels, Kernel Complexity.- 11.3. Kernel Complexity: Univariate Examples.- 11.4. Kernel Complexity: Multivariate Kernels.- 11.5. Asymptotic Optimality.- 11.6. Bibliographic Remarks.- 11.7. Exercises.- 11.8. References.- 12. Multiparameter Kernel Estimates.- 12.1. Multivariate Kernel Estimates-Product Kernels.- 12.2. Multivariate Kernel Estimates-Ellipsoidal Kernels.- 12.3. Variable Kernel Estimates.- 12.4. Tree-Structured Partitions.- 12.5. Changepoints and Bump Hunting.- 12.6. Bibliographic Remarks.- 12.7. Exercises.- 12.8. References.- 13. Wavelet Estimates.- 13.1. Definitions.- 13.2. Smoothing.- 13.3. Thresholding.- 13.4. Soft Thresholding.- 13.5. Bibliographic Remarks.- 13.6. Exercises.- 13.7. References.- 14. The Transformed Kernel Estimate.- 14.1. The Transformed Kernel Estimate.- 14.2. Box-Cox Transformations.- 14.3. Piecewise Linear Transformations.- 14.4. Bibliographic Remarks.- 14.5. Exercises.- 14.6. References.- 15. Minimax Theory.- 15.1. Estimating a Density from One Data Point.- 15.2. The General Minimax Problem.- 15.3. Rich Classes.- 15.4. Assouad's Lemma.- 15.5. Example: The Class of Convex Densities.- 15.6. Additional Examples.- 15.7. Tuning the Parameters of Variable Kernel Estimates.- 15.8. Sufficient Statistics.- 15.9. Bibliographic Remarks.- 15.10. Exercises.- 15.11. References.- 16. Choosing the Kernel Order.- 16.1. Introduction.- 16.2. Standard Kernel Estimate: Riemann Kernels.- 16.3. Standard Kernel Estimates: General Kernels.- 16.4. An Infinite Family of Kernels.- 16.5. Bibliographic Remarks.- 16.6. Exercises.- 16.7. References.- 17. Bandwidth Choice with Superkernels.- 17.1. Superkernels.- 17.2. The Trapezoidal Kernel.- 17.3. Bandwidth Selection.- 17.4. Bibliographic Remarks.- 17.5. Exercises.- 17.6. References.- Author Index.


Journal of the ACM | 1986

A note on the height of binary search trees

Luc Devroye

Let H_n be the height of a binary search tree with n nodes constructed by standard insertions from a random permutation of 1, …, n. It is shown that H_n/log n → c = 4.31107… in probability as n → ∞, where c is the unique solution of c log(2e/c) = 1, c ≥ 2. Also, for all p > 0, lim_{n→∞} E(H_n^p)/log^p n = c^p. Finally, it is proved that S_n/log n → c* = 0.3733… in probability, where c* is the solution of c log(2e/c) = 1 with c ≤ 1, and S_n is the saturation level of the same tree, that is, the number of full levels in the tree.
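As a sanity check on the constants in this abstract, the two roots of c log(2e/c) = 1 can be located numerically. A minimal Python sketch using plain bisection; the brackets and tolerance are arbitrary choices, not from the paper:

```python
import math

def f(c):
    # f(c) = c * log(2e/c) - 1; the roots of f are the constants in the paper.
    return c * math.log(2 * math.e / c) - 1

def bisect(lo, hi, tol=1e-12):
    # Standard bisection; assumes f changes sign on [lo, hi].
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# The height constant is the root with c >= 2; the saturation-level
# constant c* is the root with c <= 1.
c_height = bisect(2.0, 10.0)      # ~ 4.31107
c_saturation = bisect(1e-9, 1.0)  # ~ 0.3733
print(c_height, c_saturation)
```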


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1988

Automatic pattern recognition: a study of the probability of error

Luc Devroye

A test sequence is used to select the best rule from a class of discrimination rules defined in terms of the training sequence. The Vapnik-Chervonenkis and related inequalities are used to obtain distribution-free bounds on the difference between the probability of error of the selected rule and the probability of error of the best rule in the given class. The bounds are used to prove consistency and asymptotic optimality for several popular classes, including linear discriminators, nearest-neighbor rules, kernel-based rules, histogram rules, binary tree classifiers, and Fourier series classifiers. In particular, the method can be used to choose the smoothing parameter in kernel-based rules, to choose k in the k-nearest-neighbor rule, and to choose between parametric and nonparametric rules.
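A minimal sketch of the selection scheme described above, with k in the k-nearest-neighbor rule as the tuned parameter: each candidate rule is trained on the training sequence and the one with the smallest error on an independent test sequence is selected. The Gaussian toy data, the candidate grid of k values, and the NumPy implementation are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    # Toy two-class data: class 0 ~ N(0, I), class 1 ~ N(1.5, I) in the plane.
    y = rng.integers(0, 2, n)
    x = rng.normal(0, 1, (n, 2)) + 1.5 * y[:, None]
    return x, y

def knn_errors(x_train, y_train, x_test, y_test, ks):
    # Distances from every test point to every training point.
    d = np.linalg.norm(x_test[:, None, :] - x_train[None, :, :], axis=2)
    order = np.argsort(d, axis=1)  # training points sorted by distance
    errs = []
    for k in ks:
        votes = y_train[order[:, :k]]                   # labels of k nearest neighbors
        pred = (votes.mean(axis=1) > 0.5).astype(int)   # majority vote
        errs.append(np.mean(pred != y_test))
    return errs

x_tr, y_tr = sample(200)   # training sequence
x_te, y_te = sample(200)   # independent test sequence used for selection
ks = [1, 3, 5, 9, 17, 33]
errs = knn_errors(x_tr, y_tr, x_te, y_te, ks)
best = ks[int(np.argmin(errs))]
print(dict(zip(ks, errs)), "selected k =", best)
```

The VC-type bounds in the paper guarantee that the error probability of the rule selected this way is, with high probability, close to that of the best rule in the candidate class.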


latin american symposium on theoretical informatics | 2002

On the Spanning Ratio of Gabriel Graphs and beta-skeletons

Prosenjit Bose; Luc Devroye; William S. Evans; David G. Kirkpatrick

The spanning ratio of a graph defined on n points in the Euclidean plane is the maximum, over all pairs of points (u, v), of the ratio of the graph distance between u and v to the Euclidean distance between u and v. A connected graph is said to be a k-spanner if the spanning ratio does not exceed k. For example, for any k there exists a point set whose minimum spanning tree is not a k-spanner. At the other end of the spectrum, a Delaunay triangulation is guaranteed to be a 2.42-spanner [11]. For proximity graphs in between these two extremes, such as Gabriel graphs [8], relative neighborhood graphs [16], and β-skeletons [12] with β ∈ [0, 2], some interesting questions arise. We show that the spanning ratio of Gabriel graphs (which are β-skeletons with β = 1) is Θ(√n) in the worst case. For all β-skeletons with β ∈ [0, 1], we prove that the spanning ratio is at most O(n^γ), where γ = (1 − log_2(1 + √(1 − β²)))/2. For all β-skeletons with β ∈ [1, 2), we prove that there exist point sets whose spanning ratio is at least (1/2 − o(1))√n. For relative neighborhood graphs [16] (β-skeletons with β = 2), we show that there exist point sets where the spanning ratio is Θ(n). For points drawn independently from the uniform distribution on the unit square, we show that the spanning ratio of the (random) Gabriel graph and all β-skeletons with β ∈ [1, 2] tends to ∞ in probability as √(log n/log log n).
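To make the definitions concrete, here is a small NumPy sketch (an illustration, not code from the paper) that builds the Gabriel graph of random points in the unit square and computes its spanning ratio via Floyd-Warshall:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
pts = rng.random((n, 2))                                   # uniform points in the unit square
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)    # Euclidean distance matrix

# Gabriel graph: (i, j) is an edge iff no third point lies in the closed disk
# with diameter ij, i.e. d(i,k)^2 + d(j,k)^2 > d(i,j)^2 for every other k.
g = np.full((n, n), np.inf)
np.fill_diagonal(g, 0.0)
for i in range(n):
    for j in range(i + 1, n):
        if all(d[i, k] ** 2 + d[j, k] ** 2 > d[i, j] ** 2
               for k in range(n) if k != i and k != j):
            g[i, j] = g[j, i] = d[i, j]

# Floyd-Warshall for all-pairs shortest graph distances. The Gabriel graph
# contains the minimum spanning tree, so it is connected and all distances
# end up finite.
for k in range(n):
    g = np.minimum(g, g[:, k, None] + g[None, k, :])

# Spanning ratio: max over pairs of (graph distance / Euclidean distance).
iu = np.triu_indices(n, 1)
print("spanning ratio:", np.max(g[iu] / d[iu]))
```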


IEEE Transactions on Information Theory | 1979

Distribution-free performance bounds for potential function rules

Luc Devroye; Terry J. Wagner

In the discrimination problem the random variable θ, known to take values in {1, …, M}, is estimated from the random vector X. All that is known about the joint distribution of (X, θ) is that which can be inferred from a sample (X_1, θ_1), …, (X_n, θ_n) of size n drawn from that distribution. A discrimination rule is any procedure which determines a decision θ̂ for θ from X and (X_1, θ_1), …, (X_n, θ_n). For rules which are determined by potential functions, it is shown that the mean-square difference between the probability of error for the rule and its deleted estimate is bounded by A/√n, where A is an explicitly given constant depending only on M and the potential function. The O(n^{-1/2}) behavior is shown to be the best possible for one of the most commonly encountered rules of this type.
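The "deleted estimate" in the abstract is the leave-one-out error count: each observation is classified from the remaining n − 1 points, and the fraction of mistakes estimates the probability of error. A minimal sketch for a potential function rule; the Gaussian potential, its bandwidth, and the toy data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-class sample (illustrative only): shifted Gaussian clouds.
n = 300
y = rng.integers(0, 2, n)
x = rng.normal(0, 1, (n, 2)) + 1.2 * y[:, None]

def potential(u, v, h=0.5):
    # Gaussian potential function; h is an assumed smoothing value.
    return np.exp(-np.sum((u - v) ** 2, axis=-1) / (2 * h * h))

# Pairwise potentials; zero the diagonal so point i never votes for itself.
K = potential(x[:, None, :], x[None, :, :])
np.fill_diagonal(K, 0.0)

# Deleted (leave-one-out) estimate: classify each X_i from the other n-1
# points by comparing the total potential contributed by each class.
score1 = K @ (y == 1).astype(float)
score0 = K @ (y == 0).astype(float)
pred = (score1 > score0).astype(int)
print("deleted estimate of the error probability:", np.mean(pred != y))
```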


Siam Journal on Applied Mathematics | 1980

Detection of Abnormal Behavior Via Nonparametric Estimation of the Support

Luc Devroye; Gary L. Wise

In this paper two problems are considered, both involving the nonparametric estimation of the support of a random vector from a sequence of independent identically distributed observations. In the first problem, after observing n independent random vectors with a common unknown distribution μ, we are given one new measurement and we wish to know whether or not it belongs to the support of μ.
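A natural support estimate in this setting is the union of closed balls of small radius centered at the observations; the new measurement is flagged as abnormal if it falls outside the union. A minimal sketch, where the sampling distribution and the radius are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# n observations from the unknown distribution; here, as a stand-in,
# uniform on the unit disk, whose support we are estimating.
n = 500
theta = rng.uniform(0, 2 * np.pi, n)
r = np.sqrt(rng.uniform(0, 1, n))
sample = np.column_stack((r * np.cos(theta), r * np.sin(theta)))

def in_support_estimate(x_new, sample, eps):
    # Support estimate: union of closed balls of radius eps around the
    # sample points; x_new is declared "normal" iff it lands in some ball.
    return np.min(np.linalg.norm(sample - x_new, axis=1)) <= eps

# Illustrative radius; roughly, consistency requires eps_n -> 0 while
# n * eps_n^d still grows, so the balls shrink but keep covering the support.
eps = 0.15
print(in_support_estimate(np.array([0.2, 0.1]), sample, eps))  # inside: True
print(in_support_estimate(np.array([2.0, 2.0]), sample, eps))  # far outside: False
```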


Acta Informatica | 1987

Branching processes in the analysis of the heights of trees

Luc Devroye



Archive | 1991

Exponential inequalities in nonparametric estimation

Luc Devroye



IEEE Transactions on Information Theory | 1978

The uniform convergence of nearest neighbor regression function estimators and their application in optimization

Luc Devroye


Collaboration


Dive into Luc Devroye's collaborations.

Top Co-Authors

László Györfi
Budapest University of Technology and Economics

Ralph Neininger
Goethe University Frankfurt

Terry J. Wagner
University of Texas at Austin

Claude Gravel
Université de Montréal

Nicolas Broutin
French Institute for Research in Computer Science and Automation