Germain Van Bever
Open University
Publications
Featured research published by Germain Van Bever.
Journal of the American Statistical Association | 2013
Davy Paindaveine; Germain Van Bever
Aiming at analyzing multimodal or nonconvexly supported distributions through data depth, we introduce a local extension of depth. Our construction is obtained by conditioning the distribution to appropriate depth-based neighborhoods and has the advantages, among others, of maintaining affine-invariance and applying to all depths in a generic way. Most importantly, unlike their competitors, which (for extreme localization) rather measure probability mass, the resulting local depths focus on centrality and remain of a genuine depth nature at any locality level. We derive their main properties, establish consistency of their sample versions, and study their behavior under extreme localization. We present two applications of the proposed local depth (for classification and for symmetry testing), and we extend our construction to the regression depth context. Throughout, we illustrate the results on several datasets, both artificial and real, univariate and multivariate. Supplementary materials for this article are available online.
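The local depth construction above starts from an ordinary depth function and conditions it on depth-based neighborhoods. As a minimal illustration of the global ingredient only (not the paper's implementation), the following pure-Python sketch approximates the sample halfspace (Tukey) depth of a point in 2-D by scanning a grid of directions; function names and the grid resolution are illustrative choices.

```python
import math
import random

def halfspace_depth(x, data, n_dirs=360):
    """Approximate the sample halfspace (Tukey) depth of point x in 2-D:
    the minimum, over a grid of directions u, of the fraction of data
    points lying in the closed halfplane {p : u . (p - x) >= 0}."""
    n = len(data)
    depth = 1.0
    for k in range(n_dirs):
        a = 2 * math.pi * k / n_dirs
        u = (math.cos(a), math.sin(a))
        frac = sum(1 for p in data
                   if u[0] * (p[0] - x[0]) + u[1] * (p[1] - x[1]) >= 0) / n
        depth = min(depth, frac)
    return depth

random.seed(0)
cloud = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
d_center = halfspace_depth((0.0, 0.0), cloud)   # deep central point
d_outlier = halfspace_depth((4.0, 4.0), cloud)  # shallow outlying point
print(d_center, d_outlier)
```

The paper's local depth would then re-evaluate such a depth after conditioning the distribution on the smallest depth region containing the point of interest, which keeps the localized quantity a genuine depth rather than a density-like measure.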
Bernoulli | 2015
Davy Paindaveine; Germain Van Bever
We introduce a class of depth-based classification procedures that are of a nearest-neighbor nature. Depth, after symmetrization, indeed provides the center-outward ordering that is necessary and sufficient to define nearest neighbors. Like all their depth-based competitors, the resulting classifiers are affine-invariant, hence in particular are insensitive to unit changes. Unlike the former, however, the latter achieve Bayes consistency under virtually any absolutely continuous distributions—a concept we call nonparametric consistency, to stress the difference with the stronger universal consistency of the standard kNN classifiers. We investigate the finite-sample performances of the proposed classifiers through simulations and show that they outperform affine-invariant nearest-neighbor classifiers obtained through an obvious standardization construction. We illustrate the practical value of our classifiers on two real data examples. Finally, we briefly discuss the possible uses of our depth-based neighbors in other inference problems.
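For context, the "obvious standardization construction" that the depth-based classifiers are benchmarked against can be sketched as follows: whiten the data by the sample covariance, then run plain Euclidean kNN on the whitened coordinates, which makes the classifier affine-invariant. This is a toy illustration of the baseline, not the paper's depth-based method (which relies on the symmetrization construction described in the abstract); all names here are illustrative.

```python
import math
import random

def make_whitener(data):
    """Return a map p -> L^{-1}(p - mean), where L is the Cholesky factor
    of the 2-D sample covariance; the whitened data have identity sample
    covariance, so Euclidean kNN on them is affine-invariant."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    sxx = sum((p[0] - mx) ** 2 for p in data) / n
    syy = sum((p[1] - my) ** 2 for p in data) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in data) / n
    a = math.sqrt(sxx)
    b = sxy / a
    c = math.sqrt(syy - b * b)
    def whiten(p):
        u = (p[0] - mx) / a
        return (u, ((p[1] - my) - b * u) / c)
    return whiten

def knn_classify(x, train, labels, k=5):
    """Plain Euclidean k-nearest-neighbour majority vote."""
    order = sorted(range(len(train)),
                   key=lambda i: (train[i][0] - x[0]) ** 2
                               + (train[i][1] - x[1]) ** 2)
    votes = [labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

random.seed(1)
group0 = [(random.gauss(-2, 1), random.gauss(0, 1)) for _ in range(50)]
group1 = [(random.gauss(2, 1), random.gauss(0, 1)) for _ in range(50)]
train, labels = group0 + group1, [0] * 50 + [1] * 50
whiten = make_whitener(train)
wtrain = [whiten(p) for p in train]
print(knn_classify(whiten((-2.0, 0.0)), wtrain, labels))
print(knn_classify(whiten((2.0, 0.0)), wtrain, labels))
```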
Journal of Multivariate Analysis | 2014
Davy Paindaveine; Germain Van Bever
The minimum covariance determinant (MCD) estimator of scatter is one of the most famous robust procedures for multivariate scatter. Despite the quite important research activity related to this estimator, culminating in the recent thorough asymptotic study of Cator and Lopuhaä (2010, 2012), no results have been obtained on the corresponding estimator of shape, which is the parameter of interest in many multivariate problems (including principal component analysis, canonical correlation analysis, testing for sphericity, etc.). In this paper, we therefore propose and study MCD-based inference procedures for shape that inherit the good robustness properties of the MCD. The main emphasis is on asymptotic results, for point estimation (Bahadur representation and asymptotic normality results) as well as for hypothesis testing (asymptotic distributions under the null and under local alternatives). Influence functions of the MCD-estimators of shape are obtained as a corollary. Monte Carlo studies illustrate our asymptotic results and assess the robustness of the proposed procedures.
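Passing from a scatter matrix to a shape matrix is a simple normalisation step. The sketch below shows one common determinant-based choice, V = S / det(S)^(1/p), so that det(V) = 1; in the paper's setting one would plug in an MCD scatter estimate (e.g. scikit-learn's `MinCovDet`) rather than the raw matrix used here, and the asymptotic behaviour of the resulting shape estimator is precisely what the paper studies. The function name and the example matrix are illustrative.

```python
import math

def shape_from_scatter(S):
    """Rescale a 2x2 scatter matrix to the shape matrix
    V = S / det(S)^(1/p) with p = 2, so that det(V) = 1
    (one common determinant-based shape normalisation)."""
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    s = math.sqrt(det)                  # det^(1/2) in dimension 2
    return [[S[0][0] / s, S[0][1] / s],
            [S[1][0] / s, S[1][1] / s]]

V = shape_from_scatter([[4.0, 1.0], [1.0, 2.0]])
det_v = V[0][0] * V[1][1] - V[0][1] * V[1][0]
print(det_v)                            # 1 up to rounding
```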
Annals of Statistics | 2018
Davy Paindaveine; Germain Van Bever
We propose halfspace depth concepts for scatter, concentration and shape matrices. For scatter matrices, our concept extends the one from Chen, Gao and Ren (2015) to the non-centered case, and is in the same spirit as the one in Zhang (2002). Rather than focusing, as in these earlier works, on deepest scatter matrices, we thoroughly investigate the properties of the proposed depth and of the corresponding depth regions. We do so under minimal assumptions and, in particular, we do not restrict to elliptical distributions nor to absolutely continuous distributions. Interestingly, fully understanding scatter halfspace depth requires considering different geometries/topologies on the space of scatter matrices. We also discuss, in the spirit of Zuo and Serfling (2000), the structural properties a scatter depth should satisfy, and investigate whether or not these are met by the proposed depth. As mentioned above, companion concepts of depth for concentration matrices and shape matrices are also proposed and studied. We illustrate the practical relevance of the proposed concepts by considering a real-data example from finance.
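To make the idea of a halfspace depth for scatter matrices concrete, here is a rough Monte Carlo sketch in the centred 2-D case: a candidate scatter matrix is deep if, along every direction u, the squared projections of the data split roughly evenly around u'Σu. This is only a schematic rendering of a halfspace-type scatter depth; the paper's exact definition, assumptions, and the geometries on the space of scatter matrices are as stated there, and all names below are illustrative.

```python
import math
import random

def scatter_depth(Sigma, data, n_dirs=180):
    """Approximate a halfspace-type depth of a candidate 2x2 scatter
    matrix Sigma for centred data: minimise over directions u the
    smaller of the fractions of points with |u'x|^2 <= u'Sigma u
    and with |u'x|^2 >= u'Sigma u."""
    n = len(data)
    depth = 1.0
    for k in range(n_dirs):
        a = math.pi * k / n_dirs        # a half-circle of directions suffices
        u0, u1 = math.cos(a), math.sin(a)
        t = (u0 * (Sigma[0][0] * u0 + Sigma[0][1] * u1)
             + u1 * (Sigma[1][0] * u0 + Sigma[1][1] * u1))   # u' Sigma u
        le = sum(1 for x in data if (u0 * x[0] + u1 * x[1]) ** 2 <= t)
        ge = sum(1 for x in data if (u0 * x[0] + u1 * x[1]) ** 2 >= t)
        depth = min(depth, min(le, ge) / n)
    return depth

random.seed(3)
sample = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(300)]
d_true = scatter_depth([[1.0, 0.0], [0.0, 1.0]], sample)   # true scatter
d_off = scatter_depth([[4.0, 0.0], [0.0, 4.0]], sample)    # inflated scatter
print(d_true, d_off)
```

As expected, the true scatter matrix comes out deeper than an inflated candidate, mirroring the center-outward ordering the depth is meant to induce on scatter matrices.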
Entropy | 2016
Paul P. Marriott; Radka Sabolova; Germain Van Bever; Frank Critchley
This paper takes an information-geometric approach to the challenging issue of goodness-of-fit testing in the high dimensional, low sample size context where—potentially—boundary effects dominate. The main contributions of this paper are threefold: first, we present and prove two new theorems on the behaviour of commonly used test statistics in this context; second, we investigate—in the novel environment of the extended multinomial model—the links between information geometry-based divergences and standard goodness-of-fit statistics, allowing us to formalise relationships which have been missing in the literature; finally, we use simulation studies to validate and illustrate our theoretical results and to explore currently open research questions about the way that discretisation effects can dominate sampling distributions near the boundary. Novelly accommodating these discretisation effects contrasts sharply with the essentially continuous approach of skewness and other corrections flowing from standard higher-order asymptotic analysis.
International Conference on Geometric Science of Information | 2015
Paul Marriott; Radka Sabolova; Germain Van Bever; Frank Critchley
We introduce a new approach to goodness-of-fit testing in the high dimensional, sparse extended multinomial context. The paper takes a computational information geometric approach, extending classical higher order asymptotic theory. We show why the Wald statistic (equivalently, the Pearson \(\chi^2\) and score statistics) is unworkable in this context, but that the deviance has a simple, accurate and tractable sampling distribution even for moderate sample sizes. Issues of uniformity of asymptotic approximations across model space are discussed. A variety of important applications and extensions are noted.
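The two competing statistics compared in these goodness-of-fit papers are elementary to compute, which is what makes the contrast in their sampling behaviour striking. A minimal sketch, with illustrative names and an artificial sparse regime (many more cells than observations, as in the extended multinomial setting):

```python
import math
import random

def pearson_and_deviance(counts, probs):
    """Pearson X^2 and deviance G^2 for observed multinomial counts
    against cell probabilities probs (cells with a zero count
    contribute 0 to G^2, reading 0*log 0 as 0)."""
    n = sum(counts)
    x2 = sum((o - n * p) ** 2 / (n * p) for o, p in zip(counts, probs))
    g2 = 2 * sum(o * math.log(o / (n * p))
                 for o, p in zip(counts, probs) if o > 0)
    return x2, g2

# Sparse regime: 200 equiprobable cells, only 50 observations.
random.seed(2)
k, n = 200, 50
probs = [1.0 / k] * k
counts = [0] * k
for cell in random.choices(range(k), k=n):
    counts[cell] += 1
print(pearson_and_deviance(counts, probs))
```

Simulating the null distribution of each statistic in this regime (rather than the single draw above) is how one sees the boundary and discretisation effects the papers analyse.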
Archive | 2015
Giuseppe Bove; Frank Critchley; Radka Sabolova; Germain Van Bever
The analysis of variance plays a fundamental role in statistical theory and practice, the standard Euclidean geometric form being particularly well established. The geometry and associated linear algebra underlying such standard analysis of variance methods permit, essentially direct, generalisation to other settings. Specifically, as jointly developed here: (a) to minimum distance estimation problems associated with subsets of pairwise orthogonal subspaces; (b) to matrix, rather than vector, contexts; and (c) to general, not just standard Euclidean, inner products, and their induced distance functions. To make such generalisation, we solve the following problem: given a set of nontrivial subspaces of a linear space, any two of which meet only at its origin, exactly which inner products make these subspaces pairwise orthogonal? Applications in a variety of areas are highlighted, including: (i) the analysis of asymmetry, and (ii) asymptotic comparisons in Invariant Coordinate Selection and Independent Component Analysis. A variety of possible further generalisations and applications are noted.
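The core question above (which inner products make given subspaces pairwise orthogonal?) has a transparent low-dimensional instance. A toy sketch, with illustrative names: in \(R^2\), the lines spanned by (1, 0) and (1, 1) are not Euclidean-orthogonal, but any symmetric positive definite M whose first row sums to zero makes them orthogonal under \(\langle x, y\rangle_M = x'My\).

```python
def ip(M, x, y):
    """Inner product <x, y>_M = x' M y on R^2, for a symmetric
    positive definite 2x2 matrix M."""
    return (x[0] * (M[0][0] * y[0] + M[0][1] * y[1])
            + x[1] * (M[1][0] * y[0] + M[1][1] * y[1]))

e = (1.0, 0.0)          # spans the first subspace
f = (1.0, 1.0)          # spans the second subspace

euclid = [[1.0, 0.0], [0.0, 1.0]]
print(ip(euclid, e, f))                 # 1.0: not orthogonal

# <e, f>_M = M[0][0] + M[0][1], so require M[0][0] + M[0][1] = 0
# while keeping M symmetric positive definite, e.g.:
M = [[1.0, -1.0], [-1.0, 2.0]]
print(ip(M, e, f))                      # 0.0: orthogonal under M
```

The paper characterises exactly this set of inner products for a general family of subspaces meeting only at the origin, which is what drives the generalised analysis-of-variance decompositions.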
arXiv: Statistics Theory | 2017
Davy Paindaveine; Germain Van Bever
Statistics & Probability Letters | 2017
Davy Paindaveine; Germain Van Bever
Statistical Methods and Applications | 2015
Davy Paindaveine; Germain Van Bever