Pierre Vandekerkhove
University of Marne-la-Vallée
Publications
Featured research published by Pierre Vandekerkhove.
Annals of Statistics | 2006
Laurent Bordes; Stephane Mottelet; Pierre Vandekerkhove
Suppose that univariate data are drawn from a mixture of two distributions that are equal up to a shift parameter. Such a model is known to be nonidentifiable from a nonparametric viewpoint. However, if we assume that the unknown mixed distribution is symmetric, we obtain the identifiability of this model, which is then defined by four unknown parameters: the mixing proportion, two location parameters and the cumulative distribution function of the symmetric mixed distribution. We propose estimators for these four parameters when no training data are available. Our estimators are shown to be strongly consistent under mild regularity assumptions and their convergence rates are studied. Their finite-sample properties are illustrated by a Monte Carlo study, and our method is applied to real data.
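For intuition, the two-component shifted mixture above can be simulated and fitted with a plain EM in which the symmetric density f is taken to be standard normal. This is a parametric stand-in for illustration only (the paper's estimators do not assume f is known), and all parameter values below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Model: X ~ p * f(x - mu1) + (1 - p) * f(x - mu2), f symmetric about 0.
# Hypothetical truth; f chosen as N(0, 1), which the paper does NOT assume.
p_true, mu1_true, mu2_true, n = 0.3, 0.0, 4.0, 5000
z = rng.random(n) < p_true
x = np.where(z, mu1_true, mu2_true) + rng.standard_normal(n)

def em_two_shifts(x, n_iter=200):
    """EM for a two-component location mixture with standard normal f."""
    p, mu1, mu2 = 0.5, x.min(), x.max()
    for _ in range(n_iter):
        # E-step: posterior probability of component 1 for each observation
        d1 = p * np.exp(-0.5 * (x - mu1) ** 2)
        d2 = (1.0 - p) * np.exp(-0.5 * (x - mu2) ** 2)
        w = d1 / (d1 + d2)
        # M-step: weighted updates of the proportion and the two locations
        p = w.mean()
        mu1 = np.sum(w * x) / np.sum(w)
        mu2 = np.sum((1.0 - w) * x) / np.sum(1.0 - w)
    return p, mu1, mu2

p_hat, mu1_hat, mu2_hat = em_two_shifts(x)
```

With well-separated components the four-parameter structure (proportion, two locations, common shape) is clearly visible in the fit.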
Computational Statistics & Data Analysis | 2007
Laurent Bordes; Didier Chauveau; Pierre Vandekerkhove
Recently, there has been considerable interest in finite mixture models with semi-/non-parametric component distributions. Identifiability of such model parameters is generally not obvious, and when it holds, inference methods tend to be specific to the mixture model under consideration. We therefore propose a generalization of the EM algorithm to semiparametric mixture models. The approach is methodological and can be applied to a wide class of semiparametric mixture models. The behavior of the proposed EM-type estimators is studied numerically through several Monte Carlo experiments and through comparisons with alternative methods from the literature. Applications to real data are also provided, showing that the estimation method behaves well and is fast and easy to implement.
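The EM-type idea can be sketched as follows: the E-step uses kernel density estimates of the component densities, reweighted at every pass. This is a minimal sketch of the general principle, not the authors' exact update scheme, and the data-generating values and bandwidth are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data: two-component mixture with hidden labels.
n = 1000
z = rng.random(n) < 0.4
x = np.where(z, rng.standard_normal(n) - 2.0, rng.standard_normal(n) + 2.0)

def weighted_kde(grid_x, data, weights, h):
    """Weighted Gaussian kernel density estimate evaluated at grid_x."""
    w = weights / weights.sum()
    u = (grid_x[:, None] - data[None, :]) / h
    return (w[None, :] * np.exp(-0.5 * u ** 2)).sum(axis=1) / (h * np.sqrt(2.0 * np.pi))

def sp_em(x, n_iter=30, h=0.4):
    """EM-type loop with nonparametric (kernel) component densities."""
    # crude initialisation: soft split at the median
    w = (x < np.median(x)).astype(float) * 0.9 + 0.05
    p = w.mean()
    for _ in range(n_iter):
        f1 = weighted_kde(x, x, w, h)          # component-1 density estimate
        f2 = weighted_kde(x, x, 1.0 - w, h)    # component-2 density estimate
        w = p * f1 / (p * f1 + (1.0 - p) * f2) # E-step with estimated densities
        p = w.mean()                           # M-step for the proportion
    return p, w

p_hat, w_hat = sp_em(x)
```

No parametric form is imposed on either component; only the kernel bandwidth is chosen by the user.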
Journal of Nonparametric Statistics | 2012
David R. Hunter; Didier Chauveau; Pierre Vandekerkhove; Laurent Bordes; Derek S. Young
We present an algorithm for estimating parameters in a mixture-of-regressions model in which the errors are assumed to be independent and identically distributed but no other assumption is made. This model is introduced as one of several recent generalizations of the standard fully parametric mixture of linear regressions in the literature. A sufficient condition for the identifiability of the parameters is stated and proved. Several different versions of the algorithm, including one that has a provable ascent property, are introduced. Numerical tests indicate the effectiveness of some of these algorithms.
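As a sketch of the setting, the following fits a two-component mixture of simple linear regressions by EM with a Gaussian working error density. The Gaussian density is a parametric stand-in for illustration (the paper leaves the error density unspecified beyond i.i.d.), and all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated two-component mixture of regressions through the origin,
# slopes +1 and -1, equal mixing, i.i.d. errors (hypothetical values).
n = 1000
x = rng.uniform(-3.0, 3.0, n)
z = rng.random(n) < 0.5
y = np.where(z, x, -x) + 0.3 * rng.standard_normal(n)

def em_mix_reg(x, y, n_iter=100):
    """EM for a two-component mixture of simple regressions, Gaussian working errors."""
    p, b1, b2, s = 0.5, 0.5, -0.5, 1.0
    for _ in range(n_iter):
        # E-step: posterior weight of component 1 given current slopes
        d1 = p * np.exp(-0.5 * ((y - b1 * x) / s) ** 2)
        d2 = (1.0 - p) * np.exp(-0.5 * ((y - b2 * x) / s) ** 2)
        w = d1 / (d1 + d2)
        # M-step: weighted least-squares slopes and pooled error scale
        p = w.mean()
        b1 = np.sum(w * x * y) / np.sum(w * x ** 2)
        b2 = np.sum((1.0 - w) * x * y) / np.sum((1.0 - w) * x ** 2)
        r = w * (y - b1 * x) ** 2 + (1.0 - w) * (y - b2 * x) ** 2
        s = np.sqrt(r.mean())
    return p, b1, b2

p_hat, b1_hat, b2_hat = em_mix_reg(x, y)
```

The semiparametric versions in the paper replace the Gaussian working density with a kernel estimate of the common error density.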
Scandinavian Journal of Statistics | 2002
Didier Chauveau; Pierre Vandekerkhove
The Hastings-Metropolis algorithm is a general MCMC method for sampling from a density known up to a constant. Geometric convergence of this algorithm has been proved under conditions on the instrumental (or proposal) distribution. We present an inhomogeneous Hastings-Metropolis algorithm for which the proposal density approximates the target density as the number of iterations increases. The proposal density at the nth step is a nonparametric estimate of the density of the algorithm and uses an increasing number of i.i.d. copies of the Markov chain. The resulting algorithm converges (in n) geometrically faster than a Hastings-Metropolis algorithm with any fixed proposal distribution. The case of a strictly positive density with compact support is presented first, then an extension to more general densities is given. We conclude by proposing a practical implementation of the algorithm and illustrate it on simulated examples.
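A minimal sketch of the inhomogeneous idea: an independence Hastings-Metropolis sampler whose proposal is refreshed, stage by stage, with a kernel density estimate built from the chain's own past samples. The paper instead builds the estimate from a growing number of i.i.d. copies of the chain; the target and all tuning constants below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def target(x):
    """Unnormalised bimodal target density (illustrative choice)."""
    return np.exp(-8.0 * (x - 0.3) ** 2) + 0.5 * np.exp(-8.0 * (x - 0.8) ** 2)

def kde_pdf(y, samples, h):
    """Gaussian kernel density estimate at the point y."""
    return np.mean(np.exp(-0.5 * ((y - samples) / h) ** 2)) / (h * np.sqrt(2.0 * np.pi))

def adaptive_mh(n_stages=5, stage_len=500, h=0.15):
    """Independence Hastings-Metropolis with a stage-wise KDE proposal."""
    x, store, out = 0.5, [0.5], []
    for _ in range(n_stages):
        samples = np.array(store)
        for _ in range(stage_len):
            # propose from the current KDE: resample a stored point, add kernel noise
            y = rng.choice(samples) + h * rng.standard_normal()
            # independence-sampler acceptance ratio with the KDE as proposal density
            ratio = (target(y) * kde_pdf(x, samples, h)) / (target(x) * kde_pdf(y, samples, h))
            if rng.random() < ratio:
                x = y
            out.append(x)
        store.extend(out[-stage_len:])   # enlarge the proposal's sample pool
    return np.array(out)

chain = adaptive_mh()
```

As the stored sample pool grows, the proposal tracks the target more closely, which is the mechanism behind the accelerated convergence proved in the paper.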
Mathematical Methods of Statistics | 2010
Laurent Bordes; Pierre Vandekerkhove
In this paper we consider a two-component mixture model, one component of which has a known distribution while the other is only known to be symmetric; the mixing proportion is also an unknown parameter of the model. This class of mixture models has proved useful for analyzing gene expression data from microarray experiments. We propose a general estimation method leading to a joint central limit result for all the estimators. Applications to basic testing problems related to this class of models are proposed, and the corresponding inference procedures are illustrated through simulation studies.
Bernoulli | 2014
Gersende Fort; Eric Moulines; Pierre Priouret; Pierre Vandekerkhove
Adaptive and interacting Markov chain Monte Carlo (MCMC) algorithms are a novel class of non-Markovian algorithms aimed at improving simulation efficiency for complicated target distributions. In this paper, we study a general (non-Markovian) simulation framework covering both the adaptive and interacting MCMC algorithms. We establish a central limit theorem for additive functionals of unbounded functions under a set of verifiable conditions, and identify the asymptotic variance. Our result extends all the results reported so far. An application to the interacting tempering algorithm (a simplified version of the equi-energy sampler) is presented to support our claims.
Communications in Statistics-theory and Methods | 2005
Laurent Bordes; Pierre Vandekerkhove
In this article we introduce a new missing data model, based on a standard parametric Hidden Markov Model (HMM), in which the latent Markov chain is observed from the time it reaches a fixed state until it leaves that state. We study, under mild conditions, the consistency and asymptotic normality of the maximum likelihood estimator. We also point out that the underlying Markov chain does not need to be ergodic, and that identifiability of the model is not tractable in a simple way (unlike for standard HMMs) but can be studied using various technical arguments.
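The observation scheme can be sketched by simulation: the latent label is revealed exactly while the chain sits in the distinguished state and hidden otherwise. The two-state chain, transition matrix and Gaussian emissions below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-state latent chain; state 0 plays the role of the "fixed state" whose
# visits are observed. Transition matrix is an illustrative choice.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
n = 200
s = np.zeros(n, dtype=int)
for t in range(1, n):
    s[t] = rng.choice(2, p=P[s[t - 1]])

y = s + 0.5 * rng.standard_normal(n)   # Gaussian emissions, illustrative
label = np.where(s == 0, 0, -1)        # -1 marks "latent state not observed"
```

The likelihood then mixes fully observed stretches (label 0) with standard HMM-style summation over the hidden stretches, which is what drives the identifiability discussion in the paper.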
Annals of Applied Probability | 2012
Pierre Tarrès; Pierre Vandekerkhove
A device has two arms with unknown deterministic payoffs, and the aim is to asymptotically identify the best one without spending too much time on the other. The Narendra algorithm offers a stochastic procedure to this end. We show under weak ergodic assumptions on these deterministic payoffs that the procedure eventually chooses the best arm (i.e. the one with the greatest Cesàro limit) with probability one, for appropriate step sequences of the algorithm. In the case of i.i.d. payoffs, this implies a “quenched” version of the “annealed” result.
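The scheme can be sketched as a learning automaton that tracks a probability of playing arm A and reinforces whichever arm just produced a payoff. The payoff sequences and the step sequence below are illustrative choices, not those analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

def narendra(payoff_a, payoff_b, gamma):
    """Narendra-type two-armed automaton with payoffs in [0, 1] (sketch)."""
    p = 0.5                                    # probability of playing arm A
    for n in range(len(payoff_a)):
        g = gamma(n + 1)
        if rng.random() < p:
            p += g * payoff_a[n] * (1.0 - p)   # arm A played: push p towards 1
        else:
            p -= g * payoff_b[n] * p           # arm B played: push p towards 0
    return p

# Deterministic periodic payoff sequences with Cesaro means 0.7 (arm A)
# and 0.3 (arm B); arm A is the better arm.
steps = 20000
a = (np.arange(steps) % 10 < 7).astype(float)
b = (np.arange(steps) % 10 < 3).astype(float)
p_final = narendra(a, b, gamma=lambda n: 1.0 / (10.0 + n ** 0.6))
```

With a slowly decreasing step sequence of this kind, the probability of playing the better arm drifts towards one, which is the almost-sure selection property proved in the paper.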
Scandinavian Journal of Statistics | 2006
Laurent Bordes; Céline Delmas; Pierre Vandekerkhove
Bernoulli | 2005
Pierre Vandekerkhove