Network


Latest external collaborations at the country level. Click on the dots to dive into the details.

Hotspot


Dive into the research topics where Stéphane Chrétien is active.

Publication


Featured research published by Stéphane Chrétien.


Journal of Computational and Graphical Statistics | 2001

A Component-Wise EM Algorithm for Mixtures

Gilles Celeux; Stéphane Chrétien; Florence Forbes; Abdallah Mkhadri

Maximum likelihood estimation in finite mixture distributions is typically approached as an incomplete data problem to allow application of the expectation-maximization (EM) algorithm. In its general formulation, the EM algorithm involves the notion of a complete data space, in which the observed measurements and incomplete data are embedded. An advantage is that many difficult estimation problems are facilitated when viewed in this way. One drawback is that the simultaneous update used by standard EM requires overly informative complete data spaces, which leads to slow convergence in some situations. In the incomplete data context, it has been shown that the use of less informative complete data spaces, or equivalently smaller missing data spaces, can lead to faster convergence without sacrificing simplicity. However, in the mixture case, little progress has been made in speeding up EM. In this article we propose a component-wise EM for mixtures. It uses, at each iteration, the smallest admissible missing data space by intrinsically decoupling the parameter updates. Monotonicity is maintained, although the estimated proportions may not sum to one during the course of the iteration. However, we prove that the mixing proportions will satisfy this constraint upon convergence. Our proof of convergence relies on the interpretation of our procedure as a proximal point algorithm. For performance comparison, we consider standard EM as well as two other algorithms based on missing data space reduction, namely the SAGE and AECME algorithms. We provide adaptations of these general procedures to the mixture case. We also consider the ECME algorithm, which is not a data augmentation scheme but still aims at accelerating EM. Our numerical experiments illustrate the advantages of the component-wise EM algorithm relative to these other methods.


IEEE Signal Processing Letters | 2010

An Alternating l_1 Approach to the Compressed Sensing Problem

Stéphane Chrétien

Compressed sensing is a new methodology for constructing sensors which allow sparse signals to be efficiently recovered using only a small number of observations. The recovery problem can often be stated as that of finding the solution of an underdetermined system of linear equations with the smallest possible support. The most studied relaxation of this hard combinatorial problem is the l1-relaxation, which consists of searching for solutions with smallest l1-norm. In this short note, based on ideas from Lagrangian duality, we introduce an alternating l1 relaxation for the recovery problem enjoying higher recovery rates in practice than the plain l1 relaxation and some recent improvements of it.


IEEE Transactions on Information Theory | 2000

Kullback proximal algorithms for maximum-likelihood estimation

Stéphane Chrétien; Alfred O. Hero

Accelerated algorithms for maximum-likelihood image reconstruction are essential for emerging applications such as three-dimensional (3-D) tomography, dynamic tomographic imaging, and other high-dimensional inverse problems. In this paper, we introduce and analyze a class of fast and stable sequential optimization methods for computing maximum-likelihood estimates and study its convergence properties. These methods are based on a proximal point algorithm implemented with the Kullback-Leibler (KL) divergence between posterior densities of the complete data as a proximal penalty function. When the proximal relaxation parameter is set to unity, one obtains the classical expectation-maximization (EM) algorithm. For a decreasing sequence of relaxation parameters, relaxed versions of EM are obtained which can have much faster asymptotic convergence without sacrifice of monotonicity. We present an implementation of the algorithm using Moré's (1983) trust region update strategy. For illustration, the method is applied to a nonquadratic inverse problem with Poisson distributed data.
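The role of the relaxation parameter in a KL-penalized proximal step can be illustrated on the probability simplex, where a one-gradient-step approximation of the proximal iteration reduces to a multiplicative (entropic mirror ascent) update. This is a toy stand-in for the idea, with an assumed objective, not the paper's image-reconstruction algorithm.

```python
import math

def kl_prox_steps(grad, x0, betas):
    """One-gradient-step approximation of the KL-proximal iteration
    x^{k+1} = argmax_x  f(x) - (1/beta_k) * KL(x || x^k)  on the simplex.
    A larger beta_k weakens the proximal penalty and lengthens the step,
    loosely mirroring how the relaxed EM variants trade penalty strength
    for speed."""
    x = list(x0)
    for b in betas:
        g = grad(x)
        # multiplicative update: exponentiate the gradient, renormalize
        w = [xi * math.exp(b * gi) for xi, gi in zip(x, g)]
        z = sum(w)
        x = [wi / z for wi in w]
    return x
```

With a concave objective such as f(p) = -||p - t||^2 for a target t inside the simplex, the iterates converge to t, since the gradient vanishes there and the update becomes a fixed point.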


Statistics & Probability Letters | 2003

Degeneracy in the maximum likelihood estimation of univariate Gaussian mixtures with EM

Christophe Biernacki; Stéphane Chrétien

As is well known, the likelihood of a Gaussian mixture is unbounded for any parameters that place a Dirac at an observed sample point. The behavior of the EM algorithm near a degenerate solution is studied. It is established that there exists a domain of attraction around degeneracy and that convergence to these particular solutions is extremely fast, confirming what many practitioners have noted in their experiments. Some available proposals to avoid degeneracy are discussed, but the presented convergence results make it possible to defend the pragmatic approach to the degeneracy problem in EM which consists of random restarts.
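The unboundedness is easy to see numerically: put a near-Dirac component on one observed point and let its variance shrink. The snippet below (an illustration with arbitrary toy data, not the paper's experiments) shows the log-likelihood growing without bound.

```python
import math

def mixture_loglik(x, pi, mu, var):
    """Log-likelihood of a K-component 1-D Gaussian mixture."""
    ll = 0.0
    for v in x:
        dens = sum(p / math.sqrt(2 * math.pi * s)
                   * math.exp(-(v - m) ** 2 / (2 * s))
                   for p, m, s in zip(pi, mu, var))
        ll += math.log(dens)
    return ll

# Component 1 sits exactly on the observed point x = 0; as its variance
# shrinks, the density spike at that point drives the likelihood to +inf,
# which is the degeneracy discussed in the abstract.
data = [0.0, 0.9, 2.1, 3.2]
lls = [mixture_loglik(data, [0.5, 0.5], [0.0, 2.0], [eps, 1.0])
       for eps in (1.0, 1e-2, 1e-4)]
```

The remaining points stay covered by the broad second component, so nothing offsets the spike: the likelihood increases monotonically as eps decreases.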


IEEE Transactions on Information Theory | 2014

Sparse Recovery With Unknown Variance: A LASSO-Type Approach

Stéphane Chrétien; Sébastien Darses

We address the issue of estimating the regression vector β in the generic s-sparse linear model y = Xβ + z, with β ∈ ℝ<sup>p</sup>, y ∈ ℝ<sup>n</sup>, z ~ N(0, σ<sup>2</sup>I), and p > n, when the variance σ<sup>2</sup> is unknown. We study two least absolute shrinkage and selection operator (LASSO)-type methods that jointly estimate β and the variance. These estimators are minimizers of the l1 penalized least-squares functional, where the relaxation parameter is tuned according to two different strategies. In the first strategy, the relaxation parameter is of the order σ̂√log p, where σ̂<sup>2</sup> is the empirical variance. In the second strategy, the relaxation parameter is chosen so as to enforce a tradeoff between the fidelity and the penalty terms at optimality. For both estimators, our assumptions are similar to the ones proposed by Candès and Plan in Ann. Stat. (2009) for the case where σ<sup>2</sup> is known. We prove that our estimators ensure exact recovery of the support and sign pattern of β with high probability. We present simulation results showing that the first estimator enjoys nearly the same performance in practice as the standard LASSO (known-variance case) for a wide range of signal-to-noise ratios. Our second estimator is shown to outperform both in terms of false detection when the signal-to-noise ratio is low.
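The first strategy, a penalty of order σ̂√log p fed back from the residual variance, can be sketched in the drastically simplified orthogonal-design model y = β + z, where the lasso step reduces to soft-thresholding. The general p > n design of the paper is not treated here; all dimensions and constants below are illustrative assumptions.

```python
import math, random

def lasso_unknown_variance(y, iters=10):
    """Joint (beta, sigma) estimation in the toy model y = beta + z,
    z ~ N(0, sigma^2 I): alternate a lasso step -- soft-thresholding at
    sigma_hat * sqrt(2 log p), echoing the paper's lambda of order
    sigma_hat * sqrt(log p) -- with a residual-based variance update."""
    p = len(y)
    sigma = math.sqrt(sum(v * v for v in y) / p)   # inflated first guess
    beta = [0.0] * p
    for _ in range(iters):
        t = sigma * math.sqrt(2 * math.log(p))
        beta = [math.copysign(max(abs(v) - t, 0.0), v) for v in y]
        sigma = math.sqrt(sum((v - b) ** 2 for v, b in zip(y, beta)) / p)
    return beta, sigma
```

The first variance guess is inflated by the signal, so the first threshold is conservative; as the residual shrinks the threshold settles near the noise level and the support stabilizes.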


Computational Statistics & Data Analysis | 2010

Mixtures of GAMs for habitat suitability analysis with overdispersed presence/absence data

David Pleydell; Stéphane Chrétien

A new approach to species distribution modelling based on unsupervised classification via a finite mixture of GAMs incorporating habitat suitability curves is proposed. A tailored EM algorithm is outlined for computing maximum likelihood estimates. Several submodels incorporating various parameter constraints are explored. Simulation studies confirm that, under certain constraints, the habitat suitability curves are recovered with good precision. The method is also applied to a set of real data concerning the presence/absence of observable small-mammal indices collected on the Tibetan plateau. The resulting classification was found to correspond to species-level differences in habitat preference described in previous ecological work.


IEEE Conference on Prognostics and Health Management | 2015

A post-prognostics decision approach to optimize the commitment of fuel cell systems in stationary applications

Stéphane Chrétien; Nathalie Herr; Jean-Marc Nicod; Christophe Varnier

The use of fuel cells appears to be of growing interest as a potential alternative to conventional power systems. Fuel cell systems suffer, however, from insufficient durability, and their lifetime needs to be improved. Prognostics results, in the form of Remaining Useful Life estimates, are used in a Prognostics and Health Management (PHM) framework to maximize the global useful life of a multi-stack fuel cell system under a service constraint. Convex optimization is used to define the contribution of each stack to a globally needed power output. A Mirror-Prox for Saddle Points method is proposed to cope with the assignment problem. The resolution method is detailed and promising simulation results are provided.
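The "contribution of each stack" step can be illustrated with a minimal equality-constrained convex allocation: split a demanded output P across stacks with (assumed) quadratic ageing costs, solved in closed form via the Lagrange conditions. This toy does not model the paper's Mirror-Prox saddle-point method or the Remaining Useful Life constraints.

```python
def allocate_power(a, P):
    """Solve min sum_i a_i * x_i^2  s.t.  sum_i x_i = P in closed form.

    At the optimum the marginal costs 2 * a_i * x_i are equal across
    stacks (shared multiplier lam), so cheaper-to-age stacks take more
    of the load."""
    lam = 2.0 * P / sum(1.0 / ai for ai in a)   # shared marginal cost
    return [lam / (2.0 * ai) for ai in a]

# e.g. allocate_power([1.0, 2.0, 4.0], 7.0) -> [4.0, 2.0, 1.0]
```

The returned allocation sums to P exactly, and equal marginal costs certify optimality (KKT conditions of this convex program).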


IEEE Transactions on Information Theory | 2017

Sensing Tensors With Gaussian Filters

Stéphane Chrétien; Tianwen Wei

Sparse recovery from linear Gaussian measurements has been the subject of much investigation since the breakthrough papers of Candès et al. and Donoho on compressed sensing. Application to sparse vectors and sparse matrices via least squares penalized with sparsity-promoting norms is now well understood thanks to tools such as the Gaussian mean width, the statistical dimension, and the notion of descent cones. Extension of these ideas to low-rank tensor recovery is starting to enjoy considerable interest due to its many potential applications to independent component analysis, hidden Markov models, Gaussian mixture models, and hyperspectral image analysis, to name a few. In this paper, we demonstrate that the recent approach of Vershynin provides useful error bounds in the tensor setting with the nuclear norm or the Romera-Paredes–Pontil penalization.


Archive | 2002

Using the eigenvalue relaxation for binary least-squares estimation problems

Stéphane Chrétien; Franck Corset

The goal of this paper is to present an application to binary image least-squares estimation of some recent results in the semidefinite programming approximation theory of combinatorial problems, due to Goemans and Williamson, Yu. Nesterov, and others. In particular, we show in a very simple fashion that a good suboptimal solution may be obtained via eigenvalue optimization.
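A bare-bones cousin of this eigenvalue-relaxation idea: for min ||Ax - y||^2 over x in {-1,+1}^n, homogenize with an extra ±1 coordinate t so the objective becomes z^T Q z, extract the eigenvector of Q for its smallest eigenvalue by shifted power iteration, and sign-round. This is a minimal spectral sketch with toy dimensions, not the paper's algorithm.

```python
import math, random

def binary_ls_spectral(A, y, iters=20000):
    """Spectral relaxation for binary least squares (assumes each job of
    the rounding is well posed, i.e. A has full column rank)."""
    m, n = len(A), len(A[0])
    # homogenization: ||A x - t y||^2 = z^T (B^T B) z with z = [x; t]
    B = [row + [-yi] for row, yi in zip(A, y)]
    d = n + 1
    Q = [[sum(B[i][r] * B[i][c] for i in range(m)) for c in range(d)]
         for r in range(d)]
    shift = max(sum(abs(q) for q in row) for row in Q)   # Gershgorin bound
    S = [[(shift if r == c else 0.0) - Q[r][c] for c in range(d)]
         for r in range(d)]            # S = shift*I - Q is PSD
    rng = random.Random(0)
    z = [rng.gauss(0, 1) for _ in range(d)]
    for _ in range(iters):             # top eigvec of S = bottom eigvec of Q
        z = [sum(S[r][c] * z[c] for c in range(d)) for r in range(d)]
        nz = math.sqrt(sum(v * v for v in z))
        z = [v / nz for v in z]
    # round to signs, fixing the homogenization variable t = +1
    return [1 if v * z[-1] > 0 else -1 for v in z[:n]]
```

In the noiseless case the homogenized quadratic has a one-dimensional null space spanned by [x0; 1], so the rounded eigenvector recovers the binary vector exactly.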


International Conference on Parallel Processing | 2012

Job Scheduling Using Successive Linear Programming Approximations of a Sparse Model

Stéphane Chrétien; Jean-Marc Nicod; Laurent Philippe; Veronika Rehn-Sonigo; Lamiel Toch

In this paper we tackle the well-known problem of scheduling a collection of parallel jobs on a set of processors, either in a cluster or in a multiprocessor computer. For the makespan objective, i.e., the completion time of the last job, this problem has been shown to be NP-hard, and several heuristics have already been proposed to minimize the execution time. We introduce a novel approach based on successive linear programming (LP) approximations of a sparse model. The idea is to relax an integer linear program and use lp-norm-based operators to force the solver to find almost-integer solutions that can be assimilated to an integer solution. We consider the case where jobs are either rigid or moldable. A rigid parallel job runs on a predefined number of processors, while a moldable job can choose the number of processors it uses just before it starts its execution. We compare the scheduling approach with the classic Largest Task First list-based algorithm and show that our approach provides good results for small instances of the problem. The contributions of this paper are both the integration of mathematical methods into the scheduling world and the design of a promising approach which gives good results for scheduling problems with fewer than a hundred processors.
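The Largest Task First baseline mentioned above can be sketched as list scheduling of rigid jobs: order jobs by size (here taken to mean processor requirement, an assumption) and, at each completion event, greedily start every pending job that fits. The LP-relaxation method of the paper is not reproduced here.

```python
import heapq

def list_schedule(jobs, P):
    """Largest-first list scheduling of RIGID parallel jobs on P identical
    processors; jobs = [(procs, duration), ...] with procs <= P.
    Returns the makespan."""
    pending = sorted(jobs, key=lambda j: -j[0])   # widest jobs first
    free, t, running = P, 0.0, []                 # running: (end, procs) heap
    makespan = 0.0
    while pending or running:
        # greedily start every pending job that currently fits
        still = []
        for procs, dur in pending:
            if procs <= free:
                free -= procs
                heapq.heappush(running, (t + dur, procs))
                makespan = max(makespan, t + dur)
            else:
                still.append((procs, dur))
        pending = still
        if running:                               # advance to next completion
            t, procs = heapq.heappop(running)
            free += procs
            while running and running[0][0] == t: # release simultaneous jobs
                _, q = heapq.heappop(running)
                free += q
    return makespan
```

On 4 processors with jobs (3,3), (2,2), (2,2), (1,3), the 3-wide and 1-wide jobs run first, the two 2-wide jobs follow at t = 3, giving a makespan of 5.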

Collaboration


Dive into Stéphane Chrétien's collaborations.

Top Co-Authors


Christophe Guyeux

University of Franche-Comté


Tianwen Wei

University of Franche-Comté

Sébastien Darses

University of Franche-Comté


Franck Corset

Centre national de la recherche scientifique
