Publication


Featured research published by Gersende Fort.


Bioinformatics | 2005

Classification using partial least squares with penalized logistic regression

Gersende Fort; Sophie Lambert-Lacroix

MOTIVATION One important aspect of data-mining of microarray data is to discover the molecular variation among cancers. In microarray studies, the number n of samples is relatively small compared to the number p of genes per sample (usually in thousands). It is known that standard statistical methods in classification are efficient (i.e. in the present case, yield successful classifiers) particularly when n is (far) larger than p. This naturally calls for the use of a dimension reduction procedure together with the classification one. RESULTS In this paper, the question of classification in such a high-dimensional setting is addressed. We view the classification problem as a regression one with few observations and many predictor variables. We propose a new method combining partial least squares (PLS) and Ridge penalized logistic regression. We review the existing methods based on PLS and/or penalized likelihood techniques, outline their interest in some cases and theoretically explain their sometimes poor behavior. Our procedure is compared with these other classifiers. The predictive performance of the resulting classification rule is illustrated on three data sets: Leukemia, Colon and Prostate.
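The combination described above — reduce the thousands of gene predictors to a few PLS components, then fit a ridge-penalized logistic regression on those components — can be illustrated with a short numpy sketch. This is a simplified illustration of the general idea, not the authors' exact algorithm; the synthetic data and the helper names `pls_components` and `ridge_logistic` are made up for the example.

```python
import numpy as np

def pls_components(X, y, n_comp=2):
    """Extract PLS score vectors (NIPALS-style, single response).
    Simplified sketch, not the exact procedure of the paper."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    T = np.zeros((X.shape[0], n_comp))
    Xk = X.copy()
    for k in range(n_comp):
        w = Xk.T @ y                      # weight vector from covariance with y
        w /= np.linalg.norm(w)
        t = Xk @ w                        # score vector
        T[:, k] = t
        p = Xk.T @ t / (t @ t)
        Xk = Xk - np.outer(t, p)          # deflate X before the next component
    return T

def ridge_logistic(T, y, lam=1.0, n_iter=50):
    """Ridge-penalized logistic regression on the PLS scores (Newton steps)."""
    n, d = T.shape
    Z = np.hstack([np.ones((n, 1)), T])   # add an intercept column
    beta = np.zeros(d + 1)
    pen = lam * np.eye(d + 1)
    pen[0, 0] = 0.0                       # do not penalize the intercept
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-Z @ beta))
        W = mu * (1.0 - mu)
        grad = Z.T @ (y - mu) - pen @ beta
        H = (Z * W[:, None]).T @ Z + pen
        beta = beta + np.linalg.solve(H, grad)
    return beta

rng = np.random.default_rng(0)
n, p = 40, 500                            # few samples, many "genes"
y = rng.integers(0, 2, size=n).astype(float)
X = rng.normal(size=(n, p))
X[:, :5] += 2.0 * y[:, None]              # 5 informative predictors

T = pls_components(X, y, n_comp=2)
beta = ridge_logistic(T, y, lam=1.0)
prob = 1.0 / (1.0 + np.exp(-(np.hstack([np.ones((n, 1)), T]) @ beta)))
acc = np.mean((prob > 0.5) == (y == 1))
print(f"training accuracy: {acc:.2f}")
```

The ridge penalty keeps the logistic fit well defined even when the reduced scores separate the classes perfectly, which is the failure mode of unpenalized logistic regression in this small-n setting.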


Annals of Statistics | 2011

Convergence of adaptive and interacting Markov chain Monte Carlo algorithms

Gersende Fort; Eric Moulines; Pierre Priouret

Adaptive and interacting Markov chain Monte Carlo (MCMC) algorithms have recently been introduced in the literature. These novel simulation algorithms are designed to increase simulation efficiency when sampling complex distributions. Motivated by some recently introduced algorithms (such as the adaptive Metropolis algorithm and the interacting tempering algorithm), we develop a general methodological and theoretical framework to establish both the convergence of the marginal distribution and a strong law of large numbers. This framework weakens the conditions introduced in the pioneering paper by Roberts and Rosenthal [J. Appl. Probab. 44 (2007) 458-475]. It also covers the case when the target distribution π is sampled by using Markov transition kernels with a stationary distribution that differs from π.


Physical Review D | 2009

Estimation of cosmological parameters using adaptive importance sampling

Darren Wraith; Martin Kilbinger; K. Benabed; Olivier Cappé; Jean-François Cardoso; Gersende Fort; S. Prunet; Christian P. Robert

We present a Bayesian sampling algorithm called adaptive importance sampling or population Monte Carlo (PMC), whose computational workload is easily parallelizable and thus has the potential to considerably reduce the wall-clock time required for sampling, along with providing other benefits. To assess the performance of the approach for cosmological problems, we use simulated and actual data consisting of CMB anisotropies, supernovae of type Ia, and weak cosmological lensing, and provide a comparison of results to those obtained using state-of-the-art Markov chain Monte Carlo (MCMC). For both types of data sets, we find comparable parameter estimates for PMC and MCMC, with the advantage of a significantly lower wall-clock time for PMC. In the case of WMAP5 data, for example, the wall-clock time scale reduces from days for MCMC to hours using PMC on a cluster of processors. Other benefits of the PMC approach, along with potential difficulties in using the approach, are analyzed and discussed.


IEEE Transactions on Information Theory | 2013

Performance of a Distributed Stochastic Approximation Algorithm

Pascal Bianchi; Gersende Fort; Walid Hachem

In this paper, a distributed stochastic approximation algorithm is studied. Applications of such algorithms include decentralized estimation, optimization, control or computing. The algorithm consists of two steps: a local step, where each node in a network updates a local estimate using a stochastic approximation algorithm with decreasing step size, and a gossip step, where a node computes a local weighted average between its estimates and those of its neighbors. Convergence of the estimates toward a consensus is established under weak assumptions. The approach relies on two main ingredients: the existence of a Lyapunov function for the mean field in the agreement subspace, and a contraction property of the random matrices of weights in the subspace orthogonal to the agreement subspace. A second-order analysis of the algorithm is also performed in the form of a central limit theorem. The Polyak-averaged version of the algorithm is also considered.


Archive | 2011

Bayesian Time Series Models: Adaptive Markov chain Monte Carlo: theory and methods

Yves F. Atchadé; Gersende Fort; Eric Moulines; Pierre Priouret

In general, the transition probability P of the Markov chain depends on some tuning parameter θ defined on some space Θ, which can be either finite dimensional or infinite dimensional. The success of the MCMC procedure depends crucially upon a proper choice of θ. To illustrate, consider the standard Metropolis-Hastings (MH) algorithm. For simplicity, we assume that π has a density, also denoted by π, with respect to the Lebesgue measure on X = R endowed with its Borel σ-field X. Given that the chain is at x, a candidate y is sampled from a proposal transition density q(x, ·) and is accepted with probability α(x, y) defined as α(x, y) = min(1, π(y)q(y, x) / (π(x)q(x, y))).


Bernoulli | 2010

Limit theorems for some adaptive MCMC algorithms with subgeometric kernels

Yves F. Atchadé; Gersende Fort

This paper deals with the ergodicity and the existence of a strong law of large numbers for adaptive Markov chain Monte Carlo. We show that a diminishing adaptation assumption together with a drift condition for positive recurrence is enough to imply ergodicity. Strengthening the drift condition to a polynomial drift condition yields a strong law of large numbers for possibly unbounded functions. These results broaden considerably the class of adaptive MCMC algorithms for which rigorous analysis is now possible. As an example, we give a detailed analysis of the adaptive Metropolis algorithm of Haario et al. (2001) when the target distribution is sub-exponential in the tails.


International Conference on Acoustics, Speech, and Signal Processing | 2011

Convergence of a distributed parameter estimator for sensor networks with local averaging of the estimates

Pascal Bianchi; Gersende Fort; Walid Hachem; Jérémie Jakubowicz


Electronic Journal of Statistics | 2013

Online Expectation Maximization based algorithms for inference in Hidden Markov Models

Sylvain Le Corff; Gersende Fort


Annals of Operations Research | 2011

On adaptive stratification

Pierre Etore; Gersende Fort; Benjamin Jourdain; Eric Moulines


IEEE Journal of Selected Topics in Signal Processing | 2016

A Shrinkage-Thresholding Metropolis Adjusted Langevin Algorithm for Bayesian Variable Selection

Amandine Schreck; Gersende Fort; Sylvain Le Corff; Eric Moulines
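The adaptive Metropolis algorithm of Haario et al. (2001), analyzed in several of the papers above, adapts the covariance of a Gaussian random-walk proposal using the empirical covariance of the chain so far. Below is a minimal numpy sketch, assuming a correlated Gaussian target; the function `adaptive_metropolis` and the target are invented for illustration, and the running-covariance update with the classical 2.4²/d scaling is the textbook recipe rather than the exact algorithm or conditions of any of these papers.

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_samples=5000, eps=1e-6, seed=0):
    """Illustrative adaptive Metropolis sampler: the Gaussian proposal
    covariance tracks the running covariance of the chain (diminishing
    adaptation, since updates shrink at rate 1/n)."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    sd = 2.4 ** 2 / d                         # classical scaling factor
    x = np.asarray(x0, dtype=float)
    lp = log_target(x)
    mean = x.copy()
    cov = np.eye(d)
    chain = np.empty((n_samples, d))
    for n in range(1, n_samples + 1):
        prop_cov = sd * (cov + eps * np.eye(d))   # small jitter keeps it PSD
        y = rng.multivariate_normal(x, prop_cov)
        lpy = log_target(y)
        # MH acceptance for a symmetric proposal: ratio of target densities
        if np.log(rng.random()) < lpy - lp:
            x, lp = y, lpy
        chain[n - 1] = x
        # running mean / covariance of the chain (Welford-style update)
        delta = x - mean
        mean += delta / (n + 1)
        cov += (np.outer(delta, x - mean) - cov) / (n + 1)
    return chain

# Target: zero-mean bivariate Gaussian with strongly correlated components,
# a case where a fixed isotropic proposal mixes poorly.
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)
log_target = lambda x: -0.5 * x @ Sigma_inv @ x

chain = adaptive_metropolis(log_target, x0=np.zeros(2), n_samples=20000)
print("sample mean:", chain[10000:].mean(axis=0))
```

The 1/n step size in the mean and covariance updates is exactly the "diminishing adaptation" that the ergodicity results above require: the transition kernel keeps changing, but by less and less as the chain runs.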

Collaboration


Dive into Gersende Fort's collaborations.
