Publication


Featured research published by Elisabeth Gassiat.


International Conference on Acoustics, Speech, and Signal Processing | 1997

Maximum likelihood for blind separation and deconvolution of noisy signals using mixture models

Eric Moulines; Jean-François Cardoso; Elisabeth Gassiat

An approximate maximum likelihood method for blind source separation and deconvolution of noisy signals is proposed. This technique relies upon a data augmentation scheme, where the (unobserved) inputs are viewed as the missing data. In the technique described, the input signal distribution is modeled by a mixture of Gaussian distributions, enabling the use of explicit formulas for computing the posterior density and conditional expectations and thus avoiding Monte Carlo integrations. Because this technique is able to capture some salient features of the input signal distribution, it generally performs much better than third-order or fourth-order cumulant-based techniques.
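The explicit formulas mentioned above are standard Gaussian-mixture conditioning. A minimal scalar sketch, assuming a model y = x + noise with x drawn from a Gaussian mixture (illustrative names and values, not the paper's notation or full EM algorithm):

```python
import numpy as np
from scipy.stats import norm

def posterior_mean(y, weights, means, vars_, noise_var):
    """E[x | y] when x ~ Gaussian mixture and y = x + Gaussian noise.

    Everything is in closed form -- no Monte Carlo integration needed,
    which is the point the abstract makes.
    """
    tot = np.asarray(vars_) + noise_var            # marginal variance of y per component
    resp = np.asarray(weights) * norm.pdf(y, means, np.sqrt(tot))
    resp /= resp.sum()                             # posterior component probabilities
    # per-component posterior mean of x given y (standard Gaussian conditioning)
    cond_means = np.asarray(means) + (np.asarray(vars_) / tot) * (y - np.asarray(means))
    return float(resp @ cond_means)

# Example: a two-component prior, mildly noisy observation
print(posterior_mean(0.8, [0.5, 0.5], [-1.0, 1.0], [0.1, 0.1], 0.2))
```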


Bernoulli | 1997

The estimation of the order of a mixture model

Didier Dacunha-Castelle; Elisabeth Gassiat

We propose a new method to estimate the number of different populations when a large sample of a mixture of these populations is observed. It is possible to define the number of different populations as the number of points in the support of the mixing distribution. For discrete distributions having a finite support, the number of support points can be characterized by Hankel matrices of the first algebraic moments, or Toeplitz matrices of the trigonometric moments. Namely, for one-dimensional distributions, the cardinality of the support may be proved to be the least integer such that the Hankel matrix (or the Toeplitz matrix) degenerates. Our estimator is based on this property. We first prove the convergence of the estimator, and then its exponential convergence under wide assumptions. The number of populations is not a priori bounded. Our method applies to a large number of models such as translation mixtures with known or unknown variance, scale mixtures, exponential families and various multivariate models. The method has an obvious computational advantage since it avoids any computation of estimates of the mixing parameters. Finally, we give some numerical examples to illustrate the effectiveness of the method in the most popular cases.
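The Hankel characterization translates almost directly into an estimator. Here is a minimal sketch for a one-dimensional discrete mixing distribution, declaring degeneracy via a singular-value threshold; the paper's actual criterion uses a more careful penalization of the empirical moments.

```python
import numpy as np

def hankel_order_estimate(moments, tol=1e-3):
    """Estimate the number of support points of a discrete mixing
    distribution from its algebraic moments m_0 .. m_{2K}.

    The support has cardinality k iff the (k+1)x(k+1) Hankel matrix
    H_k = [m_{i+j}] is the first to degenerate; degeneracy is declared
    here via the smallest singular value (a crude stand-in for the
    paper's penalized criterion).
    """
    K = (len(moments) - 1) // 2
    for k in range(1, K + 1):
        H = np.array([[moments[i + j] for j in range(k + 1)] for i in range(k + 1)])
        if np.linalg.svd(H, compute_uv=False)[-1] < tol:
            return k          # H_k degenerates: k support points
    return K                  # no degeneracy detected up to order K

# Example: mixing distribution with 2 support points at 0 and 1, weights 0.3/0.7
support, w = np.array([0.0, 1.0]), np.array([0.3, 0.7])
m = [float((w * support**j).sum()) for j in range(7)]   # exact moments m_0..m_6
print(hankel_order_estimate(m))   # -> 2
```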


IEEE Transactions on Information Theory | 2003

Optimal error exponents in hidden Markov models order estimation

Elisabeth Gassiat; Stéphane Boucheron

We consider the estimation of the number of hidden states (the order) of a discrete-time finite-alphabet hidden Markov model (HMM). The estimators we investigate are related to code-based order estimators: penalized maximum-likelihood (ML) estimators and penalized versions of the mixture estimator introduced by Liu and Narayan (1994). We prove strong consistency of those estimators, without assuming any a priori upper bound on the order and with smaller penalties than in previous works. We prove a version of Stein's lemma for HMM order estimation and derive an upper bound on underestimation exponents. We then prove that this upper bound can be achieved by the penalized ML estimator and by the penalized mixture estimator. The proof of the latter result gets around the elusive nature of the ML in HMMs by resorting to large-deviation techniques for empirical processes. Finally, we prove that for any consistent HMM order estimator, for most HMMs, the overestimation exponent is null.
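As a rough illustration of penalized-ML order selection (with a generic BIC-style penalty, not the paper's refined penalties), the sketch below scores candidate orders on a discrete observation sequence. It assumes the hmmlearn package and its CategoricalHMM class are available; since EM is only a local optimizer, one would restart it from several initializations in practice.

```python
import numpy as np
from hmmlearn.hmm import CategoricalHMM   # assumed dependency (hmmlearn >= 0.3)

def select_hmm_order(obs, max_order=5, alphabet_size=2):
    """Pick the HMM order by penalized maximum likelihood.

    Penalty: (#free parameters)/2 * log n -- a BIC-style choice; the paper
    shows consistency holds even for smaller penalties.
    """
    n = len(obs)
    X = np.asarray(obs).reshape(-1, 1)
    best_k, best_score = 1, -np.inf
    for k in range(1, max_order + 1):
        model = CategoricalHMM(n_components=k, n_iter=200, random_state=0).fit(X)
        dim = k * (k - 1) + k * (alphabet_size - 1)   # transitions + emissions
        score = model.score(X) - 0.5 * dim * np.log(n)
        if score > best_score:
            best_k, best_score = k, score
    return best_k
```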


IEEE Transactions on Signal Processing | 1997

Source separation when the input sources are discrete or have constant modulus

Fabrice Gamboa; Elisabeth Gassiat

In this paper, we present a new method for the source separation problem when some prior information on the input sources is available. More specifically, we study the situation where the distributions of the input signals are discrete or are concentrated on a circle. The method is based on elementary properties of Hankel forms and on the divisibility of Gaussian distributions. In both situations, we prove that the estimator converges in the absence of noise, or when the first moments of the noise are known up to its scale. Moreover, in the absence of noise, the estimator converges using only a finite number of observations.
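To see why discreteness is so powerful in the noise-free case, consider a toy two-source example (this exploits the finite alphabet directly and is not the paper's Hankel-form estimator): the observations can take only four distinct values, from which the mixing matrix is recovered, up to sign and permutation, from finitely many samples.

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.choice([-1.0, 1.0], size=(2, 200))     # two binary (+/-1) sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])         # unknown mixing matrix
X = A @ S                                      # noise-free observations

# Noise-free, discrete inputs: the observations take only the 4 values
#   +/- a1 +/- a2, where a1, a2 are the columns of A
# (all 4 sign patterns appear with overwhelming probability in 200 draws).
pts = np.unique(X.T, axis=0)
p = pts[0]
# q: any point that is neither p nor -p; then p and q differ in exactly one sign
q = next(r for r in pts if not np.allclose(r, p) and not np.allclose(r, -p))
a1_hat, a2_hat = (p + q) / 2, (p - q) / 2      # columns of A up to sign/permutation
print(a1_hat, a2_hat)
```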


Electronic Journal of Statistics | 2009

A Bernstein-von Mises theorem for discrete probability distributions

Stéphane Boucheron; Elisabeth Gassiat

We investigate the asymptotic normality of the posterior distribution in the discrete setting, when the model dimension increases with the sample size. We consider a probability mass function $\theta_0$ on $\mathbb{N}\setminus\{0\}$ and a sequence of truncation levels $(k_n)_n$ satisfying $k_n^3 \leq n \inf_{i \leq k_n} \theta_0(i)$. Let $\hat\theta_n$ denote the maximum likelihood estimate of $(\theta_0(i))_{i \leq k_n}$ and let $\Delta_n(\theta_0)$ denote the $k_n$-dimensional vector whose $i$-th coordinate is $\sqrt{n}\,(\hat\theta_n(i) - \theta_0(i))$ for $1 \leq i \leq k_n$. We check that, under mild conditions on $\theta_0$ and on the sequence of prior probabilities on the $k_n$-dimensional simplices, the variation distance between the posterior distribution recentered around $\hat\theta_n$ and rescaled by $\sqrt{n}$ and the $k_n$-dimensional Gaussian distribution $\mathcal{N}(\Delta_n(\theta_0), I^{-1}(\theta_0))$ converges in probability to 0. This theorem can be used to prove the asymptotic normality of Bayesian estimators of Shannon and Rényi entropies. The proofs are based on concentration inequalities for centered and noncentered chi-square (Pearson) statistics. The latter allow us to establish posterior concentration rates with respect to the Fisher distance rather than the Hellinger distance, as is commonplace in nonparametric Bayesian statistics.
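The theorem's content can be checked numerically in a fixed-dimension toy case (the paper's setting lets the dimension $k_n$ grow with $n$): with a Dirichlet prior the posterior is conjugate, and the spread of the recentered, rescaled posterior should match the Gaussian limit. All parameter choices below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 5, 20000
theta0 = np.full(k, 1.0 / k)
counts = rng.multinomial(n, theta0)
theta_hat = counts / n                      # maximum likelihood estimate

# Posterior under a Dirichlet(1/2, ..., 1/2) prior is Dirichlet(counts + 1/2)
post = rng.dirichlet(counts + 0.5, size=20000)
centered = np.sqrt(n) * (post - theta_hat)  # recentered, rescaled posterior draws

# BvM predicts a Gaussian limit with multinomial covariance diag(th) - th th^T,
# so each marginal sd should be close to sqrt(th_i (1 - th_i))
print("posterior sd :", centered.std(axis=0))
print("Gaussian sd  :", np.sqrt(theta_hat * (1 - theta_hat)))
```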


International Symposium on Information Theory | 1998

MEM pixel correlated solutions for generalized moment and interpolation problems

Imre Csiszár; Fabrice Gamboa; Elisabeth Gassiat

In generalized moment problems, (signed) measures are sought to fit given observations, or continuous functions are sought to fit given constraints. Known convex methods for solving such problems, and their stochastic interpretations via maximum entropy on the mean (MEM) and in a Bayesian sense, are reviewed, with some improvements on previous results. The MEM and Bayesian approaches are then extended to default models with a dependence structure, yielding new families of solutions. One family involves a transfer kernel and allows the use of prior information such as modality, convexity, or Sobolev norms. Another family of solutions, with possibly nonconvex criteria, is arrived at using default models with exchangeable random variables. The main technical tools are convex analysis and large-deviations theory.
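A minimal instance of the convex machinery being reviewed: maximum entropy relative to a uniform default on a grid, under two moment constraints, solved through the dual log-partition function. The domain, constraints, and grid size are arbitrary choices for illustration.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 1.0, 201)              # discretized domain
target = np.array([0.3, 0.15])              # constraints: E[x], E[x^2]
F = np.vstack([x, x**2])                    # moment functions

def dual(lmbda):
    # Log-partition of the exponential-family solution p ~ exp(lambda . f(x));
    # minimizing  log Z - lambda . target  enforces the moment constraints.
    return np.log(np.exp(lmbda @ F).sum()) - lmbda @ target

res = minimize(dual, x0=np.zeros(2))
p = np.exp(res.x @ F); p /= p.sum()          # the max-entropy solution on the grid
print("achieved moments:", F @ p)            # ~ [0.3, 0.15]
```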


IEEE Transactions on Information Theory | 2009

Coding on Countably Infinite Alphabets

Stéphane Boucheron; Aurélien Garivier; Elisabeth Gassiat

This paper describes universal lossless coding strategies for compressing sources on countably infinite alphabets. Classes of memoryless sources defined by an envelope condition on the marginal distribution provide benchmarks for coding techniques originating from the theory of universal coding over finite alphabets. We prove general upper bounds on minimax regret and lower bounds on minimax redundancy for such source classes. The general upper bounds emphasize the role of the normalized maximum likelihood (NML) codes with respect to minimax regret in the infinite alphabet context. Lower bounds are derived by tailoring sharp bounds on the redundancy of Krichevsky-Trofimov coders for sources over finite alphabets. Up to logarithmic (resp., constant) factors the bounds are matching for source classes defined by algebraically declining (resp., exponentially vanishing) envelopes. Effective and (almost) adaptive coding techniques are described for the collection of source classes defined by algebraically vanishing envelopes. Those results extend our knowledge concerning universal coding to contexts where the key tools from parametric inference are known to fail.
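The Krichevsky-Trofimov coder from which the lower bounds are tailored admits a one-line sequential probability assignment. The sketch below computes the ideal KT codelength over a finite alphabet; the envelope-adaptive codes of the paper build on this finite-alphabet primitive but are not implemented here.

```python
import numpy as np

def kt_codelength(seq, alphabet_size):
    """Ideal codelength (bits) of the Krichevsky-Trofimov mixture code
    for a sequence over the finite alphabet {0, ..., m-1}.

    KT assigns  P(x_{t+1} = a | past) = (n_a + 1/2) / (t + m/2),
    where n_a counts past occurrences of symbol a.
    """
    m = alphabet_size
    counts = np.zeros(m)
    bits = 0.0
    for t, a in enumerate(seq):
        bits -= np.log2((counts[a] + 0.5) / (t + m / 2))
        counts[a] += 1
    return bits

# For a uniform 4-ary source, the excess over n * 2 bits should be roughly
# ((m-1)/2) * log2(n), i.e. about 18 bits here, plus a constant
seq = np.random.default_rng(0).integers(0, 4, size=4096)
print(kt_codelength(seq, 4) - 4096 * 2.0)
```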


IEEE Transactions on Information Theory | 1992

On simultaneous signal estimation and parameter identification using a generalized likelihood approach

Elisabeth Gassiat; Fabrice Monfront; Yves Goussard

A common approach to blind deconvolution of Bernoulli-Gaussian processes consists of performing both signal restoration and hyperparameter identification through maximization of a single generalized likelihood criterion. It is shown on a simple example that the resulting hyperparameter estimates may not converge toward any meaningful value. Therefore, other more reliable approaches should be adopted whenever possible.


Statistics and Computing | 2016

Inference in finite state space non parametric Hidden Markov Models and applications

Elisabeth Gassiat; Alice Cleynen; Stéphane Robin

Hidden Markov models (HMMs) are intensively used in various fields to model and classify data observed along a line (e.g. time). The fit of such models strongly relies on the choice of emission distributions, which are most often chosen within some parametric family. In this paper, we prove that finite state space nonparametric HMMs are identifiable as soon as the transition matrix of the latent Markov chain has full rank and the emission probability distributions are linearly independent. This general result allows the use of semi- or nonparametric emission distributions. Based on this result, we present a series of classification problems that can be tackled outside the strict parametric framework. We derive the corresponding inference algorithms. We also illustrate their use on a few biological examples, showing that they may improve classification performance.
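The identifiability hypotheses are easy to probe numerically. The following sketch (illustrative only; it is not one of the paper's inference algorithms) checks full rank of a candidate transition matrix and linear independence of two emission densities evaluated on a grid.

```python
import numpy as np
from scipy.stats import norm, expon

# Identifiability hypotheses (sketch): full-rank transition matrix and
# linearly independent emission densities, checked on a grid.
Q = np.array([[0.8, 0.2], [0.3, 0.7]])                          # transition matrix
grid = np.linspace(-5, 5, 400)
emissions = np.vstack([norm.pdf(grid, 0, 1), expon.pdf(grid)])  # two emission shapes

print("transition full rank :", np.linalg.matrix_rank(Q) == Q.shape[0])
print("emissions lin. indep.:", np.linalg.matrix_rank(emissions) == emissions.shape[0])
```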


Bernoulli | 2014

About the posterior distribution in hidden Markov models with unknown number of states

Elisabeth Gassiat; Judith Rousseau

We consider finite state space stationary hidden Markov models (HMMs) in the situation where the number of hidden states is unknown. We provide a frequentist asymptotic evaluation of Bayesian analysis methods. Our main result gives posterior concentration rates for the marginal densities, that is, for the density of a fixed number of consecutive observations. Using conditions on the prior, we are then able to define a consistent Bayesian estimator of the number of hidden states. It is known that the likelihood ratio test statistic for overfitted HMMs has a nonstandard behaviour and is unbounded. Our conditions on the prior may be seen as a way to penalize parameters so as to avoid this phenomenon. Inference of parameters is a much more difficult task than inference of marginal densities; we nevertheless provide a precise description of the situation when the observations are i.i.d. and we allow for two possible hidden states.

Collaboration


Dive into Elisabeth Gassiat's collaborations.

Top Co-Authors

Fabrice Gamboa

Institut de Mathématiques de Toulouse

Judith Rousseau

Paris Dauphine University
