
Publication


Featured research published by Judith Rousseau.


Journal of the American Statistical Association | 2004

Optimal Sample Size for Multiple Testing: the Case of Gene Expression Microarrays

Peter Müller; Giovanni Parmigiani; Christian P. Robert; Judith Rousseau

We consider the choice of an optimal sample size for multiple-comparison problems. The motivating application is the choice of the number of microarray experiments to be carried out when learning about differential gene expression. However, the approach is valid in any application that involves multiple comparisons in a large number of hypothesis tests. We discuss two decision problems in the context of this setup: the sample size selection and the decision about the multiple comparisons. We adopt a decision-theoretic approach, using loss functions that combine the competing goals of discovering as many differentially expressed genes as possible, while keeping the number of false discoveries manageable. For consistency, we use the same loss function for both decisions. The decision rule that emerges for the multiple-comparison problem takes the exact form of the rules proposed in the recent literature to control the posterior expected false discovery rate. For the sample size selection, we combine the expected utility argument with an additional sensitivity analysis, reporting the conditional expected utilities and conditioning on assumed levels of the true differential expression. We recognize the resulting diagnostic as a form of statistical power facilitating interpretation and communication. As a sampling model for observed gene expression densities across genes and arrays, we use a variation of a hierarchical gamma/gamma model. But the discussion of the decision problem is independent of the chosen probability model. The approach is valid for any model that includes positive prior probabilities for the null hypotheses in the multiple comparisons and that allows for efficient marginal and posterior simulation, possibly by dependent Markov chain Monte Carlo simulation.
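The decision rule mentioned in the abstract, thresholding on posterior probabilities so as to control the posterior expected false discovery rate, can be sketched as follows. This is an illustrative implementation under assumed notation (posterior probabilities `v_i` of differential expression, target level `fdr_level`), not code from the paper.

```python
# Hedged sketch: flag gene i when its posterior probability of differential
# expression is high, choosing the cutoff so that the posterior expected
# false discovery rate (FDR) of the flagged set stays below a target level.
# For a flagged set D, the posterior expected FDR is sum_{i in D}(1 - v_i)/|D|.

def flag_genes(post_probs, fdr_level=0.05):
    """Return indices of flagged genes, keeping posterior expected FDR <= fdr_level."""
    order = sorted(range(len(post_probs)), key=lambda i: -post_probs[i])
    flagged = []
    fdr_numerator = 0.0  # running sum of (1 - v_i) over flagged genes
    for i in order:
        if (fdr_numerator + (1.0 - post_probs[i])) / (len(flagged) + 1) <= fdr_level:
            flagged.append(i)
            fdr_numerator += 1.0 - post_probs[i]
        else:
            break  # probabilities are sorted, so no later gene can qualify
    return flagged

probs = [0.99, 0.97, 0.95, 0.60, 0.20]
print(flag_genes(probs, fdr_level=0.05))  # → [0, 1, 2]
```

Because the optimal set is a prefix of the genes sorted by decreasing posterior probability, the greedy scan above suffices.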


Nature Human Behaviour | 2018

Redefine Statistical Significance

Daniel J. Benjamin; James O. Berger; Magnus Johannesson; Brian A. Nosek; Eric-Jan Wagenmakers; Richard A. Berk; Kenneth A. Bollen; Björn Brembs; Lawrence D. Brown; Colin F. Camerer; David Cesarini; Christopher D. Chambers; Merlise A. Clyde; Thomas D. Cook; Paul De Boeck; Zoltan Dienes; Anna Dreber; Kenny Easwaran; Charles Efferson; Ernst Fehr; Fiona Fidler; Andy P. Field; Malcolm R. Forster; Edward I. George; Richard Gonzalez; Steven N. Goodman; Edwin J. Green; Donald P. Green; Anthony G. Greenwald; Jarrod D. Hadfield

We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.


Annals of Statistics | 2012

Bernstein–von Mises Theorem for Linear Functionals of the Density

Vincent Rivoirard; Judith Rousseau

In this paper, we study the asymptotic posterior distribution of linear functionals of the density by deriving general conditions to obtain a semi-parametric version of the Bernstein–von Mises theorem. The special case of the cumulative distribution function, evaluated at a specific point, is widely considered. In particular, we show that for infinite-dimensional exponential families, under quite general assumptions, the asymptotic posterior distribution of the functional can be either Gaussian or a mixture of Gaussian distributions with different centering points. This illustrates the positive, but also the negative, phenomena that can occur in the study of Bernstein–von Mises results.
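As a schematic statement of the kind of result involved, the semiparametric Bernstein–von Mises property for a linear functional says the posterior of the rescaled functional converges to a fixed normal law. The notation below (efficient estimator, efficient variance) is assumed for illustration and is not taken verbatim from the paper.

```latex
% Schematic BvM statement for a linear functional
% \psi(f) = \int \tilde\psi(x)\, f(x)\, dx of the density f:
\Pi\Bigl( \sqrt{n}\,\bigl(\psi(f) - \hat\psi_n\bigr) \le z \,\Bigm|\, X_1,\dots,X_n \Bigr)
  \;\longrightarrow\; \Phi\!\bigl(z/\sqrt{V}\bigr)
  \quad \text{in probability, for every } z \in \mathbb{R},
```

where \(\hat\psi_n\) is an efficient estimator, \(V\) the efficient asymptotic variance, and \(\Phi\) the standard normal cdf. The "mixture of Gaussians" phenomenon in the abstract corresponds to cases where this limit fails and the centering varies.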


Bayesian Analysis | 2012

Combining Expert Opinions in Prior Elicitation

Isabelle Albert; Sophie Donnet; Chantal Guihenneuc-Jouyaux; Samantha Low-Choy; Kerrie Mengersen; Judith Rousseau

We consider the problem of combining opinions from different experts in an explicitly model-based way to construct a valid subjective prior in a Bayesian statistical approach. We propose a generic approach based on a hierarchical model that accounts for various sources of variation as well as for potential dependence between experts. We apply this approach to two problems. The first is a food risk assessment problem involving dose-response modelling for Listeria monocytogenes contamination in mice; two hierarchical levels of variation are considered (between and within experts), with a mathematically complex situation arising from the use of an indirect probit regression. The second concerns the time taken by PhD students to submit their theses in a particular school; it illustrates a complex situation where three hierarchical levels of variation are modelled, but with a simpler underlying probability distribution (log-normal).
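The flavour of hierarchical pooling can be sketched with a toy normal model: expert j reports mean m_j with within-expert variance s_j², and experts scatter around the true quantity with between-expert variance τ². The model, function name, and numbers below are illustrative assumptions, not the paper's (which uses probit regression and log-normal components).

```python
# Minimal sketch of hierarchical pooling of expert opinions.
# Assumed toy model: m_j ~ N(theta, s_j^2 + tau^2), flat prior on theta,
# so the posterior for theta is normal with precision-weighted mean.

def pool_experts(means, within_vars, between_var):
    """Posterior mean and variance of theta under the toy normal hierarchy."""
    weights = [1.0 / (s2 + between_var) for s2 in within_vars]
    total_precision = sum(weights)
    post_mean = sum(w * m for w, m in zip(weights, means)) / total_precision
    return post_mean, 1.0 / total_precision

mean, var = pool_experts(means=[2.0, 4.0], within_vars=[1.0, 1.0], between_var=1.0)
print(mean, var)  # → 3.0 1.0
```

Note that the between-expert variance inflates every expert's effective uncertainty, so the pooled posterior is wider than naive averaging of the reported variances would suggest.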


Electronic Journal of Statistics | 2010

Adaptive Bayesian density estimation with location-scale mixtures

Willem Kruijer; Judith Rousseau; Aad van der Vaart

We study convergence rates of Bayesian density estimators based on finite location-scale mixtures of exponential power distributions. We construct approximations of β-Hölder densities by continuous mixtures of exponential power distributions, leading to approximations of the β-Hölder densities by finite mixtures. These results are then used to derive posterior concentration rates, with priors based on these mixture models. The rates are minimax (up to a log n term) and, since the priors are independent of the smoothness, the rates are adaptive to the smoothness.


Annals of Statistics | 2010

Rates of Convergence for the Posterior Distributions of Mixtures of Betas and Adaptive Nonparametric Estimation of the Density

Judith Rousseau

In this work we investigate the asymptotic properties of nonparametric Bayesian mixtures of betas for estimating a smooth density on [0,1]. We consider a parameterisation of beta distributions in terms of mean and scale parameters and construct a mixture of these betas in the mean parameter, while putting a prior on the scale parameter. We prove that such Bayesian nonparametric models have good frequentist asymptotic properties. We determine the posterior rate of concentration around the true density and prove that it is the minimax rate of concentration when the true density belongs to a Hölder class with regularity β, for all positive β, leading to a minimax adaptive estimation procedure of the density. We show that Bayesian kernel estimation is more flexible than the usual frequentist kernel estimation, allowing for adaptive rates of convergence, using a simple trick which can be used in many other types of kernel Bayesian approaches.
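The mean/scale parameterisation described above can be illustrated with a small sketch: a density on [0,1] written as a mixture of Beta(b·μ, b·(1−μ)) kernels, where μ is the mean and b the scale. The function names, weights, and scale value below are illustrative, not the paper's prior.

```python
# Illustrative sketch (not the paper's estimator): a mixture of beta kernels
# in the mean/scale parameterisation Beta(b*mu, b*(1-mu)) on [0, 1].

from math import gamma

def beta_pdf(x, a, b):
    """Density of Beta(a, b) at x in (0, 1)."""
    return gamma(a + b) / (gamma(a) * gamma(b)) * x**(a - 1) * (1 - x)**(b - 1)

def beta_mixture(x, means, weights, scale):
    """Mixture of Beta(scale*mu, scale*(1-mu)) kernels evaluated at x."""
    return sum(w * beta_pdf(x, scale * m, scale * (1 - m))
               for w, m in zip(weights, means))

# Sanity check: the mixture is a probability density on (0, 1).
grid = [(i + 0.5) / 1000 for i in range(1000)]
total = sum(beta_mixture(x, [0.3, 0.7], [0.5, 0.5], scale=10.0) for x in grid) / 1000
print(round(total, 2))  # → 1.0
```

Larger values of the scale b concentrate each kernel around its mean μ, which is the handle the prior on the scale uses to adapt to the smoothness of the true density.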


Annals of Statistics | 2015

On Adaptive Posterior Concentration Rates

Marc Hoffmann; Judith Rousseau; Johannes Schmidt-Hieber

…tion of Hölder balls and that moreover achieve our lower bound. We analyse the consequences in terms of asymptotic behaviour of posterior credible balls as well as frequentist minimax adaptive estimation. Our results are appended with an upper bound for the contraction rate under an arbitrary loss in a generic regular experiment. The upper bound is attained for certain sieve priors and enables us to extend our results to density estimation.


Annals of Statistics | 2015

A Bernstein-von Mises theorem for smooth functionals in semiparametric models

Ismaël Castillo; Judith Rousseau

A Bernstein–von Mises theorem is derived for general semiparametric functionals. The result is applied to a variety of semiparametric problems in i.i.d. and non-i.i.d. situations. In particular, new tools are developed to handle semiparametric bias, notably for nonlinear functionals and in cases where regularity is possibly low. Examples include the squared L2-norm in Gaussian white noise, nonlinear functionals in density estimation, as well as functionals in autoregressive models. For density estimation, a systematic study of BvM results for two important classes of priors is provided, namely random histograms and Gaussian process priors.


Scandinavian Journal of Statistics | 2013

Bayesian Optimal Adaptive Estimation Using a Sieve Prior

Julyan Arbel; Ghislaine Gayraud; Judith Rousseau

We derive rates of contraction of posterior distributions on nonparametric models resulting from sieve priors. The aim of the paper is to provide general conditions to obtain posterior rates when the parameter space has a general structure, and rate adaptation when the parameter space is, e.g., a Sobolev class. The conditions employed, although standard in the literature, are combined in a different way. The results are applied to density, regression, nonlinear autoregression and Gaussian white noise models. In the latter we also consider a loss function different from the usual L2 norm, namely the pointwise loss; in this case it is possible to prove that the adaptive Bayesian approach for the L2 loss is strongly suboptimal, and we provide a lower bound on the rate.


Bayesian Analysis | 2012

Posterior Concentration Rates for Infinite Dimensional Exponential Families

Vincent Rivoirard; Judith Rousseau

In this paper we derive adaptive nonparametric rates of concentration of the posterior distributions for the density model on the class of Sobolev and Besov spaces. For this purpose, we build prior models based on wavelet or Fourier expansions of the logarithm of the density. The prior models are not necessarily Gaussian.
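The log-density expansion underlying such infinite-dimensional exponential families can be sketched concretely: the density is proportional to exp(Σ_j θ_j φ_j(x)) for a basis (φ_j), here a cosine basis on [0,1] chosen for illustration. The coefficients below are arbitrary illustrative values, not a draw from the paper's priors.

```python
# Sketch of an infinite-dimensional exponential family (truncated):
# f_theta(x) proportional to exp( sum_j theta_j * phi_j(x) ) on [0, 1],
# with cosine basis phi_j(x) = sqrt(2) * cos(pi * j * x).

from math import cos, exp, pi, sqrt

def log_density_unnorm(x, theta):
    """Unnormalised log-density for coefficients theta."""
    return sum(t * sqrt(2) * cos(pi * (j + 1) * x) for j, t in enumerate(theta))

def density(x, theta, grid_size=1000):
    """Normalise numerically over a midpoint grid on (0, 1)."""
    grid = [(i + 0.5) / grid_size for i in range(grid_size)]
    norm = sum(exp(log_density_unnorm(g, theta)) for g in grid) / grid_size
    return exp(log_density_unnorm(x, theta)) / norm

theta = [0.5, -0.2]  # illustrative coefficients
vals = [density(x, theta) for x in (0.25, 0.5, 0.75)]
print(all(v > 0 for v in vals))  # → True
```

A prior on the density is then induced by putting a prior on the sequence (θ_j), e.g. with variances decaying at a rate tied to the smoothness class, which is what drives the adaptive concentration rates.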

Collaboration


Top co-authors of Judith Rousseau:

Kerrie Mengersen (Queensland University of Technology)
Sophie Donnet (Centro de Investigación en Matemáticas)
Ross McVinish (University of Queensland)
Offer Lieberman (Technion – Israel Institute of Technology)