François Perron
Université de Montréal
Publications
Featured research published by François Perron.
Annals of Applied Probability | 2004
Carlos A. Leon; François Perron
We build optimal exponential bounds for the probabilities of large deviations of sums ∑_{k=1}^n f(X_k), where (X_k) is a finite reversible Markov chain and f is an arbitrary bounded function. These bounds depend only on the stationary mean E_π f, the end-points of the support of f, the sample size n, and the second largest eigenvalue λ of the transition matrix.
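As a rough illustration of the setting (a made-up two-state reversible chain and function f, not the paper's bound itself), the sketch below simulates the chain, computes the second largest eigenvalue λ that governs the bound, and checks empirically how often the sample average of f deviates from the stationary mean E_π f:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state reversible chain (not from the paper): P below
# satisfies detailed balance with pi = (2/3, 1/3).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([2/3, 1/3])        # stationary distribution of P
f = np.array([0.0, 1.0])         # an arbitrary bounded function
mu = pi @ f                      # stationary mean E_pi f

# Second largest eigenvalue of P, which governs the bound in the paper
lam = sorted(np.linalg.eigvals(P).real)[-2]

def chain_average(n):
    """Return (1/n) * sum_{k=1}^n f(X_k) for one simulated path."""
    x, s = 0, 0.0
    for _ in range(n):
        x = rng.choice(2, p=P[x])
        s += f[x]
    return s / n

n = 500
deviations = [abs(chain_average(n) - mu) for _ in range(200)]
print(f"lambda = {lam:.2f}, empirical P(|S_n/n - mu| > 0.1) = "
      f"{np.mean(np.array(deviations) > 0.1):.3f}")
```

The closer λ is to 1, the slower the chain mixes and the weaker the exponential concentration, which is why λ enters the bound alongside n and the range of f.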
Journal of Multivariate Analysis | 1992
François Perron
Let the p × p matrix S have a nonsingular Wishart distribution with unknown scale matrix Σ and n degrees of freedom, n ≥ p. For estimating Σ, a family of minimax estimators, with respect to the entropy loss, is presented. These estimators are of the form Σ̂(S) = R Φ(L) Rᵗ, where R is orthogonal, L and Φ are diagonal, and R L Rᵗ = S. Conditions under which the components of Φ and L follow the same order relation are stated (i.e., writing L = diag(l₁, …, l_p) and Φ = diag(φ₁, …, φ_p), it is true that φ₁ ≥ ⋯ ≥ φ_p if and only if l₁ ≥ ⋯ ≥ l_p). Simulation results are included.
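As a concrete illustration of estimators of this orthogonally equivariant form (using a Stein/Dey-Srinivasan-type eigenvalue shrinkage φᵢ = lᵢ/(n + p + 1 − 2i) as a stand-in; this is not the specific minimax family derived in the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

p, n = 4, 20
Z = rng.normal(size=(n, p))
S = Z.T @ Z                         # a Wishart(n, I_p) draw

l, R = np.linalg.eigh(S)            # S = R diag(l) R^T
l, R = l[::-1], R[:, ::-1]          # reorder so l1 >= ... >= lp

# Estimator of the form Sigma_hat = R Phi(L) R^T with the illustrative
# shrinkage phi_i = l_i / (n + p + 1 - 2i); note this choice need not
# preserve the order of the eigenvalues, the issue the paper studies.
phi = np.array([l[i] / (n + p + 1 - 2 * (i + 1)) for i in range(p)])
Sigma_hat = R @ np.diag(phi) @ R.T

print("shrunken eigenvalues phi:", np.round(phi, 2))
```

Each eigenvalue is divided by a different constant, so the componentwise order relation between Φ and L discussed in the abstract is not automatic.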
Statistics and Computing | 2008
Bo Cai; Renate Meyer; François Perron
Different strategies have been proposed to improve mixing and convergence properties of Markov Chain Monte Carlo algorithms. These are mainly concerned with customizing the proposal density in the Metropolis–Hastings algorithm to the specific target density and require a detailed exploratory analysis of the stationary distribution and/or some preliminary experiments to determine an efficient proposal. Various Metropolis–Hastings algorithms have been suggested that make use of previously sampled states in defining an adaptive proposal density. Here we propose a general class of adaptive Metropolis–Hastings algorithms based on Metropolis–Hastings-within-Gibbs sampling. For the case of a one-dimensional target distribution, we present two novel algorithms using mixtures of triangular and trapezoidal densities. These can also be seen as improved versions of the all-purpose adaptive rejection Metropolis sampling (ARMS) algorithm to sample from non-logconcave univariate densities. Using various different examples, we demonstrate their properties and efficiencies and point out their advantages over ARMS and other adaptive alternatives such as the Normal Kernel Coupler.
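A minimal sketch of the idea for a one-dimensional target (hypothetical bimodal density, and a much cruder adaptation rule than the mixture-of-triangular/trapezoidal schemes proposed in the paper): an independence Metropolis-Hastings step whose triangular proposal adapts its mode to previously sampled states:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D target (unnormalized): a bimodal normal mixture.
def target(x):
    return np.exp(-0.5 * (x - 1) ** 2) + 0.6 * np.exp(-0.5 * (x + 2) ** 2)

a, b = -6.0, 6.0      # support of the triangular proposal

def tri_pdf(x, c):
    """Density of a triangular distribution on [a, b] with mode c."""
    if x < c:
        return 2 * (x - a) / ((b - a) * (c - a))
    return 2 * (b - x) / ((b - a) * (b - c))

x, c, run_mean = 0.0, 0.0, 0.0
samples = []
for i in range(5000):
    y = rng.triangular(a, c, b)               # independence proposal
    # Metropolis-Hastings ratio for an independence proposal
    ratio = target(y) * tri_pdf(x, c) / (target(x) * tri_pdf(y, c))
    if rng.random() < min(1.0, ratio):
        x = y
    samples.append(x)
    # Crude adaptation: move the proposal mode toward the running mean
    run_mean += (x - run_mean) / (i + 1)
    c = min(max(run_mean, a + 1e-3), b - 1e-3)

print(f"posterior mean estimate: {np.mean(samples[1000:]):.2f}")
```

The paper's algorithms adapt far more carefully (and with convergence guarantees); this sketch only shows the mechanics of combining a shape-adaptive proposal with the Metropolis-Hastings correction.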
Computational Statistics & Data Analysis | 2008
Renate Meyer; Bo Cai; François Perron
A crucial problem in Bayesian posterior computation is efficient sampling from a univariate distribution, e.g. a full conditional distribution in applications of the Gibbs sampler. This full conditional distribution is usually non-conjugate, algebraically complex and computationally expensive to evaluate. We propose an alternative algorithm, called ARMS2, to the widely used adaptive rejection sampling technique ARS [Gilks, W.R., Wild, P., 1992. Adaptive rejection sampling for Gibbs sampling. Applied Statistics 41 (2), 337-348; Gilks, W.R., 1992. Derivative-free adaptive rejection sampling for Gibbs sampling. In: Bernardo, J.M., Berger, J.O., Dawid, A.P., Smith, A.F.M. (Eds.), Bayesian Statistics, Vol. 4. Clarendon, Oxford, pp. 641-649] for generating a sample from univariate log-concave densities. Whereas ARS is based on sampling from piecewise exponentials, the new algorithm uses truncated normal distributions and makes use of a clever auxiliary variable technique [Damien, P., Walker, S.G., 2001. Sampling truncated normal, beta, and gamma densities. Journal of Computational and Graphical Statistics 10 (2) 206-215]. Furthermore, we extend this algorithm to deal with non-log-concave densities to provide an enhanced alternative to adaptive rejection Metropolis sampling, ARMS [Gilks, W.R., Best, N.G., Tan, K.K.C., 1995. Adaptive rejection Metropolis sampling within Gibbs sampling. Applied Statistics 44, 455-472]. The performance of ARMS and ARMS2 is compared in simulations of standard univariate distributions as well as in Gibbs sampling of a Bayesian hierarchical state-space model used for fisheries stock assessment.
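The auxiliary-variable technique of Damien and Walker cited above can be sketched in a few lines for a standard normal truncated to [a, b] (this is the generic slice-style scheme, not the full ARMS2 algorithm): alternate drawing y | x uniform on (0, e^{−x²/2}) and x | y uniform on the slice {x ∈ [a, b] : e^{−x²/2} ≥ y}:

```python
import numpy as np

rng = np.random.default_rng(2)

def truncated_normal_gibbs(a, b, n, burn=100):
    """N(0,1) truncated to [a, b] via an auxiliary-variable (slice) scheme:
    y | x ~ U(0, exp(-x^2/2)), then x | y ~ U on the part of [a, b]
    where exp(-x^2/2) >= y."""
    x = 0.5 * (a + b)
    out = []
    for i in range(n + burn):
        y = rng.uniform(0.0, np.exp(-0.5 * x * x))
        r = np.sqrt(-2.0 * np.log(y))   # slice is [-r, r] before truncation
        x = rng.uniform(max(a, -r), min(b, r))
        if i >= burn:
            out.append(x)
    return np.array(out)

s = truncated_normal_gibbs(0.0, 1.0, 20000)
print(f"sample mean: {s.mean():.3f}")   # theory: about 0.460 for N(0,1) on [0,1]
```

No rejection step is needed: every draw is accepted, because both conditional distributions are exact uniforms.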
Journal of Statistical Planning and Inference | 2004
François Perron; Bimal K. Sinha
In this paper we examine the problem of the estimation of the variance σ² of a population based on a ranked set sample (RSS) from a nonparametric point of view. It is well known that, based on a single-cycle RSS, there does not exist an unbiased estimate of σ². We show that for more than one cycle it is possible to construct a class of quadratic unbiased estimates of σ² in both balanced and unbalanced cases. Moreover, a minimum variance unbiased quadratic nonnegative estimate of σ² within a certain class of quadratic estimates is derived.
Statistics & Probability Letters | 2003
Carlos A. León; François Perron
We build optimal exponential bounds for the probabilities of large deviations of sums S_n = X₁ + ⋯ + X_n of independent Bernoulli random variables from their mean nμ. These bounds depend only on the sample size n. Our results improve previous results obtained by Hoeffding and, more recently, by Talagrand. We also prove a global stochastic order dominance for the binomial law and show how this gives a new explanation of Hoeffding's results.
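For intuition about the slack such improvements exploit, the snippet below compares the exact binomial tail with the classical Hoeffding bound exp(−2nt²) (the standard bound, not the sharper optimal bounds constructed in the paper):

```python
import math

def binom_tail(n, p, k):
    """Exact P(S_n >= k) for S_n ~ Binomial(n, p)."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(k, n + 1))

# Deviation of t above the mean for n Bernoulli(p) variables
n, p, t = 100, 0.3, 0.1
k = round(n * (p + t))                 # tail threshold, 40 here
exact = binom_tail(n, p, k)
hoeffding = math.exp(-2 * n * t * t)   # classical Hoeffding bound
print(f"exact tail = {exact:.4f}, Hoeffding bound = {hoeffding:.4f}")
```

The exact tail is far below the Hoeffding bound in this regime, which is the gap that optimal exponential bounds tighten.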
Journal of Statistical Planning and Inference | 1999
Dayong Li; Bimal K. Sinha; François Perron
The notion of ranked set sampling (RSS) for estimating the mean of a population, and its advantage over simple random sampling for the same purpose, are well known. In this paper we provide a new perspective on RSS via the notion of random selection, and discuss some special features of this improved procedure.
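The well-known variance advantage of RSS mentioned above is easy to see by simulation (hypothetical N(0,1) population, balanced design with set size k and several cycles; this illustrates basic RSS, not the randomized-selection refinement of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def rss_sample(k, draw):
    """One cycle of balanced ranked set sampling with set size k:
    draw k sets of k units, keep the i-th order statistic of set i."""
    return np.array([np.sort(draw(k))[i] for i in range(k)])

k, cycles, reps = 4, 5, 4000
draw = lambda m: rng.normal(0.0, 1.0, m)   # hypothetical N(0,1) population

rss_means = [np.mean([rss_sample(k, draw) for _ in range(cycles)])
             for _ in range(reps)]
srs_means = [np.mean(draw(k * cycles)) for _ in range(reps)]

print(f"var(RSS mean) = {np.var(rss_means):.4f}, "
      f"var(SRS mean) = {np.var(srs_means):.4f}")
```

Both estimators use the same number of measured units (k × cycles), but the RSS mean exploits the ranking information and has noticeably smaller variance.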
Statistics & Probability Letters | 2002
Éric Marchand; François Perron
For estimating, under squared-error loss, the mean of a p-variate normal distribution when this mean lies in a ball of radius m centered at the origin and the covariance matrix is equal to the identity matrix, it is shown that the Bayes estimator with respect to a uniformly distributed prior on the boundary of the parameter space (δBU) is minimax whenever m ≤ m0(p). Further descriptions of the cutoff points m0(p), i.e., of the radii small enough for δBU to be minimax, are given. These include lower bounds and the large-dimension (p) limiting behaviour of m0(p). Finally, implications for the associated minimax risk are described.
Canadian Journal of Statistics-revue Canadienne De Statistique | 1990
François Perron
Given a Wishart matrix S [S ∽ Wp(n, Σ)] and an independent multivariate normal vector X [X ∽ Np(μ, Σ)], equivariant estimators of Σ are proposed. These estimators dominate the best multiple of S and the Stein-type truncated estimators.
Statistics | 2007
Yves F. Atchadé; François Perron
Under a compactness assumption, we show that a φ-irreducible and aperiodic Metropolis-Hastings chain is geometrically ergodic if and only if its rejection probability is bounded away from unity. In the particular case of the independence Metropolis-Hastings algorithm, we obtain that the whole spectrum of the induced operator is contained in (and in many cases equal to) the essential range of the rejection probability of the chain, as conjectured by [Liu, J.S., 1996, Metropolized independent sampling with comparisons to rejection sampling and importance sampling. Statistics and Computing, 6, 113–119.].
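The independence-sampler case can be illustrated numerically (a hypothetical N(0,1) target with an N(0, 2²) proposal, chosen so that the weight π/q is bounded; in that regime the rejection probability stays well away from one, consistent with geometric ergodicity):

```python
import numpy as np

rng = np.random.default_rng(4)

def independence_mh(target, proposal_rvs, proposal_pdf, n):
    """Run an independence Metropolis-Hastings chain and return the
    observed rejection rate."""
    x, rejects = proposal_rvs(), 0
    w = lambda z: target(z) / proposal_pdf(z)   # importance weight pi/q
    for _ in range(n):
        y = proposal_rvs()
        # Acceptance probability for an independence proposal: w(y)/w(x)
        if rng.random() < min(1.0, w(y) / w(x)):
            x = y
        else:
            rejects += 1
    return rejects / n

target = lambda z: np.exp(-0.5 * z * z)          # unnormalized N(0,1)
rej = independence_mh(target,
                      lambda: rng.normal(0.0, 2.0),
                      lambda z: np.exp(-0.5 * (z / 2.0) ** 2) / 2.0,
                      20000)
print(f"rejection rate with a dominating proposal: {rej:.2f}")
```

Here w(z) = 2 exp(−3z²/8) is bounded by 2, so rejections cannot accumulate near probability one; with a proposal whose tails are lighter than the target's, w is unbounded and the rejection probability approaches one in the tails, breaking geometric ergodicity.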