John T. Ormerod
University of Sydney
Publications
Featured research published by John T. Ormerod.
The American Statistician | 2010
John T. Ormerod; M. P. Wand
Variational approximations facilitate approximate inference for the parameters in complex statistical models and provide fast, deterministic alternatives to Monte Carlo methods. However, much of the contemporary literature on variational approximations is in Computer Science rather than Statistics, and uses terminology, notation, and examples from the former field. In this article we explain variational approximation in statistical terms. In particular, we illustrate the ideas of variational approximation using examples that are familiar to statisticians.
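As background for readers new to the area, the following standard identity (not specific to this article) is the starting point for most variational approximations: the log marginal likelihood decomposes into a lower bound plus a Kullback-Leibler divergence, so maximizing the bound over a restricted class of densities q drives q toward the posterior.

\[
\log p(\mathbf{y}) \;=\; \underbrace{\int q(\boldsymbol{\theta})\,\log\frac{p(\mathbf{y},\boldsymbol{\theta})}{q(\boldsymbol{\theta})}\,d\boldsymbol{\theta}}_{\text{lower bound }\mathcal{L}(q)} \;+\; \underbrace{\mathrm{KL}\{q(\boldsymbol{\theta})\,\|\,p(\boldsymbol{\theta}\mid\mathbf{y})\}}_{\ge\, 0}.
\]

Under the mean field restriction \( q(\boldsymbol{\theta}) = \prod_j q_j(\theta_j) \), the optimal factors satisfy \( q_j^*(\theta_j) \propto \exp\{\mathbb{E}_{-\theta_j}\log p(\mathbf{y},\boldsymbol{\theta})\} \), the coordinate-ascent scheme underlying the mean field methods in several of the papers below.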
Bayesian Analysis | 2011
M. P. Wand; John T. Ormerod; Simone A. Padoan; Rudolf Frühwirth
We develop strategies for mean field variational Bayes approximate inference for Bayesian hierarchical models containing elaborate distributions. We loosely define elaborate distributions to be those having more complicated forms compared with common distributions such as those in the Normal and Gamma families. Examples are Asymmetric Laplace, Skew Normal and Generalized Extreme Value distributions. Such models suffer from the difficulty that the parameter updates do not admit closed form solutions. We circumvent this problem through a combination of (a) specially tailored auxiliary variables, (b) univariate quadrature schemes and (c) finite mixture approximations of troublesome densities.
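As an illustration of ingredient (b), the sketch below (not code from the paper; the integrand is a hypothetical "troublesome" function) uses Gauss-Hermite quadrature to approximate an expectation E{f(X)} with X ~ N(mu, sigma^2), the kind of univariate integral that replaces a closed-form update.

```python
# Minimal sketch of a univariate quadrature scheme, assuming the intractable
# variational update reduces to E[f(X)] with X ~ N(mu, sigma^2).
import numpy as np

def gauss_hermite_expectation(f, mu, sigma, num_points=30):
    """Approximate E[f(X)] for X ~ N(mu, sigma^2) by Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite.hermgauss(num_points)  # nodes/weights for exp(-x^2)
    # Change of variables X = mu + sqrt(2)*sigma*x absorbs the Gaussian kernel.
    return np.sum(w * f(mu + np.sqrt(2.0) * sigma * x)) / np.sqrt(np.pi)

# Example: E[log(1 + exp(X))], an expectation with no closed form.
approx = gauss_hermite_expectation(lambda t: np.log1p(np.exp(t)), mu=0.5, sigma=1.2)
print(approx)
```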
Journal of Computational and Graphical Statistics | 2012
John T. Ormerod; M. P. Wand
Variational approximation methods have become a mainstay of contemporary machine learning methodology, but currently have little presence in statistics. We devise an effective variational approximation strategy for fitting generalized linear mixed models (GLMMs) appropriate for grouped data. It involves Gaussian approximation to the distributions of random effects vectors, conditional on the responses. We show that Gaussian variational approximation is a relatively simple and natural alternative to Laplace approximation for fast, non-Monte Carlo, GLMM analysis. Numerical studies show Gaussian variational approximation to be very accurate in grouped data GLMM contexts. Finally, we point to some recent theory on consistency of Gaussian variational approximation in this context. Supplemental materials are available online.
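To make the Gaussian variational approximation idea concrete, here is a minimal sketch for a toy random-intercept Poisson GLMM (a hypothetical example, not the authors' implementation; the random-effects variance is held fixed for brevity): each random effect receives a Gaussian variational density N(mu_i, lambda_i), the required expectation E_q[exp(u_i)] is available in closed form, and the resulting lower bound is maximized directly.

```python
# Gaussian variational approximation sketch for a toy random-intercept
# Poisson GLMM: y_ij ~ Poisson(exp(beta0 + u_i)), u_i ~ N(0, sigma_u^2).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m, n, beta0_true, sigma_u = 50, 10, 0.3, 0.8
u = rng.normal(0.0, sigma_u, size=m)
y = rng.poisson(np.exp(beta0_true + u[:, None]), size=(m, n))

def neg_elbo(params):
    # params = (beta0, mu_1..mu_m, log lambda_1..lambda_m), with
    # q(u_i) = N(mu_i, lambda_i) the Gaussian variational density.
    beta0, mu, log_lam = params[0], params[1:m + 1], params[m + 1:]
    lam = np.exp(log_lam)
    eta = beta0 + mu
    # E_q[log p(y | u)]: uses E_q[exp(beta0 + u_i)] = exp(eta_i + lam_i / 2).
    loglik = np.sum(y * eta[:, None]) - n * np.sum(np.exp(eta + lam / 2.0))
    # E_q[log p(u)] - E_q[log q(u)], dropping additive constants.
    kl = 0.5 * np.sum((mu**2 + lam) / sigma_u**2 - np.log(lam))
    return -(loglik - kl)

fit = minimize(neg_elbo, np.zeros(2 * m + 1), method="L-BFGS-B")
print("estimated beta0:", fit.x[0])
```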
Journal of the American Statistical Association | 2011
Christel Faes; John T. Ormerod; M. P. Wand
Bayesian hierarchical models are attractive structures for conducting regression analyses when the data are subject to missingness. However, the requisite probability calculus is challenging and Monte Carlo methods are typically employed. We develop an alternative approach based on deterministic variational Bayes approximations. Both parametric and nonparametric regression are considered. Attention is restricted to the more challenging case of missing predictor data. We demonstrate that variational Bayes can achieve good accuracy, but with considerably less computational overhead. The main ramification is fast approximate Bayesian inference in parametric and nonparametric regression models with missing data. Supplemental materials accompany the online version of this article.
Computational Statistics & Data Analysis | 2013
Tung H. Pham; John T. Ormerod; M. P. Wand
A fast mean field variational Bayes (MFVB) approach to nonparametric regression when the predictors are subject to classical measurement error is investigated. It is shown that applying this technology in the measurement error setting achieves reasonable accuracy. In tandem with the methodological development, a customized Markov chain Monte Carlo method is developed to facilitate the evaluation of the accuracy of the MFVB method.
Electronic Journal of Statistics | 2014
Sarah E. Neville; John T. Ormerod; M. P. Wand
We investigate mean field variational approximate Bayesian inference for models that use continuous distributions, Horseshoe, Negative-Exponential-Gamma and Generalized Double Pareto, for sparse signal shrinkage. Our principal finding is that the most natural, and simplest, mean field variational Bayes algorithm can perform quite poorly due to posterior dependence among auxiliary variables. More sophisticated algorithms, based on special functions, are shown to be superior. Continued fraction approximations via Lentz's algorithm are developed to make the algorithms
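Since Lentz's algorithm may be unfamiliar, here is a self-contained sketch of the modified Lentz method for evaluating a continued fraction b0 + a1/(b1 + a2/(b2 + ...)) (a generic implementation with illustrative coefficient functions, not those from the paper):

```python
# Modified Lentz algorithm for continued fractions, following the standard
# formulation; a(n), b(n) supply the coefficients.
def lentz(a, b, tol=1e-12, tiny=1e-30, max_iter=500):
    """Evaluate b(0) + a(1)/(b(1) + a(2)/(b(2) + ...)) via modified Lentz."""
    f = b(0) or tiny          # guard against a zero leading term
    c, d = f, 0.0
    for n in range(1, max_iter):
        d = b(n) + a(n) * d
        d = 1.0 / (d or tiny)  # replace exact zeros by a tiny value
        c = b(n) + a(n) / c
        c = c or tiny
        delta = c * d
        f *= delta
        if abs(delta - 1.0) < tol:
            return f
    raise RuntimeError("continued fraction did not converge")

# Example: tanh(z) = z / (1 + z^2/(3 + z^2/(5 + ...))).
z = 0.7
print(z / lentz(lambda n: z * z, lambda n: 2 * n + 1))  # ~0.60437 = tanh(0.7)
```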
Optimization Methods & Software | 2006
V. Jeyakumar; John T. Ormerod; Robert S. Womersley
In this paper, we present knowledge-based support vector machine (SVM) classifiers using semidefinite linear programming. SVMs are an optimization-based solution method for large-scale data classification problems. Knowledge-based SVM classifiers, where prior knowledge is in the form of ellipsoidal constraints, result in a semidefinite linear programme with a set containment constraint. These problems are reformulated as standard semidefinite linear programming problems by the application of a dual characterization of the set containment under a mild regularity condition. The reformulated semidefinite linear programme is solved using publicly available solvers. Computational results show that prior knowledge can often improve the correctness of the classifier.
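For orientation, the baseline optimization problem that knowledge-based classifiers extend is the soft-margin SVM, sketched below with the cvxpy modelling package (an illustrative reconstruction on synthetic data; the paper's knowledge-based variants add ellipsoidal set-containment constraints and require a semidefinite programming solver):

```python
# Soft-margin linear SVM written as an explicit optimization problem.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n, d = 60, 2
X = np.vstack([rng.normal(-1.5, 1.0, (n // 2, d)),
               rng.normal(1.5, 1.0, (n // 2, d))])
y = np.concatenate([-np.ones(n // 2), np.ones(n // 2)])

w, b, xi = cp.Variable(d), cp.Variable(), cp.Variable(n)
C = 1.0  # penalty parameter, chosen arbitrarily here
objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi))
constraints = [cp.multiply(y, X @ w + b) >= 1 - xi, xi >= 0]
cp.Problem(objective, constraints).solve()
print("w =", w.value, "b =", b.value)
```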
Computational Statistics & Data Analysis | 2011
John T. Ormerod
Variational methods for approximate Bayesian inference provide fast, flexible, deterministic alternatives to Monte Carlo methods. Unfortunately, unlike Monte Carlo methods, variational approximations cannot, in general, be made to be arbitrarily accurate. This paper develops grid-based variational approximations which endeavor to approximate marginal posterior densities in a spirit similar to the Integrated Nested Laplace Approximation (INLA) of Rue et al. (2009) but which may be applied in situations where INLA cannot be used. The method can greatly increase the accuracy of a base variational approximation, although not in general to arbitrary accuracy. The methodology developed is at least reasonably accurate on all of the examples considered in the paper.
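A toy sketch of the grid idea follows (illustrative only: the unnormalized log posterior below is a stand-in, and in the paper the grid refines a base variational approximation rather than a hand-picked function). A marginal posterior density is approximated by evaluating an unnormalized log posterior on a grid and renormalizing numerically.

```python
# Grid-based approximation of a one-dimensional marginal posterior density.
import numpy as np

def grid_marginal(log_post, lo, hi, num=401):
    """Return a grid over [lo, hi] and the normalized density exp(log_post)."""
    theta = np.linspace(lo, hi, num)
    logp = log_post(theta)
    p = np.exp(logp - logp.max())   # stabilize before exponentiating
    dx = theta[1] - theta[0]
    p /= p.sum() * dx               # normalize by a Riemann sum
    return theta, p

# Stand-in unnormalized log posterior (a skewed example, purely illustrative).
theta, p = grid_marginal(lambda t: -0.5 * t**2 + 0.3 * t**3 / (1 + t**2), -6.0, 6.0)
dx = theta[1] - theta[0]
print("approximate posterior mean:", np.sum(theta * p) * dx)
```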
Computational Statistics & Data Analysis | 2014
Jan Luts; John T. Ormerod
A mean field variational Bayes approach to support vector machines (SVMs) using the latent variable representation of Polson and Scott (2012) is presented. This representation circumvents many of the shortcomings associated with classical SVMs, enabling automatic penalty parameter selection and the handling of dependent samples, missing data and variable selection. We demonstrate on simulated and real datasets that our approach is easily extendable to non-standard situations and outperforms the classical SVM approach whilst remaining computationally efficient.
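The latent variable representation rests on a scale-mixture identity, stated here for orientation: the exponentiated hinge loss is a Gaussian mixture over an auxiliary variable \( \lambda > 0 \),

\[
e^{-2\max(1-u,\,0)} \;=\; \int_0^\infty \frac{1}{\sqrt{2\pi\lambda}}\,\exp\!\left\{-\frac{(1+\lambda-u)^2}{2\lambda}\right\} d\lambda ,
\]

so that, with \( u = y_i \mathbf{x}_i^{\mathsf{T}}\boldsymbol{\beta} \), conditioning on \( \lambda_i \) turns the SVM pseudo-likelihood into a Gaussian in \( \boldsymbol{\beta} \) and yields tractable mean field updates.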
Journal of Computational and Graphical Statistics | 2017
Francis K. C. Hui; David I. Warton; John T. Ormerod; Viivi Haapaniemi; Sara Taskinen
Generalized linear latent variable models (GLLVMs) are a powerful class of models for understanding the relationships among multiple, correlated responses. Estimation, however, presents a major challenge, as the marginal likelihood does not possess a closed form for nonnormal responses. We propose a variational approximation (VA) method for estimating GLLVMs. For the common cases of binary, ordinal, and overdispersed count data, we derive fully closed-form approximations to the marginal log-likelihood function. Compared to other methods such as the expectation-maximization algorithm, estimation using VA is fast and straightforward to implement. Predictions of the latent variables and associated uncertainty estimates are also obtained as part of the estimation process. Simulations show that VA estimation performs similarly to or better than some currently available methods, both at predicting the latent variables and estimating their corresponding coefficients. They also show that VA estimation offers dramatic reductions in computation time, particularly when the number of correlated responses is large relative to the number of observational units. We apply the variational approach to two datasets, estimating GLLVMs to understand patterns of variation in youth gratitude and to construct ordination plots for bird abundance data. R code for performing VA estimation of GLLVMs is available online. Supplementary materials for this article are available online.
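To give the flavor of why closed forms can arise (a generic illustration, not the paper's exact derivation), consider a count response with a log link: if the linear predictor has a Gaussian variational distribution \( \eta \sim N(\mu, \sigma^2) \), then the lognormal mean gives

\[
\mathbb{E}_q\{ y\eta - e^{\eta} \} \;=\; y\mu - e^{\mu + \sigma^2/2},
\]

so the expected Poisson-type log-likelihood term is available in closed form without numerical integration.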