
Publication


Featured research published by Christopher C. Drovandi.


Biometrics | 2011

Estimation of Parameters for Macroparasite Population Evolution Using Approximate Bayesian Computation

Christopher C. Drovandi; Anthony N. Pettitt

We estimate the parameters of a stochastic process model for a macroparasite population within a host using approximate Bayesian computation (ABC). The immunity of the host is an unobserved model variable and only mature macroparasites at sacrifice of the host are counted. With very limited data, process rates are inferred reasonably precisely. Modeling involves a three variable Markov process for which the observed data likelihood is computationally intractable. ABC methods are particularly useful when the likelihood is analytically or computationally intractable. The ABC algorithm we present is based on sequential Monte Carlo, is adaptive in nature, and overcomes some drawbacks of previous approaches to ABC. The algorithm is validated on a test example involving simulated data from an autologistic model before being used to infer parameters of the Markov process model for experimental data. The fitted model explains the observed extra-binomial variation in terms of a zero-one immunity variable, which has a short-lived presence in the host.
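The ABC mechanism relied on here can be illustrated with a minimal rejection sampler. The sketch below is a generic outline only, not the adaptive SMC ABC algorithm developed in the paper, and the prior, simulator and tolerance are placeholders to be supplied by the user.

```python
import numpy as np

def abc_rejection(s_obs, prior_sample, simulate, summarise,
                  n_draws=100_000, tolerance=0.1):
    """Generic ABC rejection sampler (illustrative only).

    prior_sample()   -- draw a parameter vector from the prior
    simulate(theta)  -- draw a dataset from the intractable model
    summarise(data)  -- reduce a dataset to a summary-statistic vector
    """
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        s_sim = summarise(simulate(theta))
        # Keep theta when the simulated summary is close to the observed one.
        if np.linalg.norm(np.asarray(s_sim) - np.asarray(s_obs)) <= tolerance:
            accepted.append(theta)
    return np.array(accepted)  # approximate posterior sample
```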


Statistical Science | 2015

Bayesian Indirect Inference Using a Parametric Auxiliary Model

Christopher C. Drovandi; Anthony N. Pettitt; Anthony Lee

Indirect inference (II) is a methodology for estimating the parameters of an intractable (generative) model on the basis of an alternative parametric (auxiliary) model that is both analytically and computationally easier to deal with. Such an approach has been well explored in the classical literature but has received substantially less attention in the Bayesian paradigm. The purpose of this paper is to compare and contrast a collection of what we call parametric Bayesian indirect inference (pBII) methods. One class of pBII methods uses approximate Bayesian computation (referred to here as ABC II) where the summary statistic is formed on the basis of the auxiliary model, using ideas from II. Another approach proposed in the literature, referred to here as parametric Bayesian indirect likelihood (pBIL), is shown to be a fundamentally different approach from ABC II. We devise new theoretical results for pBIL to give extra insights into its behaviour and also its differences from ABC II. Furthermore, we examine in more detail the assumptions required to use each pBII method. The results, insights and comparisons developed in this paper are illustrated on simple examples and two substantive applications. The first of the substantive examples involves performing inference for complex quantile distributions based on simulated data, while the second involves estimating the parameters of a trivariate stochastic process describing the evolution of macroparasites within a host based on real data. We create a novel framework called Bayesian indirect likelihood (BIL) which encompasses pBII as well as general ABC methods so that the connections between the methods can be established.
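One way the auxiliary model enters an ABC II scheme is by supplying the summary statistic: the auxiliary model is fitted to both the observed and the simulated data, and the fitted parameters are compared. The sketch below assumes a simple normal auxiliary model purely for illustration; the applications in the paper use richer auxiliary models.

```python
import numpy as np

def auxiliary_summary(data):
    """Summary statistic from a parametric auxiliary model: here the maximum
    likelihood estimates of a normal model (mean and log standard deviation).
    The normal auxiliary model is an assumption made only for this sketch."""
    data = np.asarray(data, dtype=float)
    return np.array([data.mean(), np.log(data.std(ddof=1))])

def abc_ii_distance(data_obs, data_sim):
    """Discrepancy between observed and simulated data, measured through the
    auxiliary-model parameter estimates as in ABC II style methods."""
    return np.linalg.norm(auxiliary_summary(data_obs) - auxiliary_summary(data_sim))
```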


Journal of Computational and Graphical Statistics | 2014

A Sequential Monte Carlo Algorithm to Incorporate Model Uncertainty in Bayesian Sequential Design

Christopher C. Drovandi; James McGree; Anthony N. Pettitt

This article presents a sequential Monte Carlo (SMC) algorithm that can be used for any one-at-a-time Bayesian sequential design problem in the presence of model uncertainty where discrete data are encountered. Our focus is on adaptive design for model discrimination but the methodology is applicable if one has a different design objective such as parameter estimation or prediction. An SMC algorithm is run in parallel for each model and the algorithm relies on a convenient estimator of the evidence of each model that is essentially a function of importance sampling weights. Methods that rely on quadrature for this task suffer from the curse of dimensionality. Approximating posterior model probabilities in this way allows us to use model discrimination utility functions derived from information theory that were previously difficult to compute except for conjugate models. A major benefit of the algorithm is that it requires very little problem-specific tuning. We demonstrate the methodology on three applications, including discriminating between models for decline in motor neuron numbers in patients suffering from motor neuron disease. Computer code to run one of the examples is provided as online supplementary materials.
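The evidence estimator mentioned above accumulates, at each re-weighting step, the weighted average of the incremental importance weights; normalising the resulting per-model evidences gives the posterior model probabilities used in the discrimination utility. A minimal sketch of those two bookkeeping steps, with illustrative function names:

```python
import numpy as np

def update_log_evidence(log_evidence, normalised_weights, log_incremental_weights):
    """Add one SMC re-weighting step to the running log-evidence estimate:
    the evidence ratio is the weighted average of the incremental weights."""
    ratio = np.sum(normalised_weights * np.exp(log_incremental_weights))
    return log_evidence + np.log(ratio)

def posterior_model_probabilities(log_evidences, log_prior_model_probs):
    """Combine per-model evidences with prior model probabilities and normalise."""
    log_post = np.asarray(log_evidences) + np.asarray(log_prior_model_probs)
    log_post -= log_post.max()          # guard against overflow
    post = np.exp(log_post)
    return post / post.sum()
```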


Computational Statistics & Data Analysis | 2011

Likelihood-free Bayesian estimation of multivariate quantile distributions

Christopher C. Drovandi; Anthony N. Pettitt

In this paper, we present new multivariate quantile distributions and utilise likelihood-free Bayesian algorithms for inferring the parameters. In particular, we apply a sequential Monte Carlo (SMC) algorithm that is adaptive in nature and requires very little tuning compared with other approximate Bayesian computation algorithms. Furthermore, we present a framework for the development of multivariate quantile distributions based on a copula. We consider bivariate and time series extensions of the g-and-k distribution under this framework, and develop an efficient component-wise updating scheme free of likelihood functions to be used within the SMC algorithm. In addition, we trial the set of octiles as summary statistics as well as functions of these that form robust measures of location, scale, skewness and kurtosis. We show that these modifications lead to reasonably precise inferences that are more closely comparable to computationally intensive likelihood-based inference. We apply the quantile distributions and algorithms to simulated data and an example involving daily exchange rate returns.
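The univariate g-and-k distribution underlying these extensions is defined through its quantile function, so simulation only requires pushing uniforms through that function, which is what makes likelihood-free inference natural here. A sketch of the standard quantile function (with the conventional fixed constant c = 0.8) and the octile summaries mentioned above:

```python
import numpy as np
from scipy.stats import norm

def gk_quantile(p, a, b, g, k, c=0.8):
    """Quantile function of the g-and-k distribution (a: location, b: scale,
    g: skewness, k: kurtosis; c = 0.8 is the usual fixed constant)."""
    z = norm.ppf(p)
    return a + b * (1 + c * (1 - np.exp(-g * z)) / (1 + np.exp(-g * z))) * (1 + z ** 2) ** k * z

def simulate_gk(n, a, b, g, k, rng=None):
    """Simulate g-and-k draws by pushing uniforms through the quantile function."""
    rng = np.random.default_rng(rng)
    return gk_quantile(rng.uniform(size=n), a, b, g, k)

def octiles(x):
    """Octiles of a sample, used as candidate summary statistics."""
    return np.quantile(x, np.arange(1, 8) / 8)
```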


Biometrics | 2013

Bayesian experimental design for models with intractable likelihoods.

Christopher C. Drovandi; Anthony N. Pettitt

In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables.
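The design utility described above can be sketched as follows: the ABC posterior at a candidate design is formed by rejection from a bank of pre-computed model simulations, and its precision serves as the utility. The inverse determinant of the posterior covariance used below is one common precision measure chosen for illustration; function names and the acceptance fraction are assumptions of this sketch.

```python
import numpy as np

def abc_posterior_from_bank(y_obs, bank_params, bank_sims, keep_frac=0.01):
    """ABC rejection using pre-computed (parameter, simulated data) pairs:
    retain the parameters whose simulated data lie closest to y_obs."""
    dists = np.linalg.norm(bank_sims - y_obs, axis=1)
    n_keep = max(1, int(keep_frac * len(dists)))
    return bank_params[np.argsort(dists)[:n_keep]]

def precision_utility(posterior_draws):
    """Precision-based utility: inverse determinant of the ABC posterior covariance."""
    cov = np.atleast_2d(np.cov(posterior_draws, rowvar=False))
    return 1.0 / np.linalg.det(cov)
```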


Computational Statistics & Data Analysis | 2013

Sequential Monte Carlo for Bayesian sequentially designed experiments for discrete data

Christopher C. Drovandi; James McGree; Anthony N. Pettitt

In this paper we present a sequential Monte Carlo algorithm for Bayesian sequential experimental design applied to generalised non-linear models for discrete data. The approach is computationally convenient in that the information of newly observed data can be incorporated through a simple re-weighting step. We also consider a flexible parametric model for the stimulus-response relationship together with a newly developed hybrid design utility that can produce more robust estimates of the target stimulus in the presence of substantial model and parameter uncertainty. The algorithm is applied to hypothetical clinical trial or bioassay scenarios. In the discussion, potential generalisations of the algorithm are suggested to possibly extend its applicability to a wide variety of scenarios.
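The re-weighting step referred to above is the standard SMC update: each particle's weight is multiplied by the likelihood of the newly observed response and the weights are re-normalised. A minimal sketch, where the log-likelihood function stands in for whatever discrete-data model is in use:

```python
import numpy as np

def reweight(particles, log_weights, new_obs, log_likelihood):
    """Incorporate one new observation by multiplying particle weights by that
    observation's likelihood contribution, then re-normalising."""
    log_weights = log_weights + np.array(
        [log_likelihood(new_obs, theta) for theta in particles])
    log_weights -= log_weights.max()     # numerical stability
    weights = np.exp(log_weights)
    return np.log(weights / weights.sum())
```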


Mathematical Biosciences | 2015

Quantifying uncertainty in parameter estimates for stochastic models of collective cell spreading using approximate Bayesian computation.

Brenda N. Vo; Christopher C. Drovandi; Anthony N. Pettitt; Matthew J. Simpson

Wound healing and tumour growth involve collective cell spreading, which is driven by individual motility and proliferation events within a population of cells. Mathematical models are often used to interpret experimental data and to estimate the parameters so that predictions can be made. Existing methods for parameter estimation typically assume that these parameters are constants and often ignore any uncertainty in the estimated values. We use approximate Bayesian computation (ABC) to estimate the cell diffusivity, D, and the cell proliferation rate, λ, from a discrete model of collective cell spreading, and we quantify the uncertainty associated with these estimates using Bayesian inference. We use a detailed experimental data set describing the collective cell spreading of 3T3 fibroblast cells. The ABC analysis is conducted for different combinations of initial cell densities and experimental times in two separate scenarios: (i) where collective cell spreading is driven by cell motility alone, and (ii) where collective cell spreading is driven by combined cell motility and cell proliferation. We find that D can be estimated precisely, with a small coefficient of variation (CV) of 2-6%. Our results indicate that D appears to depend on the experimental time, which is a feature that has been previously overlooked. Assuming that the values of D are the same in both experimental scenarios, we use the information about D from the first experimental scenario to obtain reasonably precise estimates of λ, with a CV between 4 and 12%. Our estimates of D and λ are consistent with previously reported values; however, our method is based on a straightforward measurement of the position of the leading edge whereas previous approaches have involved expensive cell counting techniques. Additional insights gained using a fully Bayesian approach justify the computational cost, especially since it allows us to accommodate information from different experiments in a principled way.


PLOS ONE | 2016

Bayesian Estimation of Small Effects in Exercise and Sports Science

Kerrie Mengersen; Christopher C. Drovandi; Christian P. Robert; David B. Pyne; Christopher J. Gore

The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and, in a case study example, to provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL), and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest, were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to be able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a placebo. The conclusions are consistent with those obtained using a ‘magnitude-based inference’ approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
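The probabilistic statements quoted above (for example, a 0.96 probability of a substantially greater increase) are read off directly as the proportion of posterior draws in which the difference between treatments exceeds the smallest worthwhile change. A minimal sketch of that calculation, with variable names chosen for illustration:

```python
import numpy as np

def prob_substantial_difference(effect_draws_a, effect_draws_b, smallest_worthwhile):
    """Posterior probability that treatment A beats treatment B by more than a
    smallest-worthwhile-change threshold, estimated from paired MCMC draws."""
    diff = np.asarray(effect_draws_a) - np.asarray(effect_draws_b)
    return np.mean(diff > smallest_worthwhile)
```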


Journal of Computational and Graphical Statistics | 2018

Bayesian Synthetic Likelihood

Leah F. Price; Christopher C. Drovandi; Anthony Lee; David J. Nott

Having the ability to work with complex models can be highly beneficial. However, complex models often have intractable likelihoods, so methods that involve evaluation of the likelihood function are infeasible. In these situations, the benefits of working with likelihood-free methods become apparent. Likelihood-free methods, such as parametric Bayesian indirect likelihood that uses the likelihood of an alternative parametric auxiliary model, have been explored throughout the literature as a viable alternative when the model of interest is complex. One of these methods is called the synthetic likelihood (SL), which uses a multivariate normal approximation of the distribution of a set of summary statistics. This article explores the accuracy and computational efficiency of the Bayesian version of the synthetic likelihood (BSL) approach in comparison to a competitor known as approximate Bayesian computation (ABC) and its sensitivity to its tuning parameters and assumptions. We relate BSL to pseudo-marginal methods and propose to use an alternative SL that uses an unbiased estimator of the SL, when the summary statistics have a multivariate normal distribution. Several applications of varying complexity are considered to illustrate the findings of this article. Supplemental materials are available online. Computer code for implementing the methods on all examples is available at https://github.com/cdrovandi/Bayesian-Synthetic-Likelihood.
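A minimal sketch of the standard synthetic likelihood evaluated at a single parameter value: simulate several datasets at theta, fit a multivariate normal to their summary statistics, and evaluate that normal density at the observed summary. This is the plug-in (biased) version rather than the unbiased estimator proposed in the paper, and the simulator interface below is an assumption of the sketch. Within BSL this estimate replaces the exact log-likelihood inside a standard MCMC sampler, which the paper relates to pseudo-marginal methods.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_synthetic_likelihood(s_obs, theta, simulate_summary, n_sims=100, rng=None):
    """Plug-in synthetic log-likelihood at theta (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    # Simulate summary statistics under the model at theta.
    sims = np.array([simulate_summary(theta, rng) for _ in range(n_sims)])
    mu = sims.mean(axis=0)
    cov = np.cov(sims, rowvar=False)
    # Evaluate the fitted multivariate normal at the observed summary.
    return multivariate_normal.logpdf(s_obs, mean=mu, cov=cov)
```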


Statistical Communications in Infectious Diseases | 2011

Using approximate Bayesian computation to estimate transmission rates of nosocomial pathogens

Christopher C. Drovandi; Anthony N. Pettitt

In this paper, we apply a simulation-based approach for estimating transmission rates of nosocomial pathogens. In particular, the objective is to infer the transmission rate between colonised health-care practitioners and uncolonised patients (and vice versa) solely from routinely collected incidence data. The method, using approximate Bayesian computation, is substantially less computationally intensive and easier to implement than the likelihood-based approaches we refer to here. We find that, by replacing the likelihood with a comparison of an efficient summary statistic between observed and simulated data, little is lost in the precision of the estimated transmission rates. Furthermore, we investigate the impact of incorporating uncertainty in previously fixed parameters on the precision of the estimated transmission rates.

Collaboration


Dive into Christopher C. Drovandi's collaborations.

Top Co-Authors

Anthony N. Pettitt
Queensland University of Technology

James McGree
Queensland University of Technology

Kerrie Mengersen
Queensland University of Technology

David J. Nott
National University of Singapore

Minh-Ngoc Tran
National University of Singapore

Brenda N. Vo
Queensland University of Technology

Brodie A. J. Lawson
Queensland University of Technology

James D. Doecke
Commonwealth Scientific and Industrial Research Organisation

Jurgen Fripp
Commonwealth Scientific and Industrial Research Organisation