Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Anthony O'Hagan is active.

Publication


Featured research published by Anthony O'Hagan.


Journal of the Royal Statistical Society: Series B (Statistical Methodology) | 2001

Bayesian calibration of computer models

Marc C. Kennedy; Anthony O'Hagan

We consider prediction and uncertainty analysis for systems which are approximated using complex mathematical models. Such models, implemented as computer codes, are often generic in the sense that by a suitable choice of some of the model's input parameters the code can be used to predict the behaviour of the system in a variety of specific applications. However, in any specific application the values of necessary parameters may be unknown. In this case, physical observations of the system in the specific context are used to learn about the unknown parameters. The process of fitting the model to the observed data by adjusting the parameters is known as calibration. Calibration is typically effected by ad hoc fitting, and after calibration the model is used, with the fitted input values, to predict the future behaviour of the system. We present a Bayesian calibration technique which improves on this traditional approach in two respects. First, the predictions allow for all sources of uncertainty, including the remaining uncertainty over the fitted parameters. Second, they attempt to correct for any inadequacy of the model which is revealed by a discrepancy between the observed data and the model predictions from even the best-fitting parameter values. The method is illustrated by using data from a nuclear radiation release at Tomsk, and from a more complex simulated nuclear accident exercise.
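
As a compact illustration of the kind of model the abstract describes, the relationship between a field observation, the simulator and a model-discrepancy term can be sketched as follows (illustrative, simplified notation with a zero-mean error; the paper's full formulation includes additional scale and hyperparameters):

```latex
% Sketch of a Bayesian calibration model with explicit model discrepancy (illustrative notation).
% \eta(\cdot,\cdot): simulator output, \theta: true but unknown calibration parameters,
% \delta(\cdot): model-inadequacy (discrepancy) function, e_i: observation error.
\[
  z_i \;=\; \eta(x_i, \theta) \;+\; \delta(x_i) \;+\; e_i,
  \qquad e_i \sim N(0, \lambda^2),
\]
% Gaussian process priors are placed on \eta and \delta, so that the posterior for \theta,
% \delta and future system behaviour carries both parameter and model-inadequacy uncertainty.
```

Calibration in this framework is then posterior inference about the unknown parameters and the discrepancy jointly, rather than ad hoc curve fitting followed by plug-in prediction.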


Journal of the American Statistical Association | 2005

Statistical Methods for Eliciting Probability Distributions

Paul H. Garthwaite; Joseph B. Kadane; Anthony O'Hagan

Elicitation is a key task for subjectivist Bayesians. Although skeptics hold that elicitation cannot (or perhaps should not) be done, in practice it brings statisticians closer to their clients and subject-matter expert colleagues. This article reviews the state of the art, reflecting the experience of statisticians informed by the fruits of a long line of psychological research into how people represent uncertain information cognitively and how they respond to questions about that information. In a discussion of the elicitation process, the first issue to address is what it means for an elicitation to be successful; that is, what criteria should be used. Our answer is that a successful elicitation faithfully represents the opinion of the person being elicited. It is not necessarily “true” in some objectivistic sense, and cannot be judged in that way. We see elicitation as simply part of the process of statistical modeling. Indeed, in a hierarchical model, the point at which the likelihood ends and the prior begins is ambiguous. Thus the same kinds of judgment that inform statistical modeling in general also inform elicitation of prior distributions. The psychological literature suggests that people are prone to certain heuristics and biases in how they respond to situations involving uncertainty. As a result, some ways of asking questions about uncertain quantities are preferable to others, and appear to be more reliable. However, data are lacking on exactly how well the various methods work, because it is unclear, other than through an elicitation method itself, just what the person actually believes. Consequently, one is reduced to indirect means of assessing elicitation methods. The tool chest of methods is growing. Historically, the first methods involved choosing hyperparameters of conjugate prior families, at a time when these were the only families for which posterior distributions could be computed. Modern computational methods, such as Markov chain Monte Carlo, have freed elicitation from this constraint. As a result, both parametric and nonparametric methods are now available for low-dimensional problems. High-dimensional problems are probably best thought of as lacking another hierarchical level; introducing one has the effect of reducing the as-yet-unelicited parameter space. Special considerations apply to the elicitation of group opinions. Informal methods, such as Delphi, encourage the participants to discuss the issue in the hope of reaching consensus. Formal methods, such as weighted averages or logarithmic opinion pools, each have mathematical characteristics that are uncomfortable. Finally, there is the question of what a group opinion even means, because it is not necessarily the opinion of any participant.
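
As a concrete illustration of the parametric, conjugate-prior end of this tool chest, here is a minimal sketch (hypothetical elicited values, using scipy; not taken from the article) of choosing Beta hyperparameters so that the prior's quantiles match an expert's stated judgements:

```python
# A minimal sketch of quantile-matching elicitation for a conjugate Beta prior.
# The elicited values below are hypothetical, purely for illustration.
import numpy as np
from scipy import stats, optimize

# Hypothetical expert judgements about an uncertain proportion p:
# median 0.20 and 95th percentile 0.40.
elicited = {0.50: 0.20, 0.95: 0.40}

def quantile_mismatch(log_params):
    a, b = np.exp(log_params)  # work on the log scale to keep shape parameters positive
    return sum((stats.beta.ppf(q, a, b) - v) ** 2 for q, v in elicited.items())

res = optimize.minimize(quantile_mismatch, x0=np.log([2.0, 8.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)

print(f"Fitted prior: Beta({a_hat:.2f}, {b_hat:.2f})")
print("Implied 5th/50th/95th percentiles:",
      np.round(stats.beta.ppf([0.05, 0.50, 0.95], a_hat, b_hat), 3))
```

Feeding the fitted quantiles back to the expert, as the article's criterion of faithful representation suggests, is the natural check on whether the chosen parametric family is adequate.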


Reliability Engineering & System Safety | 2006

Bayesian analysis of computer code outputs: A tutorial

Anthony O'Hagan

The Bayesian approach to quantifying, analysing and reducing uncertainty in the application of complex process models is attracting increasing attention amongst users of such models. The range and power of Bayesian methods are growing, and there is already a sizeable literature on these methods; however, most of it is in specialist statistical journals. The purpose of this tutorial is to introduce the more general reader to the Bayesian approach.


Archive | 2006

Uncertain Judgements: Eliciting Experts' Probabilities

Anthony O'Hagan; Caitlin E. Buck; Alireza Daneshkhah; J. Richard Eiser; Paul H. Garthwaite; David Jenkinson; Jeremy E. Oakley; Tim Rakow

Elicitation is the process of extracting expert knowledge about some unknown quantity or quantities, and formulating that information as a probability distribution. Elicitation is important in situations, such as modelling the safety of nuclear installations or assessing the risk of terrorist attacks, where expert knowledge is essentially the only source of good information. It also plays a major role in other contexts by augmenting scarce observational data, through the use of Bayesian statistical methods. However, elicitation is not a simple task, and practitioners need to be aware of a wide range of research findings in order to elicit expert judgements accurately and reliably. Uncertain Judgements introduces the area, before guiding the reader through the study of appropriate elicitation methods, illustrated by a variety of multi-disciplinary examples.


Health Economics | 2011

Review of statistical methods for analysing healthcare resources and costs.

Borislava Mihaylova; Andrew Briggs; Anthony O'Hagan; Simon G. Thompson

We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs, focusing on randomised trials, although the methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of the above data characteristics, may be preferable, but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work.
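
By way of illustration of category (III) above, the following is a minimal sketch (simulated data and statsmodels; not taken from the review) of a single-distribution GLM for skewed, strictly positive cost data, using a Gamma family with a log link:

```python
# A minimal sketch of a Gamma GLM with log link for right-skewed, positive cost data.
# Data are simulated purely for illustration; the trial-arm effect is hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
treatment = rng.integers(0, 2, n)              # hypothetical trial arm indicator
mu = np.exp(7.0 + 0.3 * treatment)             # mean cost on the natural scale
costs = rng.gamma(shape=2.0, scale=mu / 2.0)   # skewed, strictly positive costs

X = sm.add_constant(treatment)                 # columns: intercept, treatment
model = sm.GLM(costs, X, family=sm.families.Gamma(link=sm.families.links.Log()))
fit = model.fit()

print(fit.summary())
print("Estimated cost ratio (treated vs control):", np.exp(fit.params[1]))
```

The log link keeps predictions positive and gives the treatment coefficient a multiplicative (cost-ratio) interpretation, which is one reason such GLMs sit between the very simple normal-based methods and the more complex mixture or multi-part models listed above.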


Technometrics | 2009

Diagnostics for Gaussian Process Emulators

Leonardo S. Bastos; Anthony O'Hagan

Mathematical models, usually implemented in computer programs known as simulators, are widely used in all areas of science and technology to represent complex real-world phenomena. Simulators are often so complex that they take appreciable amounts of computer time or other resources to run. In this context, a methodology has been developed based on building a statistical representation of the simulator, known as an emulator. The principal approach to building emulators uses Gaussian processes. This work presents some diagnostics to validate and assess the adequacy of a Gaussian process emulator as a surrogate for the simulator. These diagnostics are based on comparisons between simulator outputs and Gaussian process emulator outputs for some test data, known as validation data, defined by a sample of simulator runs not used to build the emulator. Our diagnostics take care to account for correlation between the validation data points. To illustrate a validation procedure, we apply these diagnostics to two different data sets.
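
The following is a minimal sketch of diagnostics in this spirit (a toy one-dimensional simulator and scikit-learn's GaussianProcessRegressor as the emulator; not the paper's implementation): individual standardized errors plus a Mahalanobis-type distance that uses the full predictive covariance across the validation points.

```python
# A minimal sketch of Gaussian process emulator validation diagnostics on held-out runs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(x):                               # toy stand-in for an expensive computer model
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 3, 12).reshape(-1, 1)  # design points used to build the emulator
X_valid = rng.uniform(0, 3, 6).reshape(-1, 1)   # held-out validation runs
y_train = simulator(X_train).ravel()
y_valid = simulator(X_valid).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-8, normalize_y=True)
gp.fit(X_train, y_train)

mean, cov = gp.predict(X_valid, return_cov=True)
cov = cov + 1e-9 * np.eye(len(y_valid))         # small jitter for numerical stability

resid = y_valid - mean
std_errors = resid / np.sqrt(np.clip(np.diag(cov), 1e-12, None))  # individual standardized errors
mahalanobis = resid @ np.linalg.solve(cov, resid)                  # joint diagnostic using full covariance

print("standardized errors:", np.round(std_errors, 2))
print("Mahalanobis distance:", round(float(mahalanobis), 2),
      "(unusually large values suggest the emulator is a poor surrogate)")
```

The joint statistic is the part that accounts for correlation between validation points; looking only at the individual standardized errors can miss systematic misfit when nearby validation runs are strongly correlated.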


Reliability Engineering & System Safety | 2004

Probability is Perfect, but We Can't Elicit it Perfectly

Anthony O'Hagan; Jeremy E. Oakley

There are difficulties with probability as a representation of uncertainty. However, we argue that there is an important distinction between principle and practice. In principle, probability is uniquely appropriate for the representation and quantification of all forms of uncertainty; it is in this sense that we claim that ‘probability is perfect’. In practice, people find it difficult to express their knowledge and beliefs in probabilistic form, so that elicitation of probability distributions is a far from perfect process. We therefore argue that there is no need for alternative theories, but that any practical elicitation of expert knowledge must fully acknowledge imprecision in the resulting distribution. We outline a recently developed Bayesian technique that allows the imprecision in elicitation to be formulated explicitly, and apply it to some of the challenge problems.


Journal of Statistical Planning and Inference | 1991

Bayes–Hermite quadrature

Anthony O'Hagan

Bayesian quadrature treats the problem of numerical integration as one of statistical inference. A prior Gaussian process distribution is assumed for the integrand, observations arise from evaluating the integrand at selected points, and a posterior distribution is derived for the integrand and the integral. Methods are developed for quadrature in R^p. A particular application is integrating the posterior density arising from some other Bayesian analysis. Simulation results are presented to show that the resulting Bayes–Hermite quadrature rules may perform better than conventional Gauss–Hermite rules for this application. A key result is derived for product designs, which makes Bayesian quadrature practically useful for integrating in several dimensions. Although the method does not at present provide a solution to the more difficult problem of quadrature in high dimensions, it does seem to offer real improvements over existing methods in relatively low dimensions.
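
For reference, the Gaussian process quadrature posterior can be written compactly as follows (a sketch assuming a zero prior mean and fixed hyperparameters; the paper's construction is more general):

```latex
% Posterior for I = \int f(x)\,d\Pi(x), given evaluations y_i = f(x_i) and a GP prior f ~ GP(0, k):
\[
  \mathbb{E}[I \mid y] \;=\; z^{\top} K^{-1} y,
  \qquad
  \operatorname{Var}[I \mid y] \;=\; \iint k(x, x')\, d\Pi(x)\, d\Pi(x') \;-\; z^{\top} K^{-1} z,
\]
\[
  \text{where } z_i = \int k(x, x_i)\, d\Pi(x)
  \quad\text{and}\quad
  K_{ij} = k(x_i, x_j).
\]
% With a Gaussian covariance k and a Gaussian weight measure \Pi, the integrals defining z and the
% prior variance of I are available in closed form, which is what makes Bayes–Hermite rules computable.
```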


BMJ | 2003

Modelling the cost effectiveness of interferon beta and glatiramer acetate in the management of multiple sclerosis. Commentary: evaluating disease modifying treatments in multiple sclerosis.

Jim Chilcott; Christopher McCabe; Paul Tappenden; Anthony O'Hagan; Nicola J. Cooper; Keith R. Abrams; Karl Claxton; David H. Miller

Objective: To evaluate the cost effectiveness of four disease modifying treatments (interferon betas and glatiramer acetate) for relapsing remitting and secondary progressive multiple sclerosis in the United Kingdom. Design: Modelling cost effectiveness. Setting: UK NHS. Participants: Patients with relapsing remitting multiple sclerosis and secondary progressive multiple sclerosis. Main outcome measures: Cost per quality adjusted life year gained. Results: The base case cost per quality adjusted life year gained by using any of the four treatments ranged from £42 000 ($66 469; €61 630) to £98 000 based on efficacy information in the public domain. Uncertainty analysis suggests that the probability of any of these treatments having a cost effectiveness better than £20 000 at 20 years is below 20%. The key determinants of cost effectiveness were the time horizon, the progression of patients after stopping treatment, differential discount rates, and the price of the treatments. Conclusions: Cost effectiveness varied markedly between the interventions. Uncertainty around point estimates was substantial. This uncertainty could be reduced by conducting research on the true magnitude of the effect of these drugs, the progression of patients after stopping treatment, the costs of care, and the quality of life of the patients. Price was the key modifiable determinant of the cost effectiveness of these treatments.

What is already known on this topic: Interferon beta and glatiramer acetate are the only disease modifying therapies used to treat multiple sclerosis. Economic evaluations of these drugs have had flaws in the specification of the course of the disease, efficacy, duration of treatment, mortality, and the analysis of uncertainty. None of the existing estimates of cost effectiveness can be viewed as robust.

What this study adds: The cost per quality adjusted life year gained is unlikely to be less than £40 000 for interferon beta or glatiramer acetate. Experience after stopping treatment is a key determinant of the cost effectiveness of these therapies. Key factors affecting point estimates of cost effectiveness are the cost of interferon beta and glatiramer acetate, the effect of these therapies on disease progression, and the time horizon evaluated.


Medical Decision Making | 2007

Calculating Partial Expected Value of Perfect Information via Monte Carlo Sampling Algorithms

Alan Brennan; Samer A. Kharroubi; Anthony O'Hagan; Jim Chilcott

Collaboration


Dive into Anthony O'Hagan's collaboration.

Top Co-Authors

John Stevens
University of Sheffield

Jim Chilcott
University of Sheffield