Jeremy E. Oakley
University of Sheffield
Publications
Featured research published by Jeremy E. Oakley.
Archive | 2006
Anthony O'Hagan; Caitlin E. Buck; Alireza Daneshkhah; J. Richard Eiser; Paul H. Garthwaite; David Jenkinson; Jeremy E. Oakley; Tim Rakow
Elicitation is the process of extracting expert knowledge about some unknown quantity or quantities, and formulating that information as a probability distribution. Elicitation is important in situations, such as modelling the safety of nuclear installations or assessing the risk of terrorist attacks, where expert knowledge is essentially the only source of good information. It also plays a major role in other contexts by augmenting scarce observational data, through the use of Bayesian statistical methods. However, elicitation is not a simple task, and practitioners need to be aware of a wide range of research findings in order to elicit expert judgements accurately and reliably. Uncertain Judgements introduces the area, before guiding the reader through the study of appropriate elicitation methods, illustrated by a variety of multi-disciplinary examples.
Reliability Engineering & System Safety | 2004
Anthony O'Hagan; Jeremy E. Oakley
There are difficulties with probability as a representation of uncertainty. However, we argue that there is an important distinction between principle and practice. In principle, probability is uniquely appropriate for the representation and quantification of all forms of uncertainty; it is in this sense that we claim that ‘probability is perfect’. In practice, people find it difficult to express their knowledge and beliefs in probabilistic form, so that elicitation of probability distributions is a far from perfect process. We therefore argue that there is no need for alternative theories, but that any practical elicitation of expert knowledge must fully acknowledge imprecision in the resulting distribution. We outline a recently developed Bayesian technique that allows the imprecision in elicitation to be formulated explicitly, and apply it to some of the challenge problems.
Medical Decision Making | 2014
Mark Strong; Jeremy E. Oakley; Alan Brennan
The partial expected value of perfect information (EVPI) quantifies the expected benefit of learning the values of uncertain parameters in a decision model. Partial EVPI is commonly estimated via a 2-level Monte Carlo procedure in which parameters of interest are sampled in an outer loop, and then conditional on these, the remaining parameters are sampled in an inner loop. This is computationally demanding and may be difficult if correlation between input parameters results in conditional distributions that are hard to sample from. We describe a novel nonparametric regression-based method for estimating partial EVPI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method is applicable in a model of any complexity and with any specification of input parameter distribution. We describe the implementation of the method via 2 nonparametric regression modeling approaches, the Generalized Additive Model and the Gaussian process. We demonstrate in 2 case studies the superior efficiency of the regression method over the 2-level Monte Carlo method. R code is made available to implement the method.
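The core of the regression method can be sketched in a few lines. Below is a minimal illustration using scikit-learn's Gaussian process regression as the smoother; the toy two-decision net benefit model, the parameter names, and the kernel choice are assumptions for the sketch, not the authors' published R implementation.

```python
# Sketch of the regression-based partial EVPI estimator: regress sampled
# net benefits on the parameter(s) of interest, then use the fitted values
# in place of inner-loop conditional expectations. Toy model is illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
n = 500

# Probabilistic sensitivity analysis sample: one joint draw of all
# parameters, plus the resulting net benefit for each decision option.
theta = rng.normal(0.0, 1.0, n)         # parameter(s) of interest
phi = rng.normal(0.5 * theta, 1.0)      # remaining (correlated) parameters
nb = np.column_stack([1000 + 500 * theta + 300 * phi,
                      1200 + 200 * theta + 400 * phi])

# Regress each decision's net benefit on theta; the fitted values estimate
# E[NB_d | theta] at each sampled theta, with no inner-loop sampling.
X = theta.reshape(-1, 1)
fitted = np.column_stack([
    GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    .fit(X, nb[:, d]).predict(X)
    for d in range(nb.shape[1])])

# Partial EVPI = E_theta[ max_d E[NB_d | theta] ] - max_d E[NB_d]
evpi = fitted.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"Regression-based partial EVPI estimate: {evpi:.1f}")
```

The same construction works with any smoother that estimates the conditional mean of net benefit, such as a generalized additive model.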
Medical Decision Making | 2004
Matt Stevenson; Jeremy E. Oakley; Jim Chilcott
Individual patient-level models can simulate more complex disease processes than cohort-based approaches. However, large numbers of patients need to be simulated to reduce 1st-order uncertainty, increasing the computational time required and often resulting in the inability to perform extensive sensitivity analyses. A solution, employing Gaussian process techniques, is presented using a case study evaluating the cost-effectiveness of a sample of treatments for established osteoporosis. The Gaussian process model accurately formulated a statistical relationship between the inputs to the individual patient model and its outputs. This model reduced the time required for future runs from 150 min to virtually instantaneous, allowing probabilistic sensitivity analyses to be undertaken. This reduction in computational time was achieved with minimal loss in accuracy. The authors believe that this case study demonstrates the value of this technique in handling 1st- and 2nd-order uncertainty in the context of health economic modeling, particularly when more widely used techniques are computationally expensive or are unable to accurately model patient histories.
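To illustrate the general idea of emulation: a Gaussian process is fitted to a modest number of runs of the expensive model, after which cheap predictions from the fitted process stand in for further runs. The toy "simulator" and settings below are assumptions for the sketch, not the osteoporosis model itself.

```python
# A minimal sketch of Gaussian process emulation: fit a GP to a small
# number of runs of a (stand-in) expensive simulator, then use cheap GP
# predictions in place of further runs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulator(x):
    # Stand-in for a slow individual patient-level model.
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 2, 12).reshape(-1, 1)   # 12 "expensive" runs
y_train = expensive_simulator(X_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                              normalize_y=True)
gp.fit(X_train, y_train)

# Thousands of emulator evaluations are now effectively instantaneous,
# and come with predictive uncertainty for sensitivity analysis.
X_new = np.linspace(0, 2, 5000).reshape(-1, 1)
mean, sd = gp.predict(X_new, return_std=True)
print(mean[:3], sd[:3])
```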
Technometrics | 2013
Thomas E. Fricker; Jeremy E. Oakley; Nathan M. Urban
The Gaussian process regression model is a popular type of “emulator” used as a fast surrogate for computationally expensive simulators (deterministic computer models). For simulators with multivariate output, common practice is to specify a separable covariance structure for the Gaussian process. Though computationally convenient, this can be too restrictive, leading to poor performance of the emulator, particularly when the different simulator outputs represent different physical quantities. Also, treating the simulator outputs as independent can lead to inappropriate representations of joint uncertainty. We develop nonseparable covariance structures for Gaussian process emulators, based on the linear model of coregionalization and convolution methods. Using two case studies, we compare the performance of these covariance structures both with standard separable covariance structures and with emulators that assume independence between the outputs. In each case study, we find that only emulators with nonseparable covariance structures have sufficient flexibility both to give good predictions and to represent joint uncertainty about the simulator outputs appropriately. This article has supplementary material online.
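The linear model of coregionalization mentioned above builds a nonseparable multivariate covariance by mixing independent latent processes that each have their own correlation length. A minimal numerical sketch, with illustrative mixing weights and length-scales:

```python
# Sketch of a linear model of coregionalization (LMC) covariance for a
# two-output emulator. Each latent process has its own length-scale, so the
# implied cross-covariance is nonseparable. All values are illustrative.
import numpy as np

def rbf(x1, x2, lengthscale):
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

x = np.linspace(0, 1, 50)

# Two latent GPs with different length-scales, mixed into two outputs.
A = np.array([[1.0, 0.3],    # rows: outputs, columns: latent processes
              [0.4, 1.0]])
lengthscales = [0.1, 0.5]

# Full (2n x 2n) covariance: sum_j outer(A[:, j]) kron k_j(x, x')
K = sum(np.kron(np.outer(A[:, j], A[:, j]), rbf(x, x, lengthscales[j]))
        for j in range(2))

# Draw one joint sample over both outputs to illustrate the dependence.
rng = np.random.default_rng(2)
z = rng.multivariate_normal(np.zeros(2 * len(x)),
                            K + 1e-8 * np.eye(2 * len(x)))
y1, y2 = z[:len(x)], z[len(x):]
print(y1[:3], y2[:3])
```

Because the two latent kernels have different length-scales, this covariance cannot be factored into a single input kernel times a fixed between-output matrix, which is precisely the separability restriction being relaxed.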
The Statistician | 2002
Jeremy E. Oakley
We consider the problem of eliciting expert knowledge about the output of a deterministic computer code, where the output is a function of a vector of input variables. A Gaussian process prior is assumed for the unknown function, and expert judgments about the output at various inputs are used to find suitable hyperparameters of the Gaussian process prior distribution. An example is presented involving the movement of radionuclides in the food chain.
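As a rough illustration of the idea (a simplified scheme, not the paper's fitting procedure), one can treat elicited medians as function values and use elicited interval widths to set the prior variance; the inputs and numbers below are invented for the sketch.

```python
# Illustrative sketch: set GP prior hyperparameters from expert judgements
# about a simulator output, then condition the prior on elicited medians.
# The moment-matching scheme here is a simplification for illustration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

# Expert judgements about y = f(x) at three inputs (invented values):
x_elicit = np.array([[0.1], [0.5], [0.9]])
median = np.array([2.0, 3.5, 2.8])           # elicited medians
half_width_95 = np.array([0.8, 0.6, 1.0])    # elicited 95% half-widths

# For a normal distribution a 95% half-width is about 1.96 sd, so set the
# prior signal variance from the average elicited uncertainty.
signal_sd = (half_width_95 / 1.96).mean()

kernel = ConstantKernel(signal_sd**2) * RBF(length_scale=0.3)
gp = GaussianProcessRegressor(kernel=kernel, optimizer=None)  # keep fixed
gp.fit(x_elicit, median)   # condition the prior on the elicited medians

# The GP now reproduces the expert's medians and quantifies uncertainty
# about the code output at inputs that were not elicited.
mean, sd = gp.predict(np.array([[0.3], [0.7]]), return_std=True)
print(mean, sd)
```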
Journal of Health Economics | 2010
Jeremy E. Oakley; Alan Brennan; Paul Tappenden; Jim Chilcott
Partial expected value of perfect information (EVPI) quantifies the value of removing uncertainty about unknown parameters in a decision model. EVPIs can be computed via Monte Carlo methods. An outer loop samples values of the parameters of interest, and an inner loop samples the remaining parameters from their conditional distribution. This nested Monte Carlo approach can result in biased estimates if small numbers of inner samples are used and can require a large number of model runs for accurate partial EVPI estimates. We present a simple algorithm to estimate the EVPI bias and confidence interval width for a specified number of inner and outer samples. The algorithm uses a relatively small number of model runs (we suggest approximately 600), is quick to compute, and can help determine how many outer and inner iterations are needed for a desired level of accuracy. We test our algorithm using three case studies.
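For reference, the nested estimator in question can be sketched as follows; the toy model with correlated parameters is an assumption, and the paper's bias and confidence-interval-width algorithm itself is not reproduced here.

```python
# Sketch of the nested (2-level) Monte Carlo partial EVPI estimator whose
# bias the paper's algorithm assesses. Toy model is illustrative.
import numpy as np

rng = np.random.default_rng(3)

def net_benefit(theta, phi):
    # Toy net benefits for two decisions as a function of two parameters.
    return np.stack([1000 + 500 * theta + 300 * phi,
                     1200 + 200 * theta + 400 * phi], axis=-1)

n_outer, n_inner = 100, 50
inner_means = np.empty((n_outer, 2))
for i in range(n_outer):
    theta_i = rng.normal()                         # outer: parameter of interest
    phi = rng.normal(0.5 * theta_i, 1.0, n_inner)  # inner: phi | theta
    inner_means[i] = net_benefit(theta_i, phi).mean(axis=0)

# Partial EVPI = E_theta[ max_d E[NB_d | theta] ] - max_d E[NB_d]
evpi = inner_means.max(axis=1).mean() - inner_means.mean(axis=0).max()
print(f"Nested MC partial EVPI estimate: {evpi:.1f}")
# Small n_inner biases the first term upward: it takes the max of noisy means.
```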
Environmental Modelling and Software | 2014
David E. Morris; Jeremy E. Oakley; John A. Crowe
We present a web-based probability distribution elicitation tool: The MATCH Uncertainty Elicitation Tool. The Tool is designed to help elicit probability distributions about uncertain model parameters from experts, in situations where suitable data is either unavailable or sparse. The Tool is free to use, and offers five different techniques for eliciting univariate probability distributions. A key feature of the Tool is that users can log in from different sites and view and interact with the same graphical displays, so that expert elicitation sessions can be conducted remotely (in conjunction with tele- or videoconferencing). This will make probability elicitation easier in situations where it is difficult to interview experts in person. Even when conducting elicitation remotely, interviewers will be able to follow good elicitation practice, advise the experts, and provide instantaneous feedback and assistance.
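A standard technique underlying tools of this kind is to fit a parametric distribution to an expert's elicited quantiles by least squares. The sketch below illustrates that general approach (invented quartile values, beta distribution assumed); it is not a description of the Tool's internals.

```python
# Sketch of quantile-matching elicitation: choose beta distribution
# parameters so its quartiles match the expert's stated quartiles.
import numpy as np
from scipy import stats, optimize

# Expert's elicited lower quartile, median, and upper quartile for a
# proportion-valued quantity (invented values).
probs = np.array([0.25, 0.5, 0.75])
quantiles = np.array([0.30, 0.40, 0.55])

def loss(params):
    a, b = np.exp(params)   # keep shape parameters positive
    return np.sum((stats.beta.ppf(probs, a, b) - quantiles) ** 2)

res = optimize.minimize(loss, x0=np.log([2.0, 2.0]), method="Nelder-Mead")
a, b = np.exp(res.x)
print(f"Fitted Beta({a:.2f}, {b:.2f}); 95% interval:",
      stats.beta.ppf([0.025, 0.975], a, b))
```

Feeding the fitted distribution's quantiles back to the expert for checking and revision is the usual next step in an elicitation session.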
Medical Decision Making | 2013
Mark Strong; Jeremy E. Oakley
The value of learning an uncertain input in a decision model can be quantified by its partial expected value of perfect information (EVPI). This is commonly estimated via a 2-level nested Monte Carlo procedure in which the parameter of interest is sampled in an outer loop, and then conditional on this sampled value, the remaining parameters are sampled in an inner loop. This 2-level method can be difficult to implement if the joint distribution of the inner-loop parameters conditional on the parameter of interest is not easy to sample from. We present a simple alternative 1-level method for calculating partial EVPI for a single parameter that avoids the need to sample directly from the potentially problematic conditional distributions. We derive the sampling distribution of our estimator and show in a case study that it is both statistically and computationally more efficient than the 2-level method.
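One way to realize a 1-level estimator in this spirit (a hedged sketch, not necessarily the paper's exact construction) is to sort the single joint PSA sample by the parameter of interest, partition it into blocks, and use within-block means in place of inner-loop conditional expectations; the block count and toy model below are illustrative choices.

```python
# Sketch of a 1-level partial EVPI estimator for a single parameter:
# no sampling from conditional distributions is required.
import numpy as np

rng = np.random.default_rng(4)
n, n_blocks = 10_000, 100

# One joint PSA sample of all parameters and the resulting net benefits.
theta = rng.normal(size=n)
phi = rng.normal(0.5 * theta, 1.0)
nb = np.stack([1000 + 500 * theta + 300 * phi,
               1200 + 200 * theta + 400 * phi], axis=1)

order = np.argsort(theta)                   # sort by parameter of interest
blocks = np.array_split(nb[order], n_blocks)
block_means = np.array([b.mean(axis=0) for b in blocks])  # ~ E[NB | theta]

evpi = block_means.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"1-level partial EVPI estimate: {evpi:.1f}")
```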
Journal of Health Services Research & Policy | 2008
Jonathan Karnon; Aileen McIntosh; Joanne Dean; Peter A. Bath; Allen Hutchinson; Jeremy E. Oakley; Nicky Thomas; Peter Pratt; Louise Freeman-Parry; Ben-Tzion Karsh; Tejal K. Gandhi; Paul Tappenden
Objectives: The aim of this study is to estimate the potential costs and benefits of three key interventions (computerized physician order entry [CPOE], additional ward pharmacists and bar coding) to help prioritize research to reduce medication errors. Methods: A generic model structure was developed to describe the incidence and impacts of medication errors in hospitals. The model follows pathways from medication error points at alternative stages of the medication pathway through to the outcomes of undetected errors. The model was populated from a systematic review of the medication errors literature combined with novel probabilistic calibration methods. Cost ranges were applied to the interventions, the treatment of preventable adverse drug events (pADEs), and the value of the health lost as a result of an ADE. Results: The model predicts annual health service costs of between £0.3 million and £1 million for the treatment of pADEs in a 400-bed acute hospital in the UK. Including only health service costs, it is uncertain whether any of the three interventions will produce positive net benefits, particularly if high intervention costs are assumed. When the monetary value of lost health is included, all three interventions have a high probability of producing positive net benefits, with a mean estimate of around £31.5 million for CPOE over a five-year time horizon. Conclusions: The results identify the potential cost-effectiveness of interventions aimed at reducing medication errors, as well as identifying key drivers of cost-effectiveness that should be specifically addressed in the design of primary evaluations of medication error interventions.