Publications


Featured research published by Anthony O’Hagan.


Reliability Engineering & System Safety | 2006

Case studies in Gaussian process modelling of computer codes

Marc C. Kennedy; Clive W. Anderson; Stefano Conti; Anthony O’Hagan

Abstract In this paper we present a number of recent applications in which an emulator of a computer code is created using a Gaussian process model. Tools are then applied to the emulator to perform sensitivity analysis and uncertainty analysis. Sensitivity analysis is used both as an aid to model improvement and as a guide to how much the output uncertainty might be reduced by learning about specific inputs. Uncertainty analysis allows us to reflect output uncertainty due to unknown input parameters, when the finished code is used for prediction. The computer codes themselves are currently being developed within the UK Centre for Terrestrial Carbon Dynamics.
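The sensitivity-analysis use described above, gauging how much output uncertainty might be reduced by learning a specific input, can be sketched with a first-order (main-effect) Sobol' index. This is an illustrative stand-in, not the paper's emulator-based method: the toy `model` function and its input ranges are assumptions, and for clarity the index is estimated on the cheap toy function directly (by the standard pick-freeze trick) rather than through a Gaussian process emulator.

```python
import numpy as np

rng = np.random.default_rng(3)

# Cheap stand-in for a deterministic computer code with two uncertain inputs.
def model(x1, x2):
    return np.sin(x1) + 0.3 * x2**2

n = 200_000
x1 = rng.uniform(-np.pi, np.pi, n)
x2 = rng.uniform(-1.0, 1.0, n)
y = model(x1, x2)
var_y = y.var()

# Main-effect (first-order Sobol') index of x1 via pick-freeze:
# rerun the model with the same x1 but freshly drawn x2; the covariance
# of the two output sets equals Var(E[y | x1]).
x2_new = rng.uniform(-1.0, 1.0, n)
y_frozen = model(x1, x2_new)
s1 = np.cov(y, y_frozen)[0, 1] / var_y
print(s1)
```

A value of `s1` near 1 indicates that learning `x1` exactly would remove nearly all of the output variance, which is the kind of "guide to model improvement" the abstract refers to.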


PharmacoEconomics | 2000

Inference for the Cost-Effectiveness Acceptability Curve and Cost-Effectiveness Ratio

Anthony O’Hagan; John Stevens; Jacques Montmartin

Abstract The aim of this article is to consider Bayesian and frequentist inference methods for measures of incremental cost effectiveness in data obtained via a clinical trial. The most useful measure is the cost-effectiveness (C/E) acceptability curve. Recent publications on Bayesian estimation have assumed a normal posterior distribution, which ignores uncertainty in estimated variances, and suggest unnecessarily complicated methods of computation. We present a simple Bayesian computation for the C/E acceptability curve and a simple frequentist analogue. Our approach takes account of errors in estimated variances, resulting in calculations that are based on t distributions rather than normal distributions. If inference is required about the C/E ratio, we argue that the standard frequentist procedures give unreliable or misleading inferences, and present instead a Bayesian interval.
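The acceptability curve itself is simple to compute from posterior draws: at each willingness-to-pay value it is the posterior probability that the net monetary benefit is positive. The sketch below is illustrative only; the normal posteriors and all numbers are invented for the demo, whereas in the paper's setting the draws would come from the Bayesian analysis of the trial data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior draws of incremental effect (ΔE) and cost (ΔC).
delta_e = rng.normal(0.5, 0.2, 50_000)    # e.g. QALYs gained
delta_c = rng.normal(1000, 400, 50_000)   # cost difference

def ceac(lam):
    """P(net monetary benefit > 0) at willingness-to-pay lam:
    one point on the cost-effectiveness acceptability curve."""
    return np.mean(lam * delta_e - delta_c > 0)

for lam in (1000, 2000, 5000):
    print(lam, ceac(lam))
```

Plotting `ceac(lam)` over a grid of `lam` values traces the full curve; it rises with willingness-to-pay whenever the treatment is, on average, effective but costly.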


Statistical Methods in Medical Research | 2002

Bayesian methods for design and analysis of cost-effectiveness trials in the evaluation of health care technologies.

Anthony O’Hagan; John Stevens

We review the development of Bayesian statistical methods for the design and analysis of randomized controlled trials in the assessment of the cost-effectiveness of health care technologies. We place particular emphasis on the benefits of the Bayesian approach; the implications of skew cost data; the need to model the data appropriately to generate efficient and robust inferences instead of relying on distribution-free methods; the importance of making full use of quantitative and structural prior information to produce realistic inferences; and issues in the determination of sample size. Several new examples are presented to illustrate the methods. We conclude with a discussion of the key areas for future research.
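One of the points above, modelling skew cost data appropriately rather than relying on distribution-free methods, can be illustrated with a toy comparison. The lognormal model and every number below are assumptions for the sketch, not the paper's analysis: if costs really are lognormal, an estimator of the mean that uses that structure tracks the true mean using the log-scale sample moments.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical skewed per-patient costs, simulated here as lognormal.
mu, sigma = 7.0, 1.2
true_mean = np.exp(mu + sigma**2 / 2)   # lognormal population mean

costs = rng.lognormal(mu, sigma, 200)

# Distribution-free estimate: the raw sample mean (high variance under skew).
naive = costs.mean()

# Model-based estimate assuming lognormal costs: exp(m + s²/2)
# with m, s² the sample mean and variance of the log costs.
logs = np.log(costs)
model_based = np.exp(logs.mean() + logs.var(ddof=1) / 2)

print(true_mean, naive, model_based)
```

The design point is efficiency: with heavily skewed costs the sample mean is dominated by a few large values, while the model-based estimator pools information through the log scale, at the price of a parametric assumption that must itself be checked.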


Archive | 2002

Bayesian Analysis of Computer Model Outputs

Jeremy E. Oakley; Anthony O’Hagan

We consider various statistical problems associated with the use of complex deterministic computer models. In particular, we focus on exploring the uncertainty in the output of the model that is induced by uncertainty in some or all of the model input parameters. In addition, we consider the case when the computer model is computationally expensive, so that it is necessary to be able to describe the output uncertainty based on a small number of runs of the model itself.
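A minimal sketch of the few-runs idea: condition a cheap emulator on a small number of model runs, then use the emulator for uncertainty analysis. Here a numpy-only Gaussian process posterior mean stands in for a full emulator (the predictive variance is omitted for brevity), and the one-line `simulator`, the kernel hyperparameters, and the uniform input distribution are all assumptions for the demo, not anything from the paper.

```python
import numpy as np

def rbf_kernel(x1, x2, length=0.3, var=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def fit_emulator(x_train, y_train, noise=1e-6):
    """Condition a zero-mean GP on a handful of simulator runs;
    returns the posterior-mean predictor."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    def predict(x_new):
        return rbf_kernel(x_new, x_train) @ alpha
    return predict

# Toy stand-in for an expensive deterministic computer code.
simulator = lambda x: np.sin(2 * np.pi * x)

# Only 8 runs of the "expensive" code are needed to build the emulator.
x_train = np.linspace(0.0, 1.0, 8)
emulator = fit_emulator(x_train, simulator(x_train))

# Uncertainty analysis: push an uncertain input through the cheap emulator.
rng = np.random.default_rng(0)
x_uncertain = rng.uniform(0.0, 1.0, 10_000)
y_samples = emulator(x_uncertain)
print(y_samples.mean(), y_samples.std())
```

The Monte Carlo step costs only matrix-vector products against the 8 training runs, which is the economy the abstract describes for genuinely expensive codes.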


Reliability Engineering & System Safety | 2008

Comments on articles in RESS special issue

Anthony O’Hagan

The authors are to be congratulated on their attempt to extract results from the unique Delft database. Their conclusions are interesting and thought-provoking. However, I am concerned about the analysis of individual question effects. As in so many aspects of elicitation, I think it is helpful to disentangle the uncertainties due to lack of knowledge, i.e. epistemic uncertainty, from those due to intrinsic randomness, i.e. aleatory uncertainty.

Consider the questions in the Dike Ring study that are listed in Table 2. All of these questions either implicitly or explicitly ask for opinions about individual instances which are subject to aleatory uncertainty. For instance, question Hs asks about wave height for “a randomly chosen occurrence” (of some phenomenon). The randomness here and in question Ts is explicit, but it is implicit in the other questions. Of course, the expert will also have epistemic uncertainty concerning properties of the population of random instances, such as the mean.

In general, I believe it is good practice in elicitation to ask separately about sources of epistemic and aleatory uncertainty, and before continuing to discuss this paper I think it is worthwhile elaborating a little on my view, because it is not that of those whose elicitations are found in the Delft database. Theirs is the view that we should only ever ask experts about observable random variables. According to that view, the mean ratio of wave heights (in the population of all occurrences of the phenomenon under study) is not observable, and so experts’ opinions about it should not be elicited. In contrast, it is my experience that it is perfectly possible to conduct meaningful elicitation about such quantities, and that to do so has the important benefit of separately eliciting epistemic uncertainty. Amongst other advantages, this helps to counter over-confidence (which is a major concern of the present paper), and by modelling dependencies appropriately it avoids having to elicit them.

Returning to the Dike Ring study, the authors find better calibration on questions Ts and Hs than on the others, and I suggest that this may be at least in part due to the fact that the aleatory uncertainty is more explicit in the wording of these questions. Respondents to the other questions may have failed to acknowledge the randomness fully, concentrating instead on uncertainty about the mean. This in itself would explain the higher degree of over-confidence found for those questions.
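The separation advocated here, eliciting epistemic uncertainty about the population mean separately from the aleatory spread of individual occurrences, can be shown in a small simulation. All numbers are invented for the demo and are not drawn from the Dike Ring data; the point is only that the two layers combine by the law of total variance.

```python
import numpy as np

rng = np.random.default_rng(4)

# Epistemic layer: the expert's uncertainty about the population mean
# wave height, elicited as its own distribution (sd 0.5 m, hypothetical).
mu_draws = rng.normal(3.0, 0.5, 100_000)

# Aleatory layer: randomness of an individual occurrence given the mean
# (within-population sd 1.0 m, also hypothetical).
h_draws = rng.normal(mu_draws, 1.0)

# Law of total variance: total ≈ epistemic (0.5²) + aleatory (1.0²) = 1.25.
print(h_draws.var(), mu_draws.var())
```

Asking a single question about "a randomly chosen occurrence" elicits only `h_draws`-style totals; eliciting `mu_draws` separately makes the epistemic share explicit, which is what helps diagnose over-confidence.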


PharmacoEconomics | 2012

Granulocyte-Colony Stimulating Factor Use and Medical Costs after Initial Adjuvant Chemotherapy in Older Patients with Early-Stage Breast Cancer

Robert I. Griffiths; Richard Barron; Michelle Gleeson; Mark D. Danese; Anthony O’Hagan; Victoria M. Chia; Jason Legg; Gary H. Lyman

Abstract Background: Granulocyte-colony stimulating factor (G-CSF) reduces the risk of severe neutropenia associated with chemotherapy, but its cost implications following chemotherapy are unknown. Objective: Our objective was to examine associations between G-CSF use and medical costs after initial adjuvant chemotherapy in early-stage (stage I–III) breast cancer (ESBC). Methods: Women diagnosed with ESBC from 1999 to 2005, who had an initial course of chemotherapy beginning within 180 days of diagnosis and including ≥1 highly myelosuppressive agent, were identified from the Surveillance, Epidemiology, and End Results (SEER)-Medicare database. Medicare claims were used to describe the initial chemotherapy regimen according to the classes of agents used: anthracycline ([A]: doxorubicin or epirubicin); cyclophosphamide (C); taxane ([T]: paclitaxel or docetaxel); and fluorouracil (F). Patients were classified into four study groups according to their G-CSF use: (i) primary prophylaxis, if the first G-CSF claim was within 5 days of the start of the first chemotherapy cycle; (ii) secondary prophylaxis, if the first claim was within 5 days of the start of the second or subsequent cycles; (iii) G-CSF treatment, if the first claim occurred outside of prophylactic use; and (iv) no G-CSF. Patients were described by age, race, year of diagnosis, stage, grade, estrogen (ER) and progesterone (PR) receptor status, National Cancer Institute (NCI) Co-morbidity Index, chemotherapy regimen and G-CSF use. Total direct medical costs ($US, year 2009 values) to Medicare were estimated from 4 weeks after the last chemotherapy administration up to 48 months. Medical costs included those for ESBC treatment and all other medical services received after chemotherapy. Least squares regression, using inverse probability weighting (IPW) to account for censoring within the cohort, was used to evaluate adjusted associations between G-CSF use and costs. Results: A total of 7026 patients were identified, with an average age of 72 years, of which 63% had stage II disease, and 59% were ER and/or PR positive. Compared with no G-CSF, those receiving G-CSF primary prophylaxis were more likely to have stage III disease (30% vs 16%; p < 0.0001), to be diagnosed in 2003–5 (87% vs 26%; p < 0.0001), and to receive dose-dense AC-T (26% vs 1%; p < 0.0001), while they were less likely to receive an F-based regimen (12% vs 42%; p < 0.0001). Overall, the estimated average direct medical cost over 48 months after initial chemotherapy was $US42 628. In multivariate analysis, stage II or III diagnosis (compared with stage I), NCI Co-morbidity Index score 1 or ≥2 (compared with 0), or FAC or standard AC-T (each compared with AC) were associated with significantly higher IPW 48-month costs. Adjusting for patient demographic and clinical factors, costs in the G-CSF primary prophylaxis group were not significantly different from those not receiving primary prophylaxis (the other three study groups combined). In an analysis that included four separate study groups, G-CSF treatment was associated with significantly greater costs (incremental cost = $US2938; 95% CI 285, 5590) than no G-CSF. Conclusions: Direct medical costs after initial chemotherapy were not statistically different between those receiving G-CSF primary prophylaxis and those receiving no G-CSF, after adjusting for potential confounders.


Stochastic Environmental Research and Risk Assessment | 2013

Quantifying uncertainty in remotely sensed land cover maps

Edward Cripps; Anthony O’Hagan; Tristan Quaife

Remotely sensed land cover maps are increasingly used as inputs into environmental simulation models whose outputs inform decisions and policy-making. Risks associated with these decisions are dependent on model output uncertainty, which is in turn affected by the uncertainty of land cover inputs. This article presents a method of quantifying the uncertainty that results from potential mis-classification in remotely sensed land cover maps. In addition to quantifying uncertainty in the classification of individual pixels in the map, we also address the important case where land cover maps have been upscaled to a coarser grid to suit the users’ needs and are reported as proportions of land cover type. The approach is Bayesian and incorporates several layers of modelling but is straightforward to implement. First, we incorporate data in the confusion matrix derived from an independent field survey, and discuss the appropriate way to model such data. Second, we account for spatial correlation in the true land cover map, using the remotely sensed map as a prior. Third, spatial correlation in the mis-classification characteristics is induced by modelling their variance. The result is that we are able to simulate posterior means and variances for individual sites and the entire map using a simple Monte Carlo algorithm. The method is applied to the Land Cover Map 2000 for the region of England and Wales, a map used as an input into a current dynamic carbon flux model.


Psychometrika | 2015

Advances in Modeling Model Discrepancy: Comment on Wu and Browne (2015)

Robert C. MacCallum; Anthony O’Hagan


Volume 1: Plant Operations, Maintenance, Engineering, Modifications, Life Cycle and Balance of Plant; Nuclear Fuel and Materials; Plant Systems, Structures and Components; Codes, Standards, Licensing and Regulatory Issues | 2014

Testing of Statistical Procedures for Use in Optimization of Reactor Performance Under Aged Conditions

Dumitru Serghiuta; John Tholammakkil; Naj Hammouda; Anthony O’Hagan


Archive | 2003

A Bayesian Approach for the Estimation of the Covariance Structure of Separable Spatio-Temporal Stochastic Processes

Silvia Bozza; Anthony O’Hagan
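The first modelling layer of the land cover paper above, treating field-survey confusion-matrix counts in a Bayesian way, can be sketched with a Dirichlet posterior per true class and plain Monte Carlo. The counts, the uniform prior, and the two-class setup below are invented for the demo; the paper's full model additionally handles spatial correlation and upscaling, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical field-survey confusion matrix:
# rows = true class, columns = mapped class, entries = survey counts.
confusion = np.array([[45, 5],
                      [8, 42]])

# Dirichlet posterior on each row's classification probabilities
# (uniform Dirichlet(1,...,1) prior), sampled by Monte Carlo.
n_sim = 20_000
samples = np.stack(
    [rng.dirichlet(row + 1, n_sim) for row in confusion], axis=1
)
# samples[k, i, j] = P(mapped as j | true class i) in simulation k

# Posterior mean and sd of P(mapped correctly | true class 0):
# the mis-classification uncertainty that propagates into the map.
acc0 = samples[:, 0, 0]
print(acc0.mean(), acc0.std())
```

Feeding these sampled probabilities into a per-pixel classification model is what lets posterior means and variances be simulated for individual sites and for upscaled proportions.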

Collaboration


Dive into Anthony O’Hagan’s collaboration.

Top co-authors:

John Stevens, University of Sheffield
Edward Cripps, University of Western Australia