Publication


Featured research published by Karl Claxton.


Journal of Health Economics | 1999

The irrelevance of inference: a decision-making approach to the stochastic evaluation of health care technologies

Karl Claxton

The literature that considers the statistical properties of cost-effectiveness analysis has focused on estimating the sampling distribution of either an incremental cost-effectiveness ratio or incremental net benefit for classical inference. However, it is argued here that rules of inference are arbitrary and entirely irrelevant to the decisions which clinical and economic evaluations claim to inform. Decisions should be based only on mean net benefits, irrespective of whether differences are statistically significant or fall outside a Bayesian range of equivalence. Failure to make decisions in this way, by accepting the arbitrary rules of inference, will impose costs which can be measured in terms of resources or health benefits forgone. The distribution of net benefit is only relevant to deciding whether more information is required. A framework for decision making and for establishing the value of additional information is presented which is consistent with the decision rules of cost-effectiveness analysis. This framework can distinguish the simultaneous but conceptually separate steps of deciding which alternatives should be chosen, given existing information, from the question of whether more information should be acquired. It also ensures that the type of information acquired is driven by the objectives of the health care system, is consistent with the budget constraint on service provision, and that research is designed efficiently.
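
A minimal sketch of the decision rule and value-of-information idea summarised above, in Python with purely illustrative numbers (the costs, QALYs and threshold are assumptions, not taken from the paper): the preferred alternative is the one with the highest mean net benefit, and the distribution of net benefit is used only to compute the expected value of perfect information (EVPI).

import numpy as np

rng = np.random.default_rng(1)
threshold = 20_000          # willingness to pay per QALY (illustrative)
n_sim = 100_000             # Monte Carlo draws of the uncertain parameters

# Hypothetical simulated costs and QALYs for standard care and a new technology
cost = np.column_stack([rng.normal(1_000, 200, n_sim),
                        rng.normal(5_000, 800, n_sim)])
qaly = np.column_stack([rng.normal(1.00, 0.10, n_sim),
                        rng.normal(1.25, 0.15, n_sim)])

nb = threshold * qaly - cost        # net monetary benefit per simulation

# Decision rule: adopt the alternative with the highest *mean* net benefit,
# irrespective of statistical significance.
adopt = nb.mean(axis=0).argmax()

# EVPI: the expected gain from resolving all uncertainty before deciding,
# i.e. the most that further research could be worth per patient.
evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()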


PharmacoEconomics | 2008

The NICE cost-effectiveness threshold: what it is and what that means.

Christopher McCabe; Karl Claxton; Anthony J. Culyer

The National Institute for Health and Clinical Excellence (NICE) has been using a cost-effectiveness threshold range between £20 000 and £30 000 for over 7 years. What the cost-effectiveness threshold represents, what the appropriate level is for NICE to use, and what the other factors are that NICE should consider have all been the subject of much discussion. In this article, we briefly review these questions, provide a critical assessment of NICE's utilization of the incremental cost-effectiveness ratio (ICER) threshold to inform its guidance, and suggest ways in which NICE's utilization of the ICER threshold could be developed to promote the efficient use of health service resources. We conclude that it is feasible and probably desirable to operate an explicit single threshold rather than the current range; the threshold should be seen as a threshold at which 'other' criteria beyond the ICER itself are taken into account; interventions with a large budgetary impact may need to be subject to a lower threshold as they are likely to displace more than the marginal activities; reimbursement at the threshold transfers the full value of an innovation to the manufacturer. Positive decisions above the threshold on the grounds of innovation reduce population health; the value of the threshold should be reconsidered regularly to ensure that it captures the impact of changes in efficiency and budget over time; the use of equity weights to sustain a positive recommendation when the ICER is above the threshold requires knowledge of the equity characteristics of those patients who bear the opportunity cost. Given the barriers to obtaining this knowledge and knowledge about the characteristics of typical beneficiaries of UK NHS care, caution is warranted before accepting claims from special pleaders; uncertainty in the evidence base should not be used to justify a positive recommendation when the ICER is above the threshold. The development of a programme of disinvestment guidance would enable NICE and the NHS to be more confident that the net health benefit of the Technology Appraisal Programme is positive.
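
A short worked example of the threshold logic discussed above, in Python with illustrative numbers (only the £20 000 lower bound of the range comes from the article; the incremental cost and QALY figures are assumptions):

delta_cost = 12_000.0    # incremental cost of the new intervention (GBP)
delta_qaly = 0.5         # incremental QALYs gained
threshold = 20_000.0     # lower end of the NICE range (GBP per QALY)

icer = delta_cost / delta_qaly                      # 24,000 GBP per QALY
net_benefit = threshold * delta_qaly - delta_cost   # -2,000 GBP
# For a positive QALY gain the two tests are equivalent:
recommend = icer <= threshold                       # False here
recommend_nb = net_benefit >= 0                     # also False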


The Lancet | 2002

A rational framework for decision making by the National Institute for Clinical Excellence (NICE)

Karl Claxton; Mark Sculpher; Michael Drummond

Regulatory and reimbursement authorities face uncertain choices when considering the adoption of health-care technologies. In this Viewpoint, we present an analytic framework that separates the issue of whether a technology should be adopted on the basis of existing evidence from whether more research should be demanded to support future decisions. We show the application of this framework to the assessment of health-care technologies using a published analysis of a new drug treatment for Alzheimer's disease. The results of the analysis show that the amount and type of evidence required to support the adoption of a health technology will differ substantially between technologies with different characteristics. Additionally, the analysis can be used to aid the efficient design of research. We discuss the implications of adoption of this new framework for regulatory and reimbursement decisions.


Health Economics | 1996

An economic approach to clinical trial design and research priority-setting

Karl Claxton; John Posnett

Whilst significant advances have been made in persuading clinical researchers of the value of conducting economic evaluation alongside clinical trials, a number of problems remain. The most fundamental is the fact that economic principles are almost entirely ignored in the traditional approach to trial design. For example, in the selection of an optimal sample size no consideration is given to the marginal costs or benefits of sample information. In the traditional approach this can lead to either unbounded or arbitrary sample sizes. This paper presents a decision-analytic approach to trial design which takes explicit account of the costs of sampling, the benefits of sample information and the decision rules of cost-effectiveness analysis. It also provides a consistent framework for setting priorities in research funding and establishes a set of screens (or hurdles) to evaluate the potential cost-effectiveness of research proposals. The framework permits research priority setting based explicitly on the budget constraint faced by clinical practitioners and on the information available prior to prospective research. It demonstrates the link between the value of clinical research and the budgetary restrictions on service provision, and it provides practical tools to establish the optimal allocation of resources between areas of clinical research or between service provision and research.
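
A minimal sketch of the sample-size logic described above, in Python under a simple normal-normal model (the prior, variances, population and cost figures are all illustrative assumptions, not the paper's): the trial size is chosen to maximise the expected net benefit of sampling, i.e. the population value of sample information minus the cost of the trial, and the research is funded only if that maximum is positive.

import numpy as np
from scipy.stats import norm

# Illustrative one-parameter model: incremental net benefit of the new treatment
mu0, tau0 = 500.0, 1_000.0      # prior mean / sd of incremental net benefit (GBP)
sigma_e = 8_000.0               # sd of individual-level incremental net benefit
pop = 100_000                   # patients affected by the decision
fixed_cost, cost_per_patient = 250_000.0, 2_000.0

def enbs(n):
    """Expected net benefit of sampling for a two-arm trial with n patients per arm."""
    samp_var = 2 * sigma_e**2 / n                  # variance of the trial estimate
    v = tau0**4 / (tau0**2 + samp_var)             # preposterior variance of the posterior mean
    sd = np.sqrt(v)
    # E[max(posterior mean, 0)]: expected value of deciding after the trial
    value_with_info = mu0 * norm.cdf(mu0 / sd) + sd * norm.pdf(mu0 / sd)
    evsi_per_patient = value_with_info - max(mu0, 0.0)
    return pop * evsi_per_patient - (fixed_cost + cost_per_patient * 2 * n)

sizes = np.arange(10, 2001, 10)
values = [enbs(n) for n in sizes]
best_n = sizes[int(np.argmax(values))]
# Screen: fund the trial only if max(values) > 0; otherwise decide on current evidence.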


PharmacoEconomics | 2006

Good Practice Guidelines for Decision-Analytic Modelling in Health Technology Assessment: A Review and Consolidation of Quality Assessment

Zoë Philips; Laura Bojke; Mark Sculpher; Karl Claxton; Su Golder

The use of decision-analytic modelling for the purpose of health technology assessment (HTA) has increased dramatically in recent years. Several guidelines for best practice have emerged in the literature; however, there is no agreed standard for what constitutes a 'good model' or how models should be formally assessed. The objective of this paper is to identify, review and consolidate existing guidelines on the use of decision-analytic modelling for the purpose of HTA and to develop a consistent framework against which the quality of models may be assessed. The review and resultant framework are summarised under the three key themes of Structure, Data and Consistency. 'Structural' aspects relate to the scope and mathematical structure of the model, including the strategies under evaluation. Issues covered under the general heading of 'Data' include data identification methods and how uncertainty should be addressed. 'Consistency' relates to the overall quality of the model. The review of existing guidelines showed that although authors may provide a consistent message regarding some aspects of modelling, such as the need for transparency, they are contradictory in other areas. Particular areas of disagreement are how data should be incorporated into models and how uncertainty should be assessed. For the purpose of evaluation, the resultant framework is applied to a decision-analytic model developed as part of an appraisal for the National Institute for Health and Clinical Excellence (NICE) in the UK. As a further assessment, the review based on the framework is compared with an assessment provided by an independent, experienced modeller not using the framework. It is hoped that the framework developed here may form part of the appraisal process for assessment bodies such as NICE and for decision models submitted to peer-review journals. However, given the speed with which decision-modelling methodology advances, there is a need for its continual update.


Medical Decision Making | 2004

Expected Value of Sample Information Calculations in Medical Decision Modeling

A.E. Ades; G. Lu; Karl Claxton

There has been increasing interest in using expected value of information (EVI) theory in medical decision making, to identify the need for further research to reduce uncertainty in decisions and as a tool for sensitivity analysis. Expected value of sample information (EVSI) has been proposed for determination of optimal sample size and allocation rates in randomized clinical trials. This article derives simple Monte Carlo, or nested Monte Carlo, methods that extend the use of EVSI calculations to medical decision applications with multiple sources of uncertainty, with particular attention to the form in which epidemiological data and research findings are structured. In particular, information on key decision parameters such as treatment efficacy is invariably available on measures of relative efficacy such as risk differences or odds ratios, but not on model parameters themselves. In addition, estimates of model parameters and of relative effect measures in the literature may be heterogeneous, reflecting additional sources of variation besides statistical sampling error. The authors describe Monte Carlo procedures for calculating EVSI for probability, rate, or continuous-variable parameters in multiparameter decision models, and approximate methods for relative measures such as risk differences, odds ratios, risk ratios, and hazard ratios. Where prior evidence is based on a random effects meta-analysis, the authors describe different EVSI calculations, one relevant for decisions concerning a specific patient group and the other for decisions concerning the entire population of patient groups. They also consider EVSI methods for new studies intended to update information on both baseline treatment efficacy and the relative efficacy of 2 treatments. Although there are restrictions regarding models with prior correlation between parameters, these methods can be applied to the majority of probabilistic decision models. Illustrative worked examples of EVSI calculations are given in an appendix.
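
A minimal nested Monte Carlo sketch of an EVSI calculation of the kind described, in Python for a single Beta-distributed response probability (the model, prior and all numbers are illustrative assumptions, not those of the article): an outer loop simulates the result of the proposed study, and an inner set of draws re-evaluates expected net benefit under the updated distribution.

import numpy as np

rng = np.random.default_rng(7)
wtp = 20_000.0                   # willingness to pay per QALY (illustrative)
a, b = 12, 8                     # Beta prior for the new drug's response probability
p_std = 0.45                     # assumed known response rate of standard care
q_resp, q_no = 1.2, 0.8          # QALYs with / without response (illustrative)
cost_new, cost_std = 2_000.0, 1_000.0
n_study = 100                    # sample size of the proposed study
n_outer, n_inner = 2_000, 1_000

def net_benefit(p_new):
    """Net monetary benefit of (new drug, standard care) for a given response probability."""
    nb_new = wtp * (p_new * q_resp + (1 - p_new) * q_no) - cost_new
    nb_std = wtp * (p_std * q_resp + (1 - p_std) * q_no) - cost_std
    return np.column_stack([nb_new, np.full_like(nb_new, nb_std)])

# Value of deciding now, on current (prior) information
prior_draws = rng.beta(a, b, 200_000)
value_now = net_benefit(prior_draws).mean(axis=0).max()

# Nested Monte Carlo EVSI: the outer loop simulates the study result, the inner
# draws re-evaluate expected net benefit under the updated (posterior) distribution.
# With this conjugate, linear model the inner step could be done analytically;
# the nested form is shown because it generalises to nonlinear models.
value_with_sample = 0.0
for _ in range(n_outer):
    p_true = rng.beta(a, b)                         # draw a plausible true value
    x = rng.binomial(n_study, p_true)               # simulate the study outcome
    post_draws = rng.beta(a + x, b + n_study - x, n_inner)
    value_with_sample += net_benefit(post_draws).mean(axis=0).max()
value_with_sample /= n_outer

evsi = value_with_sample - value_now                # per patient, before scaling to the population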


PharmacoEconomics | 2000

Assessing Quality in Decision Analytic Cost-Effectiveness Models: A Suggested Framework and Example of Application

Mark Sculpher; Elisabeth Fenwick; Karl Claxton

Despite the growing use of decision analytic modelling in cost-effectiveness analysis, there is a relatively small literature on what constitutes good practice in decision analysis. The aim of this paper is to consider the concepts of 'validity' and 'quality' in this area of evaluation, and to suggest a framework by which quality can be demonstrated on the part of the analyst and assessed by the reviewer and user. The paper begins by considering the purpose of cost-effectiveness models and argues that their role is to identify optimum treatment decisions in the context of uncertainty about future states of the world. The issue of whether such models can be defined as 'scientific' is considered. The notion that decision analysis undertaken at time t can only be considered scientific if its outputs closely predict the results of a trial undertaken at time t+1 is rejected, as this ignores the need to make decisions on the basis of currently available evidence. Rather, the scientific characteristic of decision models is based on the fact that, in principle at least, such analyses can be falsified by comparison of two states of the world, one where resource allocation decisions are based on formal decision analysis and the other where such decisions are not. This section of the paper also rejects the idea of exact codification of scientific method in general, and of decision analysis in particular, as this risks rejecting potentially valuable models, may discourage the development of novel methods and can distort research priorities. However, the paper argues that it is both possible and necessary to develop a framework for assessing quality in decision models. Building on earlier work, various dimensions of quality in decision modelling are considered: model structure (disease states, options, time horizon and cycle length); data (identification, incorporation, handling uncertainty); and consistency (internal and external). Within this taxonomy a (non-exhaustive) list of questions about quality is suggested, which is illustrated by its application to a specific published model. The paper argues that such a framework can never be prescriptive about every aspect of decision modelling. Rather, it should encourage the analyst to provide an explicit and comprehensive justification of their methods, and allow the user of the model to make an informed judgment about the relevance, coherence and usefulness of the analysis.


PharmacoEconomics | 2006

Using Value of Information Analysis to Prioritise Health Research: Some Lessons from Recent UK Experience

Karl Claxton; Mark Sculpher

Decisions to adopt, reimburse or issue guidance on the use of health technologies are increasingly being informed by explicit cost-effectiveness analyses of the alternative interventions. Healthcare systems also invest heavily in research and development to support these decisions. However, the increasing transparency of adoption and reimbursement decisions, based on formal analysis, contrasts sharply with research prioritisation and commissioning. This is despite the fact that formal measures of the value of evidence generated by research are readily available. The results of two recent opportunities to apply value of information analysis to directly inform policy decisions about research priorities in the UK are presented. These include a pilot study for the UK National Co-ordinating Centre for Health Technology Assessment (NCCHTA) and a pilot study for the National Institute for Health and Clinical Excellence (NICE). We demonstrate how these results can be used to address a series of policy questions, including: is further research required to support the use of a technology and, if so, what type of research would be most valuable? We also show how the results can be used to address other questions, such as which patient subgroups should be included in subsequent research, which comparators and endpoints should be included, and what length of follow-up would be most valuable.


International Journal of Technology Assessment in Health Care | 2001

Bayesian value-of-information analysis: An application to a policy model of Alzheimer's disease

Karl Claxton; Peter J. Neumann; Sally S. Araki; Milton C. Weinstein

A framework is presented that distinguishes the conceptually separate decisions of which treatment strategy is optimal from the question of whether more information is required to inform this choice in the future. The authors argue that the choice of treatment strategy should be based on expected utility, and the only valid reason to characterize the uncertainty surrounding outcomes of interest is to establish the value of acquiring additional information. A Bayesian decision-theoretic approach is demonstrated through a probabilistic analysis of a published policy model of Alzheimer's disease. The expected value of perfect information is estimated for the decision to adopt a new pharmaceutical for the population of patients with Alzheimer's disease in the United States. This provides an upper bound on the value of additional research. The value of information is also estimated for each of the model inputs. This analysis can focus future research by identifying those parameters where more precise estimates would be most valuable and by indicating whether an experimental design would be required. We also discuss how this type of analysis can be used to design experimental research efficiently (identifying optimal sample size and optimal sample allocation) based on the marginal cost and marginal benefit of sample information. Value-of-information analysis can provide a measure of the expected payoff from proposed research, which can be used to set priorities in research and development. It can also inform an efficient regulatory framework for new healthcare technologies: an analysis of the value of information would define when a claim for a new technology should be deemed substantiated and when evidence should be considered competent and reliable when it is not cost-effective to gather any more information.
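
A minimal sketch of the overall and per-parameter value-of-information calculations described above, in Python for a toy two-parameter model (the parameters, distributions and threshold are illustrative assumptions and are not the published Alzheimer's disease model): overall EVPI is computed from the simulated net benefits, and partial EVPI for one input is computed with a two-level Monte Carlo loop.

import numpy as np

rng = np.random.default_rng(3)
wtp = 50_000.0                  # willingness to pay per QALY (illustrative)
n_outer, n_inner = 2_000, 5_000

# Two illustrative uncertain inputs (a toy model, not the published one):
# relative risk of progression on the new drug, and its additional cost per patient.
def sample_rr(n):   return rng.lognormal(mean=np.log(0.8), sigma=0.2, size=n)
def sample_cost(n): return rng.normal(2_000.0, 600.0, size=n)

def net_benefit(rr, cost):
    """Net monetary benefit of (new drug, usual care) per patient."""
    baseline_qaly = 1.5                       # QALYs under usual care
    p_progress, qaly_loss_avoided = 0.5, 0.4
    qaly_new = baseline_qaly + (1 - rr) * p_progress * qaly_loss_avoided
    nb_new = wtp * qaly_new - cost
    nb_std = wtp * baseline_qaly
    return np.column_stack([nb_new, np.broadcast_to(nb_std, np.shape(nb_new))])

# Overall EVPI for the adoption decision
rr, cost = sample_rr(200_000), sample_cost(200_000)
nb = net_benefit(rr, cost)
evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()

# Partial EVPI for the relative risk alone (two-level Monte Carlo):
# the outer loop fixes rr as if it were known, the inner loop averages over the rest.
inner_best = []
for r in sample_rr(n_outer):
    nb_r = net_benefit(np.full(n_inner, r), sample_cost(n_inner))
    inner_best.append(nb_r.mean(axis=0).max())
evppi_rr = np.mean(inner_best) - nb.mean(axis=0).max()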


Health Policy | 2009

Methods for assessing the cost-effectiveness of public health interventions: Key challenges and recommendations

Helen Weatherly; Michael Drummond; Karl Claxton; Richard Cookson; Brian Ferguson; Christine Godfrey; Nigel Rice; Mark Sculpher; Amanda Sowden

RATIONALE: Increasing attention is being given to the evaluation of public health interventions. Methods for the economic evaluation of clinical interventions are well established. In contrast, the economic evaluation of public health interventions raises additional methodological challenges. The paper identifies these challenges and provides suggestions for overcoming them.
METHODS: To identify the methodological challenges, five reviews that explored the economics of public health were consulted. From these, four main methodological challenges for the economic evaluation of public health interventions were identified. A review of empirical studies was conducted to explore how the methodological challenges had been approached in practice, and an expert workshop was convened to discuss how they could be tackled in the future.
RESULTS: The empirical review confirmed that the four methodological challenges were important. In all, 154 empirical studies were identified, covering areas as diverse as alcohol, drug use, obesity and physical activity, and smoking. However, the four methodological challenges were handled badly, or ignored, in most of the studies reviewed.
DISCUSSION: The empirical review offered few insights into ways of addressing the methodological challenges. The expert workshop suggested a number of ways forward for overcoming them.
CONCLUSION: Although the existing empirical literature offers few insights on how to respond to these challenges, expert opinion suggests a number of ways forward. Much of what is suggested here has not yet been applied in practice, and there is an urgent need both for pilot studies and for more methodological research.

Collaboration


Top co-authors of Karl Claxton include Jon Nicholl (University of Sheffield) and A. Rees (University of Sheffield).