Publications


Featured research published by Ae Ades.


BMJ | 2005

Simultaneous comparison of multiple treatments: combining direct and indirect evidence.

Deborah M Caldwell; Ae Ades; Julian P. T. Higgins

How can policy makers decide which of five treatments is the best? Standard meta-analysis provides little help, but evidence based decisions are possible.

Several possible treatments are often available to treat patients with the same condition. Decisions about optimal care, and the clinical practice guidelines that inform these decisions, rely on evidence based evaluation of the different treatment options.1 2 Systematic reviews and meta-analyses of randomised controlled trials are the main sources of evidence. However, most systematic reviews focus on pair-wise, direct comparisons of treatments (often with the comparator being a placebo or control group), which can make it difficult to determine the best treatment. In the absence of a collection of large, high quality, randomised trials comparing all eligible treatments (which is invariably the situation), we have to rely on indirect comparisons of multiple treatments. For example, an indirect estimate of the benefit of A over B can be obtained by comparing trials of A v C with trials of B v C,3–5 even though indirect comparisons produce relatively imprecise estimates.6 We describe comparisons of three or more treatments, based on pair-wise or multi-arm comparative studies, as a multiple treatment comparison evidence structure. Concerns have been expressed over the use of indirect comparisons of treatments.4 5 The Cochrane Collaboration's guidance to authors states that indirect comparisons are not randomised, but are “observational studies across trials, and may suffer the biases of observational studies, for example confounding.”7 Some investigators believe that indirect comparisons may systematically overestimate the effects of treatments.3 When both indirect and direct comparisons are available, it has been recommended that the two approaches be considered separately and that direct comparisons should take precedence as a …
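
To make the arithmetic concrete, here is a minimal sketch of the indirect comparison described above, on the log odds ratio scale: the A-versus-B effect is estimated as the difference of the A-versus-C and B-versus-C estimates, and the summed variances show why indirect estimates are relatively imprecise. The numbers are illustrative, not data from the paper.

```python
import math

# Bucher-style indirect comparison on the log odds ratio scale.
# Illustrative numbers, not data from the paper.
d_AC, se_AC = 0.50, 0.15   # pooled log OR, A vs C (direct evidence)
d_BC, se_BC = 0.20, 0.12   # pooled log OR, B vs C (direct evidence)

# Indirect estimate of A vs B: the common comparator C cancels out.
d_AB = d_AC - d_BC
se_AB = math.sqrt(se_AC**2 + se_BC**2)   # variances add, so precision is lower

lo, hi = d_AB - 1.96 * se_AB, d_AB + 1.96 * se_AB
print(f"indirect log OR (A vs B) = {d_AB:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```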


Journal of Clinical Epidemiology | 2011

Graphical methods and numerical summaries for presenting results from multiple-treatment meta-analysis: an overview and tutorial

Georgia Salanti; Ae Ades; John P. A. Ioannidis

OBJECTIVE: To present some simple graphical and quantitative ways to assist interpretation and improve presentation of results from multiple-treatment meta-analysis (MTM).

STUDY DESIGN AND SETTING: We reanalyze a published network of trials comparing various antiplatelet interventions regarding the incidence of serious vascular events, using Bayesian approaches for random effects MTM, and we explore the advantages and drawbacks of various traditional and new forms of quantitative displays and graphical presentations of results.

RESULTS: We present the results in various forms: conventionally, based on the mean of the distribution of the effect sizes; based on predictions; based on ranking probabilities; and finally, based on probabilities to be within an acceptable range from a reference. We show how to obtain and present results on ranking of all treatments and how to appraise the overall ranks.

CONCLUSIONS: Bayesian methodology offers a multitude of ways to present results from MTM models, as it enables natural and easy estimation of all measures based on probabilities, ranks, or predictions.
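
As an illustration of the ranking probabilities discussed in the abstract, the sketch below computes, from posterior draws, the probability that each treatment occupies each rank. The draws are simulated stand-ins for real MCMC output; the treatment names and effect values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior draws of each treatment's effect versus a common
# reference (higher = better); stand-ins for real MCMC output.
draws = {
    "A": rng.normal(0.30, 0.10, 10_000),
    "B": rng.normal(0.25, 0.05, 10_000),
    "C": rng.normal(0.10, 0.08, 10_000),
}
names = list(draws)
samples = np.column_stack([draws[t] for t in names])

# Rank treatments within each posterior draw (rank 1 = best):
# a double argsort turns descending effects into rank positions.
ranks = (-samples).argsort(axis=1).argsort(axis=1) + 1
for j, t in enumerate(names):
    probs = [(ranks[:, j] == r).mean() for r in range(1, len(names) + 1)]
    print(t, " ".join(f"P(rank {r})={p:.2f}" for r, p in enumerate(probs, 1)))
```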


Statistical Methods in Medical Research | 2008

Evaluation of networks of randomized trials

Georgia Salanti; Julian P. T. Higgins; Ae Ades; John P. A. Ioannidis

Randomized trials may be designed and interpreted as single experiments or they may be seen in the context of other similar or relevant evidence. The amount and complexity of available randomized evidence vary for different topics. Systematic reviews may be useful in identifying gaps in the existing randomized evidence, pointing to discrepancies between trials, and planning future trials. A new, promising, but also very much debated extension of systematic reviews, mixed treatment comparison (MTC) meta-analysis, has become increasingly popular recently. MTC meta-analysis may have value in interpreting the available randomized evidence from networks of trials and can rank many different treatments, going beyond simple pairwise comparisons. Nevertheless, the evaluation of networks also presents special challenges and caveats. In this article, we review the statistical methodology for MTC meta-analysis. We discuss the concept of inconsistency and methods that have been proposed to evaluate it, as well as the methodological gaps that remain. We introduce the concepts of network geometry and asymmetry, and propose metrics for the evaluation of the asymmetry. Finally, we discuss the implications of inconsistency, network geometry and asymmetry in informing the planning of future trials.
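
The notion of network geometry can be made concrete with a toy sketch: count how often each comparison and each treatment appears across trials, then summarize how evenly the evidence is spread. The evenness measure below is a normalized Shannon diversity used only as a stand-in for the formal metrics the paper proposes; the network itself is hypothetical.

```python
import math
from collections import Counter
from itertools import combinations

# Hypothetical evidence network: each tuple is one trial and the set of
# treatments it compares (the last entry is a three-arm trial).
trials = [("A", "B"), ("A", "C"), ("A", "C"), ("A", "D"), ("A", "B", "C")]

edges, nodes = Counter(), Counter()
for arms in trials:
    nodes.update(arms)                            # arms per treatment
    edges.update(combinations(sorted(arms), 2))   # trials per comparison

print("comparisons and trial counts:", dict(edges))

# A simple geometry summary: how evenly arms are spread across treatments,
# via normalized Shannon diversity (1.0 = perfectly even).
total = sum(nodes.values())
shares = [c / total for c in nodes.values()]
evenness = -sum(p * math.log(p) for p in shares) / math.log(len(nodes))
print(f"arms per treatment: {dict(nodes)}, evenness = {evenness:.2f}")
```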


Journal of the American Statistical Association | 2006

Assessing Evidence Inconsistency in Mixed Treatment Comparisons

Guobing Lu; Ae Ades

Randomized comparisons among several treatments give rise to an incomplete-blocks structure known as mixed treatment comparisons (MTCs). To analyze such data structures, it is crucial to assess whether the disparate evidence sources provide consistent information about the treatment contrasts. In this article we propose a general method for assessing evidence inconsistency in the framework of Bayesian hierarchical models. We begin with the distinction between basic parameters, which have prior distributions, and functional parameters, which are defined in terms of basic parameters. Based on a graphical analysis of MTC structures, evidence inconsistency is defined as a relation between a functional parameter and at least two basic parameters, supported by at least three evidence sources. The inconsistency degrees of freedom (ICDF) is the number of such inconsistencies. We represent evidence consistency as a set of linear relations between effect parameters on the log odds ratio scale, then relax these relations to allow for inconsistency by adding to the model random inconsistency factors (ICFs). The number of ICFs is determined by the ICDF. The overall consistency between evidence sources can be assessed by comparing models with and without ICFs, whereas their posterior distribution reflects the extent of inconsistency in particular evidence cycles. The methods are elucidated using two published datasets, implemented with standard Markov chain Monte Carlo software.
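
For a connected network made up of two-arm trials, the ICDF described above reduces to a simple count: the number of pairwise comparisons on which evidence exists, minus the number of basic parameters (treatments minus one). A minimal sketch with a hypothetical evidence structure follows; multi-arm trials would complicate the count and are not handled here.

```python
# Hypothetical two-arm evidence structure: the distinct pairwise
# comparisons on which at least one trial exists.
comparisons = {("A", "B"), ("A", "C"), ("B", "C"), ("A", "D"), ("C", "D")}
treatments = {t for pair in comparisons for t in pair}

# For a connected network of two-arm trials, the inconsistency degrees of
# freedom equals edges minus the (treatments - 1) basic parameters,
# i.e. the number of independent evidence loops (here: ABC and ACD).
icdf = len(comparisons) - (len(treatments) - 1)
print(f"{len(treatments)} treatments, {len(comparisons)} comparisons, ICDF = {icdf}")
```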


Research Synthesis Methods | 2012

Consistency and inconsistency in network meta-analysis: concepts and models for multi-arm studies

Julian P. T. Higgins; Dan Jackson; Jessica Kate Barrett; Guobing Lu; Ae Ades; Ian R. White

Meta-analyses that simultaneously compare multiple treatments (usually referred to as network meta-analyses or mixed treatment comparisons) are becoming increasingly common. An important component of a network meta-analysis is an assessment of the extent to which different sources of evidence are compatible, both substantively and statistically. A simple indirect comparison may be confounded if the studies involving one of the treatments of interest are fundamentally different from the studies involving the other treatment of interest. Here, we discuss methods for addressing inconsistency of evidence from comparative studies of different treatments. We define and review basic concepts of heterogeneity and inconsistency, and attempt to introduce a distinction between ‘loop inconsistency’ and ‘design inconsistency’. We then propose that the notion of design-by-treatment interaction provides a useful general framework for investigating inconsistency. In particular, using design-by-treatment interactions successfully addresses complications that arise from the presence of multi-arm trials in an evidence network. We show how the inconsistency model proposed by Lu and Ades is a restricted version of our full design-by-treatment interaction model and that there may be several distinct Lu–Ades models for any particular data set. We introduce novel graphical methods for depicting networks of evidence, clearly depicting multi-arm trials and illustrating where there is potential for inconsistency to arise. We apply various inconsistency models to data from trials of different comparisons among four smoking cessation interventions and show that models seeking to address loop inconsistency alone can run into problems.
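
The simplest manifestation of loop inconsistency is a conflict between the direct and indirect estimates within a single A-B-C loop. The sketch below implements that basic check, which the design-by-treatment interaction framework generalizes; it is not the full model from the paper, and the numbers are illustrative.

```python
import math

# Check one A-B-C evidence loop: compare the direct A-vs-B estimate with
# the indirect one formed via C. Illustrative log odds ratios, not data
# from the paper.
d_AB_dir, se_dir = 0.40, 0.10
d_AC, se_AC = 0.55, 0.12
d_BC, se_BC = 0.05, 0.11

d_AB_ind = d_AC - d_BC
se_ind = math.sqrt(se_AC**2 + se_BC**2)

# Inconsistency factor: direct minus indirect, with a z-statistic.
w = d_AB_dir - d_AB_ind
se_w = math.sqrt(se_dir**2 + se_ind**2)
print(f"inconsistency factor = {w:.2f} (z = {w / se_w:.2f})")
```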


Medical Decision Making | 2013

Evidence Synthesis for Decision Making 2: A Generalized Linear Modeling Framework for Pairwise and Network Meta-analysis of Randomized Controlled Trials

Sofia Dias; Alex J. Sutton; Ae Ades; Nicky J Welton

We set out a generalized linear model framework for the synthesis of data from randomized controlled trials. A common model is described, taking the form of a linear regression for both fixed and random effects synthesis, which can be implemented with normal, binomial, Poisson, and multinomial data. The familiar logistic model for meta-analysis with binomial data is a generalized linear model with a logit link function, which is appropriate for probability outcomes. The same linear regression framework can be applied to continuous outcomes, rate models, competing risks, or ordered category outcomes by using other link functions, such as identity, log, complementary log-log, and probit link functions. The common core model for the linear predictor can be applied to pairwise meta-analysis, indirect comparisons, synthesis of multiarm trials, and mixed treatment comparisons, also known as network meta-analysis, without distinction. We take a Bayesian approach to estimation and provide WinBUGS program code for a Bayesian analysis using Markov chain Monte Carlo simulation. An advantage of this approach is that it is straightforward to extend to shared parameter models where different randomized controlled trials report outcomes in different formats but from a common underlying model. Use of the generalized linear model framework allows us to present a unified account of how models can be compared using the deviance information criterion and how goodness of fit can be assessed using the residual deviance. The approach is illustrated through a range of worked examples for commonly encountered evidence formats.
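
The paper estimates these models by MCMC in WinBUGS; as a minimal non-Bayesian stand-in for the binomial-logit case, the sketch below pools trial log odds ratios by inverse-variance weighting, the fixed-effect analogue of the logit-link model. The 2x2 counts are hypothetical.

```python
import math

# Hypothetical 2x2 trial data:
# (events_treatment, n_treatment, events_control, n_control)
trials = [(12, 100, 20, 100), (8, 80, 15, 80), (30, 200, 45, 200)]

weights, effects = [], []
for a, n1, c, n0 in trials:
    b, d = n1 - a, n0 - c
    log_or = math.log((a * d) / (b * c))   # logit-link effect measure
    var = 1 / a + 1 / b + 1 / c + 1 / d    # approximate variance of the log OR
    effects.append(log_or)
    weights.append(1 / var)

# Inverse-variance fixed-effect pooling.
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"fixed-effect pooled log OR = {pooled:.3f} (SE {se:.3f})")
```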


Medical Decision Making | 2005

The Interpretation of Random-Effects Meta-Analysis in Decision Models

Ae Ades; Guobing Lu; Julian P. T. Higgins

This article shows that the interpretation of the random-effects models used in meta-analysis to summarize heterogeneous treatment effects can have a marked effect on the results from decision models. Sources of variation in meta-analysis include the following: random variation in outcome definition (amounting to a form of measurement error), variation between the patient groups in different trials, variation between protocols, and variation in the way a given protocol is implemented. Each of these alternatives leads to a different model for how the heterogeneity in the effect sizes previously observed might relate to the effect size(s) in a future implementation. Furthermore, these alternative models require different computations and, when the net benefits are nonlinear in the efficacy parameters, result in different expected net benefits. The authors’ analysis suggests that the mean treatment effect from a random-effects meta-analysis will only seldom be an appropriate representation of the efficacy expected in a future implementation. Instead, modelers should consider either the predictive distribution of a future treatment effect or assume that the future implementation will result in a distribution of treatment effects. A worked example, in a probabilistic, Bayesian posterior framework, is used to illustrate the alternative computations and to show how parameter uncertainty can be combined with variation between individuals and heterogeneity in meta-analysis.
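
Under a normal random-effects model, the predictive distribution of a future treatment effect combines parameter uncertainty with between-study heterogeneity. The sketch below contrasts plugging the mean effect into a nonlinear net-benefit function with averaging that function over the predictive distribution; the meta-analysis summaries and the net-benefit function are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random-effects meta-analysis summaries (illustrative, not from the paper):
mu, se_mu = 0.30, 0.08   # mean treatment effect and its standard error
tau = 0.20               # between-study SD (heterogeneity)

# Predictive distribution of the effect in a *future* implementation:
# parameter uncertainty and heterogeneity both contribute to its variance.
pred = rng.normal(mu, np.sqrt(se_mu**2 + tau**2), 100_000)

def g(d):
    # Hypothetical nonlinear net-benefit function of the treatment effect.
    return 1 - np.exp(-2 * d)

# With a nonlinear g, E[g(effect)] differs from g(E[effect]) (Jensen's
# inequality), which is why the choice of distribution matters.
print(f"g(mean effect)         = {g(mu):.3f}")
print(f"mean g over predictive = {g(pred).mean():.3f}")
```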


Medical Decision Making | 2004

Expected Value of Sample Information Calculations in Medical Decision Modeling

Ae Ades; G. Lu; Karl Claxton

There has been increasing interest in using expected value of information (EVI) theory in medical decision making, to identify the need for further research to reduce decision uncertainty and as a tool for sensitivity analysis. Expected value of sample information (EVSI) has been proposed for determining optimum sample size and allocation rates in randomized clinical trials. This article derives simple Monte Carlo, or nested Monte Carlo, methods that extend the use of EVSI calculations to medical decision applications with multiple sources of uncertainty, with particular attention to the form in which epidemiological data and research findings are structured. In particular, information on key decision parameters such as treatment efficacy is invariably available on measures of relative efficacy such as risk differences or odds ratios, but not on model parameters themselves. In addition, estimates of model parameters and of relative effect measures in the literature may be heterogeneous, reflecting additional sources of variation besides statistical sampling error. The authors describe Monte Carlo procedures for calculating EVSI for probability, rate, or continuous variable parameters in multiparameter decision models, and approximate methods for relative measures such as risk differences, odds ratios, risk ratios, and hazard ratios. Where prior evidence is based on a random effects meta-analysis, the authors describe different EVSI calculations, one relevant for decisions concerning a specific patient group and the other for decisions concerning the entire population of patient groups. They also consider EVSI methods for new studies intended to update information on both baseline treatment efficacy and the relative efficacy of 2 treatments. Although there are restrictions regarding models with prior correlation between parameters, these methods can be applied to the majority of probabilistic decision models. Illustrative worked examples of EVSI calculations are given in an appendix.
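
A minimal two-level Monte Carlo EVSI sketch for a toy adopt-or-not decision, assuming a conjugate normal model so the inner (posterior) step is analytic; all parameter values are hypothetical and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy decision: adopt a new treatment (net benefit proportional to its
# incremental effect d) or keep the standard (net benefit 0).
mu0, sd0 = 0.1, 0.2    # prior on the incremental effect d
sigma, n = 1.0, 50     # SD of a patient-level outcome; proposed trial size
k = 1000.0             # hypothetical monetary value per unit of effect

def enb(mean_d):
    # Expected net benefit of the better of the two decisions.
    return max(k * mean_d, 0.0)

current = enb(mu0)     # value of deciding now, on the prior mean

# Outer loop: simulate possible trial results. The inner (posterior) step
# is analytic because the normal-normal model is conjugate.
post_enbs = []
for _ in range(20_000):
    d = rng.normal(mu0, sd0)                     # draw a "true" effect
    dbar = rng.normal(d, sigma / np.sqrt(n))     # simulated trial estimate
    post_var = 1 / (1 / sd0**2 + n / sigma**2)   # conjugate posterior update
    post_mean = post_var * (mu0 / sd0**2 + dbar * n / sigma**2)
    post_enbs.append(enb(post_mean))

evsi = np.mean(post_enbs) - current
print(f"EVSI for a trial of n={n}: {evsi:.1f} (monetary units)")
```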


PharmacoEconomics | 2006

Bayesian methods for evidence synthesis in cost-effectiveness analysis

Ae Ades; Mark Sculpher; Alex J. Sutton; Keith R. Abrams; Nicola J. Cooper; Nicky J Welton; Guobing Lu

Recently, health systems internationally have begun to use cost-effectiveness research as formal inputs into decisions about which interventions and programmes should be funded from collective resources. This process has raised some important methodological questions for this area of research. This paper considers one set of issues related to the synthesis of effectiveness evidence for use in decision-analytic cost-effectiveness (CE) models, namely the need for the synthesis of all sources of available evidence, although these may not ‘fit neatly’ into a CE model. Commonly encountered problems include the absence of head-to-head trial evidence comparing all options under comparison, the presence of multiple endpoints from trials and different follow-up periods. Full evidence synthesis for CE analysis also needs to consider treatment effects between patient subpopulations and the use of nonrandomised evidence. Bayesian statistical methods represent a valuable set of analytical tools to utilise indirect evidence and can make a powerful contribution to the decision-analytic approach to CE analysis. This paper provides a worked example and a general overview of these methods, with particular emphasis on their use in economic evaluation.


PharmacoEconomics | 2008

Use of Indirect and Mixed Treatment Comparisons for Technology Assessment

Alex J. Sutton; Ae Ades; Nicola J. Cooper; Keith R. Abrams

Indirect and mixed treatment comparison (MTC) approaches to synthesis are logical extensions of more established meta-analysis methods. They have great potential for estimating the comparative effectiveness of multiple treatments using an evidence base of trials that individually do not compare all treatment options. Connected networks of evidence can be synthesized simultaneously to provide estimates of the comparative effectiveness of all included treatments and a ranking of their effectiveness with associated probability statements. The potential of the use of indirect and MTC methods in technology assessment is considerable, and would allow for a more consistent assessment than simpler alternative approaches. Although such models can be viewed as a logical and coherent extension of standard pair-wise meta-analysis, their increased complexity raises some unique issues with far-reaching implications concerning how we use data in technology assessment, while simultaneously raising searching questions about standard pair-wise meta-analysis. This article reviews pair-wise meta-analysis and indirect and MTC approaches to synthesis, clearly outlining the assumptions involved in each approach. It also raises the issues that the National Institute for Health and Clinical Excellence (NICE) needed to consider in updating their 2004 Guide to the Methods of Technology Appraisal, if such methods are to be used in their technology appraisals.

Collaboration


Dive into Ae Ades's collaborations.

Top Co-Authors

Ian Simms

Public Health England
