Publication


Featured research published by Julian P. T. Higgins.


BMJ | 2003

Measuring inconsistency in meta-analyses

Julian P. T. Higgins; Simon G. Thompson; Jonathan J Deeks; Douglas G. Altman

Cochrane Reviews have recently started including the quantity I² to help readers assess the consistency of the results of studies in meta-analyses. What does this new quantity mean, and why is assessment of heterogeneity so important to clinical practice? Systematic reviews and meta-analyses can provide convincing and reliable evidence relevant to many aspects of medicine and health care.1 Their value is especially clear when the results of the studies they include show clinically important effects of similar magnitude. However, the conclusions are less clear when the included studies have differing results. In an attempt to establish whether studies are consistent, reports of meta-analyses commonly present a statistical test of heterogeneity. The test seeks to determine whether there are genuine differences underlying the results of the studies (heterogeneity), or whether the variation in findings is compatible with chance alone (homogeneity). However, the test is susceptible to the number of trials included in the meta-analysis. We have developed a new quantity, I², which we believe gives a better measure of the consistency between trials in a meta-analysis. Assessment of the consistency of effects across studies is an essential part of meta-analysis. Unless we know how consistent the results of studies are, we cannot determine the generalisability of the findings of the meta-analysis. Indeed, several hierarchical systems for grading evidence state that the results of studies must be consistent or homogeneous to obtain the highest grading.2–4 Tests for heterogeneity are commonly used to decide on methods for combining studies and for concluding consistency or inconsistency of findings.5,6 But what does the test achieve in practice, and how should the resulting P values be interpreted? A test for heterogeneity examines the null hypothesis that all studies are evaluating the same effect. The usual test statistic …
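The I² statistic described in this abstract can be computed directly from Cochran's Q and the number of studies. A minimal sketch of the arithmetic (illustrative input values, not data from the paper):

```python
def cochran_q(effects, variances):
    """Cochran's Q: weighted sum of squared deviations of study effects
    from the inverse-variance pooled estimate."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    return sum(wi * (y - pooled) ** 2 for wi, y in zip(w, effects))

def i_squared(q, k):
    """I² = max(0, (Q - df) / Q) * 100, with df = k - 1 for k studies.
    Negative values are truncated to zero."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - (k - 1)) / q) * 100.0

# Example: Q = 20 across 10 studies gives I² = (20 - 9) / 20 = 55%
print(round(i_squared(20.0, 10), 1))  # 55.0
```

Unlike the heterogeneity test itself, I² does not grow mechanically with the number of included trials, which is the property the authors emphasise.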


Archive | 2008

Cochrane handbook for systematic reviews of interventions

Julian P. T. Higgins; Sally Green

The Cochrane Handbook for Systematic Reviews of Interventions is the official document that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions.


BMJ | 2011

The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials

Julian P. T. Higgins; Douglas G. Altman; Peter C Gøtzsche; Peter Jüni; David Moher; Andrew D Oxman; Jelena Savovic; Kenneth F. Schulz; Laura Weeks; Jonathan A C Sterne

Flaws in the design, conduct, analysis, and reporting of randomised trials can cause the effect of an intervention to be underestimated or overestimated. The Cochrane Collaboration’s tool for assessing risk of bias aims to make the process clearer and more accurate.


BMJ | 2011

Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials

Jonathan A C Sterne; Alex J. Sutton; John P. A. Ioannidis; Norma Terrin; David R. Jones; Joseph Lau; James Carpenter; Gerta Rücker; Roger Harbord; Christopher H. Schmid; Jennifer Tetzlaff; Jonathan J Deeks; Jaime Peters; Petra Macaskill; Guido Schwarzer; Sue Duval; Douglas G. Altman; David Moher; Julian P. T. Higgins

Funnel plots, and tests for funnel plot asymmetry, have been widely used to examine bias in the results of meta-analyses. Funnel plot asymmetry should not be equated with publication bias, because it has a number of other possible causes. This article describes how to interpret funnel plot asymmetry, recommends appropriate tests, and explains the implications for choice of meta-analysis model.
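A widely used asymmetry test in this literature is Egger's regression test: regress each study's standardized effect (effect/SE) on its precision (1/SE); an intercept far from zero suggests small-study effects. A minimal sketch of the intercept calculation (illustrative data, not the authors' implementation):

```python
def egger_intercept(effects, std_errors):
    """Intercept of Egger's regression: OLS of (effect / SE) on (1 / SE).
    An intercept far from zero suggests funnel plot asymmetry."""
    z = [y / s for y, s in zip(effects, std_errors)]  # standardized effects
    p = [1.0 / s for s in std_errors]                 # precisions
    n = len(z)
    mp, mz = sum(p) / n, sum(z) / n
    slope = (sum((pi - mp) * (zi - mz) for pi, zi in zip(p, z))
             / sum((pi - mp) ** 2 for pi in p))
    return mz - slope * mp

# Perfectly symmetric data (same true effect, varying precision):
# the intercept is essentially zero.
print(egger_intercept([0.5, 0.5, 0.5, 0.5], [0.1, 0.2, 0.3, 0.4]))
```

A formal test would also need the intercept's standard error and a t test against zero; this sketch shows only the point estimate.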


The Lancet | 2009

Comparative efficacy and acceptability of 12 new-generation antidepressants: a multiple-treatments meta-analysis

Andrea Cipriani; Toshiaki A. Furukawa; Georgia Salanti; John Geddes; Julian P. T. Higgins; Rachel Churchill; Norio Watanabe; Atsuo Nakagawa; Ichiro M Omori; Hugh McGuire; Michele Tansella; Corrado Barbui

BACKGROUND: Conventional meta-analyses have shown inconsistent results for efficacy of second-generation antidepressants. We therefore did a multiple-treatments meta-analysis, which accounts for both direct and indirect comparisons, to assess the effects of 12 new-generation antidepressants on major depression.

METHODS: We systematically reviewed 117 randomised controlled trials (25 928 participants) from 1991 up to Nov 30, 2007, which compared any of the following antidepressants at therapeutic dose range for the acute treatment of unipolar major depression in adults: bupropion, citalopram, duloxetine, escitalopram, fluoxetine, fluvoxamine, milnacipran, mirtazapine, paroxetine, reboxetine, sertraline, and venlafaxine. The main outcomes were the proportion of patients who responded to or dropped out of the allocated treatment. Analysis was done on an intention-to-treat basis.

FINDINGS: Mirtazapine, escitalopram, venlafaxine, and sertraline were significantly more efficacious than duloxetine (odds ratios [OR] 1.39, 1.33, 1.30 and 1.27, respectively), fluoxetine (1.37, 1.32, 1.28, and 1.25, respectively), fluvoxamine (1.41, 1.35, 1.30, and 1.27, respectively), paroxetine (1.35, 1.30, 1.27, and 1.22, respectively), and reboxetine (2.03, 1.95, 1.89, and 1.85, respectively). Reboxetine was significantly less efficacious than all the other antidepressants tested. Escitalopram and sertraline showed the best profile of acceptability, leading to significantly fewer discontinuations than did duloxetine, fluvoxamine, paroxetine, reboxetine, and venlafaxine.

INTERPRETATION: Clinically important differences exist between commonly prescribed antidepressants for both efficacy and acceptability in favour of escitalopram and sertraline. Sertraline might be the best choice when starting treatment for moderate to severe major depression in adults because it has the most favourable balance between benefits, acceptability, and acquisition cost.


BMJ | 2005

Simultaneous comparison of multiple treatments: combining direct and indirect evidence.

Deborah M Caldwell; Ae Ades; Julian P. T. Higgins

How can policy makers decide which of five treatments is the best? Standard meta-analysis provides little help but evidence based decisions are possible Several possible treatments are often available to treat patients with the same condition. Decisions about optimal care, and the clinical practice guidelines that inform these decisions, rely on evidence based evaluation of the different treatment options.1 2 Systematic reviews and meta-analyses of randomised controlled trials are the main sources of evidence. However, most systematic reviews focus on pair-wise, direct comparisons of treatments (often with the comparator being a placebo or control group), which can make it difficult to determine the best treatment. In the absence of a collection of large, high quality, randomised trials comparing all eligible treatments (which is invariably the situation), we have to rely on indirect comparisons of multiple treatments. For example, an indirect estimate of the benefit of A over B can be obtained by comparing trials of A v C with trials of B v C,3–5 even though indirect comparisons produce relatively imprecise estimates.6 We describe comparisons of three or more treatments, based on pair-wise or multi-arm comparative studies, as a multiple treatment comparison evidence structure. 
[Figure: Angioplasty balloon device used to unblock and widen arteries]

Concerns have been expressed over the use of indirect comparisons of treatments.4 5 The Cochrane Collaboration’s guidance to authors states that indirect comparisons are not randomised, but are “observational studies across trials, and may suffer the biases of observational studies, for example confounding.”7 Some investigators believe that indirect comparisons may systematically overestimate the effects of treatments.3 When both indirect and direct comparisons are available, it has been recommended that the two approaches be considered separately and that direct comparisons should take precedence as a …
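The indirect estimate sketched in the abstract (A v B via trials of A v C and B v C) is usually computed with Bucher's adjusted indirect comparison: on the log odds ratio scale the point estimates subtract and the variances add, which is why indirect estimates are relatively imprecise. A sketch with made-up numbers:

```python
import math

def bucher_indirect(log_or_ac, se_ac, log_or_bc, se_bc):
    """Adjusted indirect comparison (Bucher): estimate A v B from
    A-v-C and B-v-C results on the log odds ratio scale.
    Point estimates subtract; variances add."""
    log_or_ab = log_or_ac - log_or_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)
    return log_or_ab, se_ab

# Hypothetical trial results: A v C gives OR 0.8, B v C gives OR 1.1
est, se = bucher_indirect(math.log(0.8), 0.15, math.log(1.1), 0.20)
print(math.exp(est))  # indirect OR of A v B = 0.8 / 1.1, about 0.727
print(se)             # 0.25, larger than either direct standard error
```

The growth of the standard error relative to either direct comparison is the imprecision the authors note for indirect evidence.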


Research Synthesis Methods | 2010

A basic introduction to fixed‐effect and random‐effects models for meta‐analysis

Michael Borenstein; Larry V. Hedges; Julian P. T. Higgins; Hannah R. Rothstein

There are two popular statistical models for meta-analysis, the fixed-effect model and the random-effects model. The fact that these two models employ similar sets of formulas to compute statistics, and sometimes yield similar estimates for the various parameters, may lead people to believe that the models are interchangeable. In fact, though, the models represent fundamentally different assumptions about the data. The selection of the appropriate model is important to ensure that the various statistics are estimated correctly. Additionally, and more fundamentally, the model serves to place the analysis in context. It provides a framework for the goals of the analysis as well as for the interpretation of the statistics. In this paper we explain the key assumptions of each model, and then outline the differences between the models. We conclude with a discussion of factors to consider when choosing between the two models.
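The computational difference between the two models is small but the assumptions differ: the fixed-effect model weights by inverse within-study variance only, while the random-effects model adds an estimate of the between-study variance τ² to every weight. A sketch using the DerSimonian-Laird moment estimator of τ² (one common choice, not the only one):

```python
import math

def pool(effects, variances, model="fixed"):
    """Inverse-variance pooled estimate and its standard error.
    model='random' adds the DerSimonian-Laird moment estimate of the
    between-study variance tau^2 to each study's variance."""
    w = [1.0 / v for v in variances]
    if model == "random":
        mean_fe = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
        q = sum(wi * (y - mean_fe) ** 2 for wi, y in zip(w, effects))
        df = len(effects) - 1
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)          # truncated at zero
        w = [1.0 / (v + tau2) for v in variances]
    mean = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return mean, se

# Homogeneous data: the two models agree (tau^2 is estimated as zero)
print(pool([0.1, 0.2, 0.3], [0.04, 0.04, 0.04], "fixed"))   # mean 0.2
print(pool([0.1, 0.2, 0.3], [0.04, 0.04, 0.04], "random"))  # mean 0.2
```

With heterogeneous data the random-effects standard error is wider than the fixed-effect one, which is one concrete sense in which the models are not interchangeable.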


Journal of The Royal Statistical Society Series A-statistics in Society | 2009

A re-evaluation of random-effects meta-analysis

Julian P. T. Higgins; Simon G. Thompson; David J. Spiegelhalter

Meta-analysis in the presence of unexplained heterogeneity is frequently undertaken by using a random-effects model, in which the effects underlying different studies are assumed to be drawn from a normal distribution. Here we discuss the justification and interpretation of such models, by addressing in turn the aims of estimation, prediction and hypothesis testing. A particular issue that we consider is the distinction between inference on the mean of the random-effects distribution and inference on the whole distribution. We suggest that random-effects meta-analyses as currently conducted often fail to provide the key results, and we investigate the extent to which distribution-free, classical and Bayesian approaches can provide satisfactory methods. We conclude that the Bayesian approach has the advantage of naturally allowing for full uncertainty, especially for prediction. However, it is not without problems, including computational intensity and sensitivity to a priori judgements. We propose a simple prediction interval for classical meta-analysis and offer extensions to standard practice of Bayesian meta-analysis, making use of an example of studies of ‘set shifting’ ability in people with eating disorders.
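The simple prediction interval this paper proposes for classical meta-analysis has the approximate form μ̂ ± t_{k−2} √(τ̂² + SE(μ̂)²), where k is the number of studies. A sketch of that formula (the t critical value is passed in, since the Python standard library has no t-distribution quantile function):

```python
import math

def prediction_interval(mu, se_mu, tau2, t_crit):
    """Approximate 95% prediction interval for the effect in a new study:
    mu +/- t_{k-2} * sqrt(tau^2 + SE(mu)^2).
    t_crit is the 97.5th percentile of the t distribution on k - 2 df."""
    half_width = t_crit * math.sqrt(tau2 + se_mu ** 2)
    return mu - half_width, mu + half_width

# Hypothetical random-effects result: mu = 0.5, SE = 0.1, tau^2 = 0.04,
# k = 10 studies, so t on 8 df at 97.5% is about 2.306
lo, hi = prediction_interval(0.5, 0.1, 0.04, 2.306)
print(lo, hi)
```

Because τ̂² enters the half-width directly, the prediction interval is wider than the confidence interval for the mean whenever there is heterogeneity, capturing where a new study's effect is likely to lie rather than where the average lies.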


Journal of Clinical Epidemiology | 2011

GRADE guidelines: 7. Rating the quality of evidence--inconsistency

Gordon H. Guyatt; Andrew D Oxman; Regina Kunz; James Woodcock; Jan Brozek; Mark Helfand; Pablo Alonso-Coello; Paul Glasziou; Roman Jaeschke; Elie A. Akl; Susan L. Norris; Gunn Elisabeth Vist; Philipp Dahm; Vijay K. Shukla; Julian P. T. Higgins; Yngve Falck-Ytter; Holger J. Schünemann

This article deals with inconsistency of relative (rather than absolute) treatment effects in binary/dichotomous outcomes. A body of evidence is not rated up in quality if studies yield consistent results, but may be rated down in quality if inconsistent. Criteria for evaluating consistency include similarity of point estimates, extent of overlap of confidence intervals, and statistical criteria including tests of heterogeneity and I². To explore heterogeneity, systematic review authors should generate and test a small number of a priori hypotheses related to patients, interventions, outcomes, and methodology. When inconsistency is large and unexplained, rating down quality for inconsistency is appropriate, particularly if some studies suggest substantial benefit, and others no effect or harm (rather than only large vs. small effects). Apparent subgroup effects may be spurious. Credibility is increased if subgroup effects are based on a small number of a priori hypotheses with a specified direction; subgroup comparisons come from within rather than between studies; tests of interaction generate low P values; and the subgroup effect has a plausible biological rationale.


BMJ | 2011

Interpretation of random effects meta-analyses

Richard D Riley; Julian P. T. Higgins; Jonathan J Deeks

Summary estimates of treatment effect from random effects meta-analysis give only the average effect across all studies. Inclusion of prediction intervals, which estimate the likely effect in an individual setting, could make it easier to apply the results to clinical practice.

Collaboration


Top Co-Authors

Michael Borenstein

Long Island Jewish Medical Center