Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Dean Langan is active.

Publication


Featured research published by Dean Langan.


Research Synthesis Methods | 2016

Methods to estimate the between-study variance and its uncertainty in meta-analysis

Areti Angeliki Veroniki; Dan Jackson; Wolfgang Viechtbauer; Ralf Bender; Jack Bowden; Guido Knapp; Oliver Kuss; Julian P. T. Higgins; Dean Langan; Georgia Salanti

Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has long been challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and, for continuous data, the restricted maximum likelihood estimator are better alternatives to estimate the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a ‘generalised Cochran between-study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios.
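For readers who want a concrete sense of the estimators discussed above, here is a minimal Python sketch (not code from the paper) of the DerSimonian and Laird and Paule-Mandel estimators of the between-study variance, together with a Q-profile confidence interval, implemented from their standard published formulas. The function names and example data are illustrative assumptions only.

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

def dersimonian_laird(y, v):
    """Moment-based DerSimonian-Laird estimate of the between-study variance tau^2."""
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / c)

def generalised_q(tau2, y, v):
    """Generalised Cochran Q statistic evaluated at a candidate tau^2."""
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)
    return np.sum(w * (y - mu) ** 2)

def paule_mandel(y, v, upper=100.0):
    """Paule-Mandel estimate: the tau^2 at which the generalised Q equals k - 1."""
    k = len(y)
    if generalised_q(0.0, y, v) <= k - 1:   # Q already at or below its expectation
        return 0.0
    return brentq(lambda t2: generalised_q(t2, y, v) - (k - 1), 0.0, upper)

def q_profile_ci(y, v, level=0.95, upper=100.0):
    """Q-profile confidence interval for tau^2, inverting the chi-square distribution of Q."""
    k = len(y)
    hi_target = stats.chi2.ppf(1 - (1 - level) / 2, df=k - 1)  # gives the lower tau^2 bound
    lo_target = stats.chi2.ppf((1 - level) / 2, df=k - 1)      # gives the upper tau^2 bound
    def bound(target):
        if generalised_q(0.0, y, v) <= target:
            return 0.0
        return brentq(lambda t2: generalised_q(t2, y, v) - target, 0.0, upper)
    return bound(hi_target), bound(lo_target)

# Illustrative log odds ratios and within-study variances (made-up numbers).
y = np.array([0.10, 0.55, -0.25, 0.60, 0.02])
v = np.array([0.02, 0.04, 0.03, 0.06, 0.02])
print("DL:", dersimonian_laird(y, v))
print("PM:", paule_mandel(y, v))
print("Q-profile 95% CI:", q_profile_ci(y, v))
```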


Journal of Clinical Epidemiology | 2012

Graphical augmentations to the funnel plot assess the impact of additional evidence on a meta-analysis

Dean Langan; Julian P. T. Higgins; Walter Gregory; Alex J. Sutton

OBJECTIVE We aim to illustrate the potential impact of a new study on a meta-analysis, which gives an indication of the robustness of the meta-analysis. STUDY DESIGN AND SETTING A number of augmentations are proposed to one of the most widely used graphical displays, the funnel plot. Namely, 1) statistical significance contours, which define regions of the funnel plot in which a new study would have to be located to change the statistical significance of the meta-analysis; and 2) heterogeneity contours, which show how a new study would affect the extent of heterogeneity in a given meta-analysis. Several other features are also described, and the use of multiple features simultaneously is considered. RESULTS The statistical significance contours suggest that one additional study, no matter how large, may have a very limited impact on the statistical significance of a meta-analysis. The heterogeneity contours illustrate that one outlying study can increase the level of heterogeneity dramatically. CONCLUSION The additional features of the funnel plot have applications including 1) informing sample size calculations for the design of future studies eligible for inclusion in the meta-analysis; and 2) informing the prioritization of updates to a portfolio of meta-analyses such as those prepared by the Cochrane Collaboration.
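As a rough illustration of the significance-contour idea, the Python sketch below computes, for a grid of standard errors of a hypothetical new study, the effect that study would need for the updated pooled estimate to sit exactly at z = ±1.96. It uses a simple fixed-effect update and is an assumption-laden simplification, not the authors' implementation.

```python
import numpy as np

def significance_contours(y, se, z_crit=1.96, se_grid=None):
    """Effects a hypothetical new study would need, at each standard error, for the
    updated fixed-effect pooled estimate to sit exactly at z = +/- z_crit."""
    w = 1.0 / se ** 2
    S, W = np.sum(w * y), np.sum(w)          # weighted sum of effects and total weight
    if se_grid is None:
        se_grid = np.linspace(0.01, 2 * se.max(), 200)
    w_new = 1.0 / se_grid ** 2
    # Updated pooled estimate: (S + w_new * y_new) / (W + w_new);
    # its z statistic is (S + w_new * y_new) / sqrt(W + w_new).
    upper = ( z_crit * np.sqrt(W + w_new) - S) / w_new
    lower = (-z_crit * np.sqrt(W + w_new) - S) / w_new
    return se_grid, lower, upper

# Made-up log odds ratios and standard errors for five existing studies.
y = np.array([0.10, 0.55, -0.25, 0.60, 0.02])
se = np.array([0.14, 0.20, 0.17, 0.24, 0.14])
grid, lo, hi = significance_contours(y, se)
# A new study falling between `lo` and `hi` at its standard error would leave the
# updated fixed-effect pooled estimate non-significant at the 5% level.
```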


Research Synthesis Methods | 2015

An empirical comparison of heterogeneity variance estimators in 12 894 meta-analyses.

Dean Langan; Julian P. T. Higgins; Mark Simmonds

Heterogeneity in meta-analysis is most commonly estimated using a moment-based approach described by DerSimonian and Laird. However, this method has been shown to produce biased estimates. Alternative methods to estimate heterogeneity include the restricted maximum likelihood approach and those proposed by Paule and Mandel, Sidik and Jonkman, and Hartung and Makambi. We compared the impact of these five methods on the results of 12,894 meta-analyses extracted from the Cochrane Database of Systematic Reviews. We compared the methods in terms of the following: (1) the extent of heterogeneity, expressed as an I² statistic; (2) the overall effect estimate; (3) the precision of the overall effect estimate; and (4) p-values testing the null hypothesis of no effect. Results suggest that, in some meta-analyses, I² estimates differ by more than 50% when different heterogeneity estimators are used. Conclusions naively based on statistical significance (at a 5% level) were discordant for at least one pair of estimators in 7.5% of meta-analyses, indicating that the choice of heterogeneity estimator could affect the conclusions of a meta-analysis. These findings imply that using a single estimate of heterogeneity may lead to non-robust results in some meta-analyses, and researchers should consider using alternatives to the DerSimonian and Laird method.
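To make the comparison concrete, here is a small self-contained Python sketch of the tau²-based I² statistic: plugging different heterogeneity variance estimates into the same meta-analysis can give markedly different I² values, which is the kind of discordance the paper quantifies. The within-study variances and the two tau² values below are made up for illustration, not drawn from the 12,894 meta-analyses.

```python
import numpy as np

def i_squared(tau2, v):
    """Tau^2-based I^2: tau^2 / (tau^2 + s^2), where s^2 is a 'typical'
    within-study variance (Higgins-Thompson formula)."""
    w = 1.0 / v
    k = len(v)
    s2 = (k - 1) * np.sum(w) / (np.sum(w) ** 2 - np.sum(w ** 2))
    return 100.0 * tau2 / (tau2 + s2)

# Within-study variances of a hypothetical meta-analysis, plus two purely
# illustrative tau^2 estimates attributed to different estimators.
v = np.array([0.02, 0.04, 0.03, 0.06, 0.02])
for name, tau2 in {"estimator A (e.g. DerSimonian-Laird)": 0.03,
                   "estimator B (e.g. Paule-Mandel)": 0.10}.items():
    print(f"{name}: I^2 = {i_squared(tau2, v):.0f}%")
```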


Research Synthesis Methods | 2017

Comparative performance of heterogeneity variance estimators in meta-analysis: A review of simulation studies

Dean Langan; Julian P. T. Higgins; Mark Simmonds

Random-effects meta-analysis methods include an estimate of between-study heterogeneity variance. We present a systematic review of simulation studies comparing the performance of different estimation methods for this parameter. We summarise the performance of methods in relation to estimation of heterogeneity and of the overall effect estimate, and of confidence intervals for the latter. Among the twelve included simulation studies, the DerSimonian and Laird method was most commonly evaluated. This estimator is negatively biased when heterogeneity is moderate to high, and therefore most studies recommended alternatives. The Paule-Mandel method was recommended by three studies: it is simple to implement, is less biased than DerSimonian and Laird, and performs well in meta-analyses with dichotomous and continuous outcomes. In many of the included simulation studies, results were based on data that do not represent meta-analyses observed in practice, and only small selections of methods were compared. Furthermore, potential conflicts of interest were present when authors of novel methods interpreted their results. On the basis of current evidence, we provisionally recommend the Paule-Mandel method for estimating the heterogeneity variance, and using this estimate to calculate the mean effect and its 95% confidence interval. However, further simulation studies are required to draw firm conclusions.
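The following toy Python sketch illustrates the general shape of the simulation studies reviewed here: generate meta-analyses with a known heterogeneity variance, apply an estimator, and measure its bias. Only the DerSimonian-Laird estimator is shown, and the data-generating settings are arbitrary assumptions rather than scenarios from any of the included studies.

```python
import numpy as np

rng = np.random.default_rng(1)

def dersimonian_laird(y, v):
    """Moment-based DerSimonian-Laird estimate of the heterogeneity variance."""
    w = 1.0 / v
    mu = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / c)

def simulated_bias(true_tau2=0.05, k=8, n_reps=5000):
    """Average estimation error over many simulated random-effects meta-analyses."""
    estimates = np.empty(n_reps)
    for r in range(n_reps):
        v = rng.uniform(0.01, 0.1, size=k)                   # within-study variances
        theta = rng.normal(0.2, np.sqrt(true_tau2), size=k)  # true study effects
        y = rng.normal(theta, np.sqrt(v))                    # observed study effects
        estimates[r] = dersimonian_laird(y, v)
    return estimates.mean() - true_tau2

print("DerSimonian-Laird bias at tau^2 = 0.05:", simulated_bias())
```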


Journal of Clinical Epidemiology | 2016

Missing data in randomized controlled trials testing palliative interventions pose a significant risk of bias and loss of power: a systematic review and meta-analyses.

Jamilla Hussain; Ian R. White; Dean Langan; Miriam Johnson; David Torgerson; Martin Bland

Objectives To assess the risk posed by missing data (MD) to the power and validity of trials evaluating palliative interventions. Study Design and Setting A systematic review of MD in published randomized controlled trials (RCTs) of palliative interventions in participants with life-limiting illnesses was conducted, and random-effects meta-analyses and metaregression were performed. CENTRAL, MEDLINE, and EMBASE (2009–2014) were searched with no language restrictions. Results One hundred and eight RCTs representing 15,560 patients were included. The weighted estimate for MD at the primary endpoint was 23.1% (95% confidence interval [CI] 19.3, 27.4). Larger MD proportions were associated with increasing numbers of questions/tests requested (odds ratio [OR], 1.19; 95% CI 1.05, 1.35) and with longer study duration (OR, 1.09; 95% CI 1.02, 1.17). Meta-analysis found evidence of differential rates of MD between trial arms, which varied in direction (OR, 1.04; 95% CI 0.90, 1.20; I² = 35.9%, P = 0.001). Despite randomization, MD in the intervention arms (vs. control) were more likely to be attributed to disease progression unrelated to the intervention (OR, 1.31; 95% CI 1.02, 1.69). This was not the case for MD due to death (OR, 0.92; 95% CI 0.78, 1.08). Conclusion The overall proportion and differential rates and reasons for MD reduce the power of, and potentially introduce bias into, palliative care trials.


Research Synthesis Methods | 2018

A comparison of heterogeneity variance estimators in simulated random-effects meta-analyses

Dean Langan; Julian P. T. Higgins; Dan Jackson; Jack Bowden; Areti Angeliki Veroniki; Evangelos Kontopantelis; Wolfgang Viechtbauer; Mark Simmonds

Studies combined in a meta-analysis often have differences in their design and conduct that can lead to heterogeneous results. A random-effects model accounts for these differences by allowing the underlying study effects to vary, and includes a heterogeneity variance parameter. The DerSimonian-Laird method is often used to estimate the heterogeneity variance, but simulation studies have found that the method can be biased and other methods are available. This paper compares the properties of nine different heterogeneity variance estimators using simulated meta-analysis data. Simulated scenarios include studies of equal size and of moderate and large differences in size. Results confirm that the DerSimonian-Laird estimator is negatively biased in scenarios with small studies and in scenarios with a rare binary outcome. Results also show the Paule-Mandel method has considerable positive bias in meta-analyses with large differences in study size. We recommend the method of restricted maximum likelihood (REML) to estimate the heterogeneity variance over other methods. However, considering that meta-analyses of health studies typically contain few studies, the heterogeneity variance estimate should not be used as a reliable gauge of the extent of heterogeneity in a meta-analysis. The estimated summary effect of the meta-analysis and its confidence interval derived from the Hartung-Knapp-Sidik-Jonkman method are more robust to changes in the heterogeneity variance estimate and show minimal deviation from the nominal coverage of 95% under most of our simulated scenarios.
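As a hedged sketch of the combination the paper recommends considering (not the authors' simulation code), the Python snippet below estimates the heterogeneity variance by restricted maximum likelihood and then computes a Hartung-Knapp-Sidik-Jonkman confidence interval for the summary effect, using standard formulas and made-up example data.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def reml_tau2(y, v, upper=100.0):
    """REML estimate of tau^2: maximise the restricted log-likelihood
    (here by minimising its negative, dropping constants)."""
    def neg_restricted_loglik(tau2):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)
        return 0.5 * (np.sum(np.log(v + tau2)) + np.log(np.sum(w))
                      + np.sum(w * (y - mu) ** 2))
    return minimize_scalar(neg_restricted_loglik, bounds=(0.0, upper), method="bounded").x

def hksj_summary(y, v, tau2, level=0.95):
    """Random-effects summary estimate with a Hartung-Knapp-Sidik-Jonkman interval."""
    k = len(y)
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu) ** 2) / (k - 1)        # HKSJ scale factor
    se = np.sqrt(q / np.sum(w))
    t_crit = stats.t.ppf(1 - (1 - level) / 2, df=k - 1)
    return mu, mu - t_crit * se, mu + t_crit * se

# Made-up effect estimates and within-study variances.
y = np.array([0.10, 0.55, -0.25, 0.60, 0.02])
v = np.array([0.02, 0.04, 0.03, 0.06, 0.02])
tau2 = reml_tau2(y, v)
print("REML tau^2:", tau2)
print("Summary effect and 95% HKSJ CI:", hksj_summary(y, v, tau2))
```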


Journal of Clinical Epidemiology | 2017

Quality of missing data reporting and handling in palliative care trials demonstrates that further development of the CONSORT statement is required : a systematic review

Jamilla Hussain; Martin Bland; Dean Langan; Miriam Johnson; Ian R. White

Objectives Assess (i) the quality of reporting and handling of missing data (MD) in palliative care trials, (ii) whether there are differences in the reporting of criteria specified by the Consolidated Standards of Reporting Trials (CONSORT) 2010 statement compared with those not specified, and (iii) the association of the reporting of MD with journal impact factor and CONSORT endorsement status. Study Design and Setting Systematic review of palliative care randomized controlled trials. CENTRAL, MEDLINE, and EMBASE (2009–2014) were searched. Results One hundred and eight trials (15,560 participants) were included. MD was incompletely reported and not handled in accordance with current guidance. Reporting criteria specified by the CONSORT statement were better reported than those not specified (participant flow, 69%; number of participants not included in the primary outcome analysis, 94%; and the reason for MD, 71%). However, MD in items contributing to scale summaries (10%) and secondary outcomes (9%) were poorly reported, so the proportion of MD stated is likely to be an underestimate. The reason for MD provided was unclear for 54% of participants and only 16% of trials with MD reported a MD sensitivity analysis. The odds of reporting most of the MD and other risk of bias reporting criteria were increased as the journal impact factor increased and in journals that endorsed the CONSORT statement. Conclusion Further development of the CONSORT MD reporting guidance is likely to improve the quality of reporting. Reporting recommendations are provided.


Trials | 2011

Maximising adherence to study protocol within pharmaco-rehabilitation clinical trials

Suzanne Hartley; Sharon P Ruddock; Bipin Bhakta; John Pearn; Lorna Barnard; Alison Fergusson; Dean Langan; Amanda Farrin

Background The Dopamine Augmented Rehabilitation in Stroke (DARS) trial is a double-blind, placebo-controlled trial investigating the impact of co-careldopa/placebo in combination with routine NHS occupational/physical therapy on functional outcome in people with acute stroke. The trial involves participants taking the Investigational Medicinal Product (IMP)/placebo prior to therapy sessions while in the acute stroke unit and following hospital discharge. Stroke survivors may have significant residual impairments such as weakness, aphasia, visual disturbance, cognitive problems and mood disorders, which may affect their ability to comply with the DARS medication/therapy schedule.


Stata Journal | 2012

Graphical augmentations to the funnel plot to assess the impact of a new study on an existing meta-analysis

Michael J. Crowther; Dean Langan; Alex J. Sutton


Journal of Clinical Epidemiology | 2015

Missing data in randomised controlled trials testing palliative interventions pose a significant risk of bias and loss of power

Jamilla Hussain; Ian R. White; Dean Langan; Miriam Johnson; David Torgerson; Martin Bland

Collaboration


Dive into Dean Langan's collaborations.

Top Co-Authors

Ian R. White

University College London

Miriam Johnson

Hull York Medical School

Dan Jackson

University of Cambridge