
Publication


Featured research published by Joie Ensor.


BMJ | 2016

External validation of clinical prediction models using big datasets from e-health records or IPD meta-analysis: opportunities and challenges

Richard D Riley; Joie Ensor; Kym I. E. Snell; Thomas P. A. Debray; Doug Altman; Karel G.M. Moons; Gary S. Collins

Access to big datasets from e-health records and individual participant data (IPD) meta-analysis is signalling a new era of external validation studies for clinical prediction models. In this article, the authors illustrate novel opportunities for external validation in big, combined datasets, while drawing attention to methodological challenges and reporting issues.


Statistics in Medicine | 2017

Meta‐analysis using individual participant data: one‐stage and two‐stage approaches, and why they may differ

Danielle L. Burke; Joie Ensor; Richard D Riley

Meta‐analysis using individual participant data (IPD) obtains and synthesises the raw, participant‐level data from a set of relevant studies. The IPD approach is becoming an increasingly popular tool as an alternative to traditional aggregate data meta‐analysis, especially as it avoids reliance on published results and provides an opportunity to investigate individual‐level interactions, such as treatment‐effect modifiers. There are two statistical approaches for conducting an IPD meta‐analysis: one‐stage and two‐stage. The one‐stage approach analyses the IPD from all studies simultaneously, for example, in a hierarchical regression model with random effects. The two‐stage approach derives aggregate data (such as effect estimates) in each study separately and then combines these in a traditional meta‐analysis model. There have been numerous comparisons of the one‐stage and two‐stage approaches via theoretical consideration, simulation and empirical examples, yet there remains confusion regarding when each approach should be adopted, and indeed why they may differ. In this tutorial paper, we outline the key statistical methods for one‐stage and two‐stage IPD meta‐analyses, and provide 10 key reasons why they may produce different summary results. We explain that most differences arise because of different modelling assumptions, rather than the choice of one‐stage or two‐stage itself. We illustrate the concepts with recently published IPD meta‐analyses, summarise key statistical software and provide recommendations for future IPD meta‐analyses.
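The two-stage approach described above can be sketched in a few lines: the first stage produces a per-study effect estimate and variance, and the second stage pools them, here with the DerSimonian-Laird random-effects estimator. The estimates below are hypothetical, and this is only one of several second-stage specifications the paper compares:

```python
import numpy as np

def two_stage_meta_analysis(estimates, variances):
    """Second stage of a two-stage IPD meta-analysis: pool per-study
    effect estimates with a DerSimonian-Laird random-effects model."""
    y = np.asarray(estimates, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                               # inverse-variance (fixed-effect) weights
    y_fe = np.sum(w * y) / np.sum(w)          # fixed-effect pooled estimate
    q = np.sum(w * (y - y_fe) ** 2)           # Cochran's Q heterogeneity statistic
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # DL between-study variance estimate
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)  # random-effects pooled estimate
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# hypothetical per-study treatment-effect estimates (first-stage output)
est = [0.21, 0.35, 0.10, 0.28]
var = [0.010, 0.020, 0.015, 0.008]
pooled, se, tau2 = two_stage_meta_analysis(est, var)
```

As the abstract notes, whether this gives the same answer as a one-stage hierarchical model depends mainly on the modelling assumptions (e.g., how between-study variance is estimated), not on the two-stage structure itself.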


BMJ | 2017

A guide to systematic review and meta-analysis of prediction model performance

Thomas P. A. Debray; Johanna A A G Damen; Kym I. E. Snell; Joie Ensor; Lotty Hooft; Johannes B. Reitsma; Richard D Riley; Karel G.M. Moons

Validation of prediction models is highly recommended and increasingly common in the literature. A systematic review of validation studies is therefore helpful, with meta-analysis needed to summarise the predictive performance of the model being validated across different settings and populations. This article provides guidance for researchers systematically reviewing and meta-analysing the existing evidence on a specific prediction model, discusses good practice when quantitatively summarising the predictive performance of the model across studies, and provides recommendations for interpreting meta-analysis estimates of model performance. We present key steps of the meta-analysis and illustrate each step in an example review, by summarising the discrimination and calibration performance of the EuroSCORE for predicting operative mortality in patients undergoing coronary artery bypass grafting.


Journal of Biometrics & Biostatistics | 2014

Meta-Analysis of Test Accuracy Studies with Multiple and Missing Thresholds: A Multivariate-Normal Model

Richard D Riley; Yemisi Takwoingi; Thomas Trikalinos; Apratim Guha; Atanu Biswas; Joie Ensor; R. Katie Morris; Jonathan J. Deeks

Background: When meta-analysing studies examining the diagnostic/predictive accuracy of classifications based on a continuous test, each study may provide results for one or more thresholds, which can vary across studies. Researchers typically meta-analyse each threshold independently. We consider a multivariate meta-analysis to synthesise results for all thresholds simultaneously and account for their correlation. Methods: We assume that the logit sensitivity and logit specificity estimates follow a multivariate-normal distribution within studies. We model the true logit sensitivity (logit specificity) as monotonically decreasing (increasing) functions of the continuous threshold. This produces a summary ROC curve, a summary estimate of sensitivity and specificity for each threshold, and reveals the heterogeneity in test accuracy across studies. Application is made to 13 studies of protein:creatinine ratio (PCR) for detecting significant proteinuria in pregnancy that each report up to nine thresholds, with 23 distinct thresholds across studies. Results: In the example there were large within-study and between-study correlations, which were accounted for by the method. A cubic relationship on the logit scale was a better fit for the summary ROC curve than a linear or quadratic one. Between-study heterogeneity was substantial. Based on the summary ROC curve, a PCR value of 0.30 to 0.35 corresponded to maximal pair of summary sensitivity and specificity. Limitations of the proposed model include the need to posit parametric functions for the relationship of sensitivity and specificity with the threshold, to ensure correct ordering of summary threshold results, and the multivariate-normal approximation to the within-study sampling distribution. Conclusion: The joint analysis of test performance data reported over multiple thresholds is feasible. 
The proposed approach handles different sets of available thresholds per study, and produces a summary ROC curve and summary results for each threshold to inform decision-making.
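As a rough illustration of the model's inputs, the two-by-two counts reported at each threshold can be converted to logit sensitivity and logit specificity with approximate (delta-method) within-study variances, which then feed the multivariate-normal meta-analysis. The counts below are hypothetical, not taken from the PCR dataset:

```python
import math

def logit_accuracy(tp, fn, tn, fp):
    """Logit-transformed sensitivity and specificity from a 2x2 table,
    with approximate delta-method within-study variances, as used as
    inputs to a multivariate-normal meta-analysis model."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    logit = lambda p: math.log(p / (1 - p))
    var_logit_sens = 1 / tp + 1 / fn   # delta-method variance of logit(sens)
    var_logit_spec = 1 / tn + 1 / fp   # delta-method variance of logit(spec)
    return (logit(sens), var_logit_sens), (logit(spec), var_logit_spec)

# hypothetical 2x2 counts at one threshold of a continuous test
(ls, vls), (lsp, vlsp) = logit_accuracy(tp=45, fn=5, tn=80, fp=20)
```

The model then constrains the summary logit sensitivity (specificity) to decrease (increase) monotonically with the threshold, which is what yields a coherent summary ROC curve across all 23 thresholds at once.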


BMJ Open | 2016

Systematic review of prognostic models for recurrent venous thromboembolism (VTE) post-treatment of first unprovoked VTE

Joie Ensor; Richard D Riley; David Moore; Kym I. E. Snell; Susan Bayliss; David Fitzmaurice

Objectives: To review studies developing or validating a prognostic model for individual venous thromboembolism (VTE) recurrence risk following cessation of therapy for a first unprovoked VTE. Prediction of recurrence risk is crucial to informing patient prognosis and treatment decisions. The review aims to determine whether reliable prognostic models exist and, if not, what further research is needed within the field. Design: Bibliographic databases (including MEDLINE, EMBASE and the Cochrane Library) were searched using index terms relating to the clinical field and prognosis. Screening of titles, abstracts and subsequently full texts was conducted by two reviewers independently using predefined criteria. Quality assessment and critical appraisal of included full texts was based on an early version of PROBAST (Prediction study Risk Of Bias Assessment Tool) for risk of bias and applicability in prognostic model studies. Setting: Studies in any setting were included. Primary and secondary outcome measures: The primary outcome for the review was the predictive accuracy of identified prognostic models in relation to VTE recurrence risk. Results: Three unique prognostic models were identified: the HERDOO2 score, the Vienna prediction model and the DASH score. Quality assessment highlighted that the Vienna and DASH models were developed with generally strong methodology, but the HERDOO2 model had many methodological concerns. Further, all models were considered to be at least at moderate risk of bias, primarily due to the need for further external validation before use in practice. Conclusions: Although the Vienna model shows the most promise (based on strong development methodology, applicability and having some external validation), none of the models can be considered ready for use until further, external and robust validation is performed in new data.
Any new models should consider the inclusion of predictors found to be consistently important in existing models (sex, site of index event, D-dimer), and take heed of several methodological issues identified through this review. PROSPERO registration number CRD42013003494.


Journal of Clinical Epidemiology | 2016

Multivariate meta-analysis of individual participant data helped externally validate the performance and implementation of a prediction model

Kym I. E. Snell; Harry Hua; Thomas P. A. Debray; Joie Ensor; Maxime P. Look; Karel G.M. Moons; Richard D Riley

Objectives: Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation. Study Design and Setting: We suggest multivariate meta-analysis for jointly synthesizing calibration and discrimination performance, while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of “good” performance in new populations. This allows different implementation strategies (e.g., recalibration) to be compared. Application is made to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality. Results: In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of “good” performance (defined by C statistic ≥0.7 and calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without recalibration. For the DVT model, even with recalibration, there was only a 0.03 probability of “good” performance. Conclusion: Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies.
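A minimal sketch of the final step, assuming hypothetical summary values: given the meta-analysis estimates of average performance and between-study covariance for the pair (C statistic, calibration slope), the probability of “good” performance in a new population can be approximated by Monte Carlo draws from the predictive distribution (ignoring, for simplicity, the uncertainty in the summary estimates themselves):

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical multivariate meta-analysis output: average performance and
# between-study covariance of (C statistic, calibration slope)
mean = np.array([0.74, 1.00])
cov = np.array([[0.0009, 0.0002],
                [0.0002, 0.0100]])

# draw (C, slope) pairs for many hypothetical new populations
draws = rng.multivariate_normal(mean, cov, size=100_000)
c_stat, slope = draws[:, 0], draws[:, 1]

# "good" performance as defined in the paper: C >= 0.7 and slope in [0.9, 1.1]
p_good = np.mean((c_stat >= 0.7) & (slope >= 0.9) & (slope <= 1.1))
```

The joint probability matters because C statistic and calibration slope are correlated; multiplying the two marginal probabilities would get it wrong when that correlation is strong.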


Systematic Reviews | 2015

Meta-analysis of test accuracy studies: an exploratory method for investigating the impact of missing thresholds

Richard D Riley; Ikhlaaq Ahmed; Joie Ensor; Yemisi Takwoingi; Amanda J Kirkham; R. Katie Morris; J. Pieter Noordzij; Jonathan J Deeks

Background: Primary studies examining the accuracy of a continuous test evaluate its sensitivity and specificity at one or more thresholds. Meta-analysts then usually perform a separate meta-analysis for each threshold. However, the number of studies available for each threshold is often very different, as primary studies are inconsistent in the thresholds reported. Furthermore, of concern is selective reporting bias, because primary studies may be less likely to report a threshold when it gives low sensitivity and/or specificity estimates. This may lead to biased meta-analysis results. We developed an exploratory method to examine the potential impact of missing thresholds on conclusions from a test accuracy meta-analysis. Methods: Our method identifies studies that contain missing thresholds bounded between a pair of higher and lower thresholds for which results are available. The bounded missing threshold results (two-by-two tables) are then imputed, by assuming a linear relationship between threshold value and each of logit-sensitivity and logit-specificity. The imputed results are then added to the meta-analysis, to ascertain if original conclusions are robust. The method is evaluated through simulation, and application made to 13 studies evaluating protein:creatinine ratio (PCR) for detecting proteinuria in pregnancy, with 23 different thresholds ranging from one to seven per study. Results: The simulation shows the imputation method leads to meta-analysis estimates with smaller mean-square error. In the PCR application, it provides 50 additional results for meta-analysis, and their inclusion produces lower test accuracy results than originally identified. For example, at a PCR threshold of 0.16, the summary specificity is 0.80 when using the original data, but 0.66 when also including the imputed data. At a PCR threshold of 0.25, the summary sensitivity is reduced from 0.95 to 0.85 when additionally including the imputed data. Conclusions: The imputation method is a practical tool for researchers (often non-statisticians) to explore the potential impact of missing threshold results on their meta-analysis conclusions. Software is available to implement the method. In the PCR example, it revealed threshold results are vulnerable to the missing data, and so stimulates the need for advanced statistical models or, preferably, individual patient data from primary studies.
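The core imputation step, a linear interpolation on the logit scale between the nearest reported thresholds, can be sketched as follows. The values here are hypothetical, and the actual method imputes full two-by-two tables rather than bare proportions:

```python
import math

def impute_logit(p_lo, p_hi, t_lo, t_hi, t_missing):
    """Impute sensitivity (or specificity) at a missing threshold by
    linear interpolation on the logit scale between the nearest
    thresholds at which results were reported."""
    logit = lambda p: math.log(p / (1 - p))
    inv_logit = lambda x: 1 / (1 + math.exp(-x))
    frac = (t_missing - t_lo) / (t_hi - t_lo)   # position between the bounds
    x = logit(p_lo) + frac * (logit(p_hi) - logit(p_lo))
    return inv_logit(x)

# sensitivity reported at thresholds 0.15 and 0.30; impute at 0.25
sens_imputed = impute_logit(p_lo=0.95, p_hi=0.80,
                            t_lo=0.15, t_hi=0.30, t_missing=0.25)
```

Because sensitivity falls and specificity rises as the threshold increases, interpolating on the logit scale keeps imputed values between the bounding results and inside (0, 1), which is the point of the method's bounding requirement.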


Statistics in Medicine | 2017

One‐stage individual participant data meta‐analysis models: estimation of treatment‐covariate interactions must avoid ecological bias by separating out within‐trial and across‐trial information

Hairui Hua; Danielle L. Burke; Michael J. Crowther; Joie Ensor; Catrin Tudur Smith; Richard D Riley

Stratified medicine utilizes individual‐level covariates that are associated with a differential treatment effect, also known as treatment‐covariate interactions. When multiple trials are available, meta‐analysis is used to help detect true treatment‐covariate interactions by combining their data. Meta‐regression of trial‐level information is prone to low power and ecological bias, and therefore, individual participant data (IPD) meta‐analyses are preferable to examine interactions utilizing individual‐level information. However, one‐stage IPD models are often wrongly specified, such that interactions are based on amalgamating within‐ and across‐trial information. We compare, through simulations and an applied example, fixed‐effect and random‐effects models for a one‐stage IPD meta‐analysis of time‐to‐event data where the goal is to estimate a treatment‐covariate interaction. We show that it is crucial to centre patient‐level covariates by their mean value in each trial, in order to separate out within‐trial and across‐trial information. Otherwise, bias and coverage of interaction estimates may be adversely affected, leading to potentially erroneous conclusions driven by ecological bias. We revisit an IPD meta‐analysis of five epilepsy trials and examine age as a treatment effect modifier. The interaction is −0.011 (95% CI: −0.019 to −0.003; p = 0.004), and thus highly significant, when amalgamating within‐trial and across‐trial information. However, when separating within‐trial from across‐trial information, the interaction is −0.007 (95% CI: −0.019 to 0.005; p = 0.22), and thus its magnitude and statistical significance are greatly reduced. We recommend that meta‐analysts should only use within‐trial information to examine individual predictors of treatment effect and that one‐stage IPD models should separate within‐trial from across‐trial information to avoid ecological bias.
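The recommended centring step can be sketched as follows, on hypothetical IPD (the model itself would be a one-stage survival model fitted to these terms, which is beyond this sketch):

```python
import numpy as np
import pandas as pd

# hypothetical IPD: trial id, treatment indicator, and a patient-level
# covariate (age) that may modify the treatment effect
rng = np.random.default_rng(0)
ipd = pd.DataFrame({
    "trial": np.repeat([1, 2, 3], 100),
    "treat": rng.integers(0, 2, 300),
    "age": rng.normal(50, 10, 300),
})

# centre age by its mean within each trial, so a treatment-age interaction
# uses only within-trial information; the trial means carry the
# across-trial (ecologically biased) information separately
ipd["age_mean"] = ipd.groupby("trial")["age"].transform("mean")
ipd["age_centred"] = ipd["age"] - ipd["age_mean"]

# interaction terms for a one-stage model:
#   treat * age_centred -> within-trial interaction (the estimand of interest)
#   treat * age_mean    -> across-trial interaction (kept as a separate term)
ipd["treat_x_age_within"] = ipd["treat"] * ipd["age_centred"]
ipd["treat_x_age_across"] = ipd["treat"] * ipd["age_mean"]
```

Including both terms in the model, rather than a single `treat * age` product, is what separates the within-trial interaction from across-trial differences and avoids the ecological bias seen in the epilepsy example.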


Systematic Reviews | 2014

Methodological issues and recommendations for systematic reviews of prognostic studies: an example from cardiovascular disease

Janine Dretzke; Joie Ensor; Susan Bayliss; James Hodgkinson; Marie Lordkipanidzé; Richard D Riley; David Fitzmaurice; David Moore

Background: Prognostic factors are associated with the risk of future health outcomes in individuals with a particular health condition. The prognostic ability of such factors is increasingly being assessed in both primary research and systematic reviews. Systematic review methodology in this area is continuing to evolve, reflected in variable approaches to key methodological aspects. The aim of this article was to (i) explore and compare the methodology of systematic reviews of prognostic factors undertaken for the same clinical question, (ii) discuss implications for review findings, and (iii) present recommendations on what might be considered ‘good practice’ approaches. Methods: The sample comprised eight systematic reviews addressing the same clinical question, namely whether ‘aspirin resistance’ (a potential prognostic factor) has prognostic utility relative to future vascular events in patients on aspirin therapy for secondary prevention. A detailed comparison of methods around study identification, study selection, quality assessment, approaches to analysis, and reporting of findings was undertaken and the implications discussed. These were summarised into key considerations that may be transferable to future systematic reviews of prognostic factors. Results: Across systematic reviews addressing the same clinical question, there were considerable differences in the numbers of studies identified and in the overlap between included studies, which could only partially be explained by different study eligibility criteria. Incomplete reporting and differences in terminology within primary studies hampered the study identification and selection process across reviews. Quality assessment was highly variable, and only one systematic review considered a checklist for studies of prognostic questions. There was inconsistency between reviews in approaches towards analysis, synthesis, addressing heterogeneity and reporting of results. Conclusions: Different methodological approaches may ultimately affect the findings and interpretation of systematic reviews of prognostic research, with implications for clinical decision-making.


Systematic Reviews | 2013

Protocol for a systematic review of prognostic models for the recurrence of venous thromboembolism (VTE) following treatment for a first unprovoked VTE

Joie Ensor; Richard D Riley; David Moore; Susan Bayliss; Sue Jowett; David Fitzmaurice

Background: Venous thromboembolism (VTE) is a chronic disease, with fatal recurrences occurring in 5% to 9% of patients, yet it is also one of the best examples of preventable disease. Prognostic models that utilise multiple prognostic factors (demographic, clinical and laboratory patient characteristics) in combination to predict individual outcome risk may allow the identification of patients who would benefit from long-term anticoagulation therapy, and conversely those that would benefit from stopping such therapy due to a low risk of recurrence. The study will systematically review the evidence on potential prognostic models for the recurrence of VTE or adverse outcomes following the cessation of therapy, and synthesise and summarise each model’s prognostic value. The review has been registered with PROSPERO (CRD42013003494). Methods/design: Articles will be sought from the Cochrane Library (CENTRAL, CDSR, DARE, HTA databases), MEDLINE and EMBASE. Trial registers will be searched for ongoing studies, and conference abstracts will be sought. Reference lists and subject experts will be utilised. No restrictions on language of publications will be applied. Studies of any design will be included if they examine, in patients ceasing therapy after at least three months’ treatment with an oral anticoagulant therapy, whether more than one factor in combination is associated with the risk of VTE recurrence or another adverse outcome. Study quality will be assessed using appropriate guidelines for prognostic models. Prognostic models will be summarised qualitatively and, if tested in multiple validation studies, their predictive performance will be summarised using a random-effects meta-analysis model to account for any between-study heterogeneity. Discussion: The results of the review will identify prognostic models for the risk of VTE recurrence or adverse outcome following cessation of therapy for a first unprovoked VTE. These will be informative for clinicians currently treating patients for a first unprovoked VTE and considering whether to stop treatment for particular individuals. The conclusions of the review will also inform the potential development of new prognostic models and clinical prediction rules to identify those at high or low risk of VTE recurrence or adverse outcome following a first unprovoked VTE.

Collaboration


Joie Ensor's top co-authors:

David Moore, University of Birmingham

Susan Bayliss, University of Birmingham

Susan Jowett, University of Cambridge

Janine Dretzke, University of Birmingham