Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Joris A. H. de Groot is active.

Publication


Featured research published by Joris A. H. de Groot.


BMC Medical Research Methodology | 2014

External validation of multivariable prediction models: a systematic review of methodological conduct and reporting

Gary S. Collins; Joris A. H. de Groot; Susan Dutton; Omar Omar; Milensu Shanyinde; Abdelouahid Tajar; Merryn Voysey; Rose Wharton; Ly-Mee Yu; Karel G.M. Moons; Douglas G. Altman

Background Before considering whether to use a multivariable (diagnostic or prognostic) prediction model, it is essential that its performance be evaluated in data that were not used to develop the model (referred to as external validation). We critically appraised the methodological conduct and reporting of external validation studies of multivariable prediction models. Methods We conducted a systematic review of articles describing some form of external validation of one or more multivariable prediction models indexed in PubMed core clinical journals published in 2010. Study data were extracted in duplicate on design, sample size, handling of missing data, reference to the original study developing the prediction models, and predictive performance measures. Results 11,826 articles were identified and 78 were included for full review, which described the evaluation of 120 prediction models in participant data that were not used to develop the model. Thirty-three articles described both the development of a prediction model and an evaluation of its performance on a separate dataset, and 45 articles described only the evaluation of an existing published prediction model on another dataset. Fifty-seven percent of the prediction models were presented and evaluated as simplified scoring systems. Sixteen percent of articles failed to report the number of outcome events in the validation datasets. Fifty-four percent of studies made no explicit mention of missing data. Sixty-seven percent did not report evaluating model calibration, whilst most studies evaluated model discrimination. It was often unclear whether the reported performance measures were for the full regression model or for the simplified models. Conclusions The vast majority of studies describing some form of external validation of a multivariable prediction model were poorly reported, with key details frequently not presented. The validation studies were characterised by poor design, inappropriate handling and acknowledgement of missing data, and omission of calibration, one of the key performance measures of prediction models, from the publication. It may therefore not be surprising that an overwhelming majority of developed prediction models are not used in practice, when there is a dearth of well-conducted and clearly reported (external validation) studies describing their performance on independent participant data.
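
Since the review found that discrimination was usually reported but calibration was not, a brief sketch of how both are typically estimated on external data may be useful. This is a generic illustration, not code from the paper; it assumes a logistic prediction model whose linear predictor (log-odds) can be computed for each validation subject, and all function and variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def external_validation_metrics(linear_predictor, outcome):
    """Discrimination (c-statistic) and calibration (intercept and slope)
    of a logistic prediction model evaluated on external data."""
    lp = np.asarray(linear_predictor, dtype=float)
    y = np.asarray(outcome, dtype=int)

    # Discrimination: area under the ROC curve (the c-statistic).
    c_stat = roc_auc_score(y, lp)

    # Calibration slope: coefficient of the linear predictor in a refit.
    slope_fit = sm.GLM(y, sm.add_constant(lp),
                       family=sm.families.Binomial()).fit()
    cal_slope = slope_fit.params[1]

    # Calibration-in-the-large: intercept with the slope fixed at 1,
    # achieved by entering the linear predictor as an offset.
    int_fit = sm.GLM(y, np.ones((len(y), 1)), offset=lp,
                     family=sm.families.Binomial()).fit()
    cal_intercept = int_fit.params[0]

    return c_stat, cal_intercept, cal_slope
```

A well-calibrated model has an intercept near 0 and a slope near 1; reporting both alongside the c-statistic addresses the gap the review identifies.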


BMJ | 2011

Verification problems in diagnostic accuracy studies: consequences and solutions

Joris A. H. de Groot; Patrick M. Bossuyt; Johannes B. Reitsma; Anne Wilhelmina Saskia Rutjes; Nandini Dendukuri; Kristel J.M. Janssen; Karel G.M. Moons

The accuracy of a diagnostic test or combination of tests (such as in a diagnostic model) is the ability to correctly identify patients with or without the target disease. In studies of diagnostic accuracy, the results of the test or model under study are verified by comparing them with results of a reference standard, applied to the same patients, to verify disease status (see first panel in figure).1 Measures such as predictive values, post-test probabilities, ROC (receiver operating characteristics) curves, sensitivity, specificity, likelihood ratios, and odds ratios express how well the results of an index test agree with the outcome of the reference standard.2 Biased and exaggerated estimates of diagnostic accuracy can lead to inefficiencies in diagnostic testing in practice, unnecessary costs, and physicians making incorrect treatment decisions. [Figure: diagnostic accuracy studies with (a) complete verification by the same reference standard, (b) partial verification, or (c) differential verification.] The reference standard ideally provides error-free classification of the presence or absence of the disease outcome. In some cases, it is not possible to verify the definitive presence or absence of disease in all patients with the (single) reference standard, which may result in bias. In this paper, we describe the most important types of disease verification problems using examples from published diagnostic accuracy studies. We also propose solutions to alleviate the associated biases. Often not all study subjects who undergo the index test receive the reference standard, leading to missing data on disease outcome (see middle panel in figure). The bias associated with such situations of partial verification is known as partial verification bias, work-up bias, or referral bias.3 4 5 Clinical examples of partial verification: various mechanisms can lead to partial verification (see examples in table 1). [Table 1: examples of diagnostic accuracy studies with problems in disease verification.] When the condition of interest …
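
One standard remedy in the verification-bias literature (used, for instance, by the Begg and Greenes correction that appears in the related papers below) assumes that the decision to verify depends only on the index-test result. Under that assumption the predictive values, which are estimable in the verified subset, can be converted back to sensitivity by Bayes' theorem. A sketch in generic notation, not a formula reproduced from the article:

```latex
% Assuming verification depends only on T:
% P(D+|T+) and P(D+|T-) are estimated among verified patients;
% P(T+) and P(T-) are estimated among all tested patients.
\mathrm{Se} = P(T^{+}\mid D^{+})
  = \frac{P(D^{+}\mid T^{+})\,P(T^{+})}
         {P(D^{+}\mid T^{+})\,P(T^{+}) + P(D^{+}\mid T^{-})\,P(T^{-})}
```

Specificity follows analogously by replacing D+ with D− throughout.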


BMJ | 2013

Value of composite reference standards in diagnostic research

Christiana A. Naaktgeboren; Loes C. M. Bertens; Maarten van Smeden; Joris A. H. de Groot; Karel G.M. Moons; Johannes B. Reitsma

Combining several tests is a common way to improve the final classification of disease status in diagnostic accuracy studies but is often used ambiguously. This article gives advice on proper use and reporting of composite reference standards
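
As a generic illustration of what such a composite rule can look like (a hypothetical example, not one taken from the article), disease status might be defined as positive when a highly specific component test is positive, or when a second test is positive together with a clinical criterion:

```python
def composite_reference(culture_pos: bool, pcr_pos: bool, symptomatic: bool) -> bool:
    """Hypothetical composite reference standard: positive if culture is
    positive, or if PCR positivity is accompanied by compatible symptoms.
    The combination rule should be fixed and reported before data analysis."""
    return culture_pos or (pcr_pos and symptomatic)
```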


Heart | 2013

Hybrid myocardial perfusion SPECT/CT coronary angiography and invasive coronary angiography in patients with stable angina pectoris lead to similar treatment decisions

Jeroen Schaap; Joris A. H. de Groot; Koen Nieman; W. Bob Meijboom; S. Matthijs Boekholdt; Martijn C. Post; Jan Van der Heyden; Thom L. de Kroon; Benno J. Rensing; Karel G.M. Moons; J. Fred Verzijlbergen

Objectives To evaluate to what extent treatment decisions for patients with stable angina pectoris can be made based on hybrid myocardial perfusion single-photon emission CT (SPECT) and CT coronary angiography (CCTA). It has been shown that hybrid SPECT/CCTA has good performance in the diagnosis of significant coronary artery disease (CAD). The question remains whether these imaging results lead to similar treatment decisions as compared to standalone SPECT and invasive coronary angiography (CA). Methods We prospectively included 107 patients (mean age 62.8±10.0 years, 69% male) with stable anginal complaints and an intermediate to high pre-test likelihood for CAD. Hybrid SPECT/CCTA was performed prior to CA in all patients. The study outcome was the treatment decision categorised as: no revascularisation, percutaneous coronary intervention (PCI) or coronary artery bypass grafting (CABG). Treatment decisions were made by two interventional cardiologists and one cardiothoracic surgeon in two steps: first, based on the results of hybrid SPECT/CCTA; second, based on SPECT and CA. Results Revascularisation (PCI or CABG) was indicated in 54 (50%) patients based on SPECT and CA. Percentage agreement of treatment decisions in all patients based on hybrid SPECT/CCTA versus SPECT and CA on the necessity of revascularisation was 92%. Percentage agreement of treatment decisions in patients with matched, unmatched and normal hybrid SPECT/CCTA findings was 95%, 84% and 100%, respectively. Conclusions Panel evaluation shows that patients could be accurately indicated for and deferred from revascularisation based on hybrid SPECT/CCTA.
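
The agreement figures above are simple proportions of identical decisions; a minimal sketch of the computation with hypothetical decision labels (not data from the study):

```python
def percent_agreement(decisions_a, decisions_b):
    """Share of patients for whom two strategies give the same treatment
    decision (e.g. 'none', 'PCI', 'CABG')."""
    assert len(decisions_a) == len(decisions_b)
    matches = sum(a == b for a, b in zip(decisions_a, decisions_b))
    return 100.0 * matches / len(decisions_a)

# Hypothetical example: 3 of 4 decisions agree -> 75.0
print(percent_agreement(["PCI", "none", "CABG", "PCI"],
                        ["PCI", "none", "CABG", "none"]))
```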


JAMA | 2017

Effect of Fibrinogen Concentrate on Intraoperative Blood Loss Among Patients With Intraoperative Bleeding During High-Risk Cardiac Surgery: A Randomized Clinical Trial

Süleyman Bilecen; Joris A. H. de Groot; Cor J. Kalkman; Alexander J. Spanjersberg; George J. Brandon Bravo Bruinsma; Karel G.M. Moons; Arno P. Nierich

Importance Fibrinogen concentrate might partly restore coagulation defects and reduce intraoperative bleeding. Objective To determine whether fibrinogen concentrate infusion dosed to achieve a plasma fibrinogen level of 2.5 g/L in high-risk cardiac surgery patients with intraoperative bleeding reduces intraoperative blood loss. Design, Setting, and Participants A randomized, placebo-controlled, double-blind clinical trial conducted in Isala Zwolle, the Netherlands (February 2011-January 2015), involving patients undergoing elective, high-risk cardiac surgery (ie, combined coronary artery bypass graft [CABG] surgery and valve repair or replacement surgery, the replacement of multiple valves, aortic root reconstruction, or reconstruction of the ascending aorta or aortic arch) with intraoperative bleeding (blood volume between 60 and 250 mL suctioned from the thoracic cavity in a period of 5 minutes), who were randomized to receive either fibrinogen concentrate or placebo. Interventions Intravenous, single-dose administration of fibrinogen concentrate (n = 60) or placebo (n = 60), targeted to achieve a postinfusion plasma fibrinogen level of 2.5 g/L. Main Outcomes and Measures The primary outcome was blood loss in milliliters between intervention (ie, after removal of cardiopulmonary bypass) and closure of chest. Safety variables (within 30 days) included: in-hospital mortality, myocardial infarction, cerebrovascular accident or transient ischemic attack, renal insufficiency or failure, venous thromboembolism, pulmonary embolism, and operative complications. Results Among 120 patients (mean age, 71 [SD, 10] years; 37 women [31%]) included in the study, combined CABG and valve repair or replacement surgery comprised 72% of procedures, with a mean (SD) cardiopulmonary bypass time of 200 (83) minutes. For the primary outcome, median blood loss in the fibrinogen group was 50 mL (interquartile range [IQR], 29-100 mL) compared with 70 mL (IQR, 33-145 mL) in the control group (P = .19), an absolute difference of 20 mL (95% CI, −13 to 35 mL). There were 6 cases of stroke or transient ischemic attack (4 in the fibrinogen group); 4 myocardial infarctions (3 in the fibrinogen group); 2 deaths (both in the fibrinogen group); 5 cases with renal insufficiency or failure (3 in the fibrinogen group); and 9 cases with reoperative thoracotomy (4 in the fibrinogen group). Conclusions and Relevance Among patients with intraoperative bleeding during high-risk cardiac surgery, administration of fibrinogen concentrate, compared with placebo, resulted in no significant difference in the amount of intraoperative blood loss. Trial Registration clinicaltrials.gov Identifier: NCT01124981; EudraCT No: 2009-018086-12


Epidemiology | 2011

Adjusting for Differential-verification Bias in Diagnostic-accuracy Studies: A Bayesian Approach

Joris A. H. de Groot; Nandini Dendukuri; Kristel J.M. Janssen; Johannes B. Reitsma; Patrick M. Bossuyt; Karel G.M. Moons

In studies of diagnostic accuracy, the performance of an index test is assessed by verifying its results against those of a reference standard. If verification of index-test results by the preferred reference standard can be performed only in a subset of subjects, an alternative reference test could be given to the remainder. The drawback of this so-called differential-verification design is that the second reference test is often of lesser quality, or defines the target condition in a different way. Incorrectly treating results of the 2 reference standards as equivalent will lead to differential-verification bias. The Bayesian methods presented in this paper use a single model to (1) acknowledge the different nature of the 2 reference standards and (2) make simultaneous inferences about the population prevalence and the sensitivity, specificity, and predictive values of the index test with respect to both reference tests, in relation to latent disease status. We illustrate this approach using data from a study on the accuracy of the elbow extension test for diagnosis of elbow fractures in patients with elbow injury, using either radiography or follow-up as reference standards.
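
Schematically, such models use a latent-class structure. The following is a sketch of that structure under the common conditional-independence assumption, not the authors' exact specification: with prevalence π, latent disease status D, index test T with sensitivity s_T and specificity c_T, and reference standard j with sensitivity s_j and specificity c_j, a patient verified by reference j and observed to have results (t, r) contributes

```latex
P(T = t,\, R_j = r) \;=\;
  \pi \, s_T^{\,t}(1-s_T)^{1-t} \, s_j^{\,r}(1-s_j)^{1-r}
  \;+\; (1-\pi)\,(1-c_T)^{t}\, c_T^{\,1-t}\,(1-c_j)^{r}\, c_j^{\,1-r}
```

with Beta priors placed on π and on each sensitivity and specificity.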


Annals of Epidemiology | 2011

Correcting for Partial Verification Bias: A Comparison of Methods

Joris A. H. de Groot; Kristel J.M. Janssen; Aeilko H. Zwinderman; Patrick M. Bossuyt; Johannes B. Reitsma; Karel G.M. Moons

PURPOSE A common problem in diagnostic research is that the reference standard has not been carried out in all patients. This partial verification may lead to biased accuracy measures of the test under study. The authors studied the performance of multiple imputation and the conventional correction method proposed by Begg and Greenes under a range of different situations of partial verification. METHODS In a series of simulations using a previously published deep venous thrombosis data set (n = 1292), the authors set the outcome of the reference standard to missing based on various underlying mechanisms and by varying the total number of missing values. They then compared the performance of the different correction methods. RESULTS The results of the study show that when the mechanism of missing reference data is known, accuracy measures can easily be correctly adjusted using either the Begg and Greenes method or multiple imputation. In situations where the mechanism of missing reference data is complex or unknown, we recommend using multiple imputation methods. CONCLUSIONS These methods can easily be applied to both continuous and categorical variables, are readily available in statistical software, and give reliable estimates of the missing reference data.
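
A minimal sketch of the multiple-imputation approach the authors recommend, assuming the missing reference-standard results are imputed from the index-test result and covariates via a logistic model. All names are illustrative, and a full implementation would also redraw the imputation-model coefficients from their approximate posterior on each round, as Rubin's rules require:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def mi_corrected_sensitivity(X, disease, n_imputations=10, seed=0):
    """Multiply impute missing reference-standard results (NaN entries of
    `disease`) from the index test and covariates in X, then average the
    corrected sensitivity. Column 0 of X is assumed to hold the 0/1 index
    test result."""
    rng = np.random.default_rng(seed)
    verified = ~np.isnan(disease)
    model = LogisticRegression().fit(X[verified], disease[verified].astype(int))
    p_missing = model.predict_proba(X[~verified])[:, 1]

    sens = []
    for _ in range(n_imputations):
        d = disease.copy()
        d[~verified] = rng.binomial(1, p_missing)  # draw the missing statuses
        test_pos = X[:, 0] == 1
        sens.append((test_pos & (d == 1)).sum() / (d == 1).sum())
    return float(np.mean(sens))
```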


Lancet Infectious Diseases | 2017

A host-protein based assay to differentiate between bacterial and viral infections in preschool children (OPPORTUNITY): a double-blind, multicentre, validation study.

Chantal Van Houten; Joris A. H. de Groot; Adi Klein; Isaac Srugo; Irena Chistyakov; Wouter de Waal; Clemens B. Meijssen; Wim Avis; Tom F. W. Wolfs; Yael Shachor-Meyouhas; Michal Stein; Elisabeth A. M. Sanders; Louis Bont

BACKGROUND A physician is frequently unable to distinguish bacterial from viral infections. ImmunoXpert is a novel assay combining three proteins: tumour necrosis factor-related apoptosis-inducing ligand (TRAIL), interferon gamma induced protein-10 (IP-10), and C-reactive protein (CRP). We aimed to externally validate the diagnostic accuracy of this assay in differentiating between bacterial and viral infections and to compare this test with commonly used biomarkers. METHODS In this prospective, double-blind, international, multicentre study, we recruited children aged 2-60 months with lower respiratory tract infection or clinical presentation of fever without source at four hospitals in the Netherlands and two hospitals in Israel. A panel of three experienced paediatricians adjudicated a reference standard diagnosis for all patients (ie, bacterial or viral infection) using all available clinical and laboratory information, including a 28-day follow-up assessment. The panel was masked to the assay results. We identified a majority diagnosis when two of three panel members agreed on a diagnosis and a unanimous diagnosis when all three panel members agreed. We calculated the diagnostic performance (ie, sensitivity, specificity, positive predictive value, and negative predictive value) of the index test in differentiating between bacterial (index test positive) and viral (index test negative) infection by comparing the test classification with the reference standard outcome. FINDINGS Between Oct 16, 2013, and March 1, 2015, we recruited 777 children, of whom 577 (mean age 21 months, 56% male) were assessed. The majority of the panel diagnosed 71 cases as bacterial infections and 435 as viral infections; in another 71 patients the panel diagnosis was inconclusive. The assay distinguished bacterial from viral infections with a sensitivity of 86·7% (95% CI 75·8-93·1), a specificity of 91·1% (87·9-93·6), a positive predictive value of 60·5% (49·9-70·1), and a negative predictive value of 97·8% (95·6-98·9). In the clearer cases with a unanimous panel diagnosis (n=354), sensitivity was 87·8% (74·5-94·7), specificity 93·0% (89·6-95·3), positive predictive value 62·1% (49·2-73·4), and negative predictive value 98·3% (96·1-99·3). INTERPRETATION This external validation study shows the diagnostic value of a three-host-protein-based assay for differentiating between bacterial and viral infections in children with lower respiratory tract infection or fever without source. This diagnostic assay, based on CRP, TRAIL, and IP-10, has the potential to reduce antibiotic misuse in young children. FUNDING MeMed Diagnostics.
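
The reported performance measures all derive from a two-by-two table of the index-test classification against the panel's reference diagnosis; a minimal sketch with made-up counts (the paper's actual tables are not reproduced here):

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table of index test
    (positive = bacterial) versus reference-standard diagnosis."""
    return {
        "sensitivity": tp / (tp + fn),  # P(test positive | bacterial)
        "specificity": tn / (tn + fp),  # P(test negative | viral)
        "ppv":         tp / (tp + fp),  # P(bacterial | test positive)
        "npv":         tn / (tn + fn),  # P(viral | test negative)
    }

# Hypothetical counts only.
print(diagnostic_performance(tp=60, fp=40, fn=10, tn=400))
```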


American Journal of Epidemiology | 2012

Adjusting for Partial Verification or Workup Bias in Meta-Analyses of Diagnostic Accuracy Studies

Joris A. H. de Groot; Nandini Dendukuri; Kristel J.M. Janssen; Johannes B. Reitsma; James M. Brophy; Lawrence Joseph; Patrick M. Bossuyt; Karel G.M. Moons

A key requirement in the design of diagnostic accuracy studies is that all study participants receive both the test under evaluation and the reference standard test. For a variety of practical and ethical reasons, sometimes only a proportion of patients receive the reference standard, which can bias the accuracy estimates. Numerous methods have been described for correcting this partial verification bias or workup bias in individual studies. In this article, the authors describe a Bayesian method for obtaining adjusted results from a diagnostic meta-analysis when partial verification or workup bias is present in a subset of the primary studies. The method corrects for verification bias without having to exclude primary studies with verification bias, thus preserving the main advantages of a meta-analysis: increased precision and better generalizability. The results of this method are compared with the existing methods for dealing with verification bias in diagnostic meta-analyses. For illustration, the authors use empirical data from a systematic review of studies of the accuracy of the immunohistochemistry test for diagnosis of human epidermal growth factor receptor 2 status in breast cancer patients.
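
Schematically, such a meta-analysis pools study-specific accuracies on the logit scale using a bivariate random-effects layer, while studies with partial verification contribute a likelihood built only from their verified counts under an assumed verification mechanism. The following shows only the generic random-effects layer of this model class, not the authors' exact specification:

```latex
% Bivariate random-effects layer for study i
\begin{pmatrix} \operatorname{logit}(Se_i) \\ \operatorname{logit}(Sp_i) \end{pmatrix}
\sim \mathcal{N}\!\left( \begin{pmatrix} \mu_{Se} \\ \mu_{Sp} \end{pmatrix},\; \Sigma \right)
```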


BMC Medical Research Methodology | 2016

No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.

Maarten van Smeden; Joris A. H. de Groot; Karel G. M. Moons; Gary S. Collins; Douglas G. Altman; Marinus J.C. Eijkemans; Johannes B. Reitsma

Background Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for substantial differences between these extensive simulation studies. Methods The current study uses Monte Carlo simulations to evaluate small-sample bias, coverage of confidence intervals, and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth's correction, are compared. Results The results show that, besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. Conclusions The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
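
As an illustration of the penalized estimation the authors evaluate: Firth's correction maximizes the log-likelihood plus half the log-determinant of the Fisher information (a Jeffreys-prior penalty), which removes first-order small-sample bias and yields finite coefficients even under separation. A minimal numpy sketch for logistic regression, using a simplified Newton iteration rather than production-grade code:

```python
import numpy as np

def firth_logistic(X, y, n_iter=50, tol=1e-8):
    """Firth-penalized logistic regression: maximizes
    l(beta) + 0.5 * log|X'WX|. X should include an intercept column."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        info_inv = np.linalg.inv(X.T @ (W[:, None] * X))
        # Hat-matrix diagonals h_i of W^(1/2) X (X'WX)^(-1) X' W^(1/2)
        h = np.einsum("ij,jk,ik->i", X, info_inv, X) * W
        # Firth-modified score: X' (y - p + h * (0.5 - p))
        score = X.T @ (y - p + h * (0.5 - p))
        step = info_inv @ score  # Newton step with unpenalized information
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```

With EPV defined as (number of outcome events) / (number of candidate predictors), such a routine can be compared against plain maximum likelihood across simulated low-EPV datasets, which is essentially the comparison the paper performs.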

Collaboration


Dive into Joris A. H. de Groot's collaboration.

Top Co-Authors

Nandini Dendukuri

McGill University Health Centre

Benno J. Rensing

Erasmus University Rotterdam
