Publication


Featured research published by Yvonne Vergouwe.


Journal of Clinical Epidemiology | 2001

Internal validation of predictive models: Efficiency of some procedures for logistic regression analysis

Ewout W. Steyerberg; Frank E. Harrell; Gerard J. J. M. Borsboom; Marinus J.C. Eijkemans; Yvonne Vergouwe; J. Dik F. Habbema

The performance of a predictive model is overestimated when simply determined on the sample of subjects that was used to construct the model. Several internal validation methods are available that aim to provide a more accurate estimate of model performance in new subjects. We evaluated several variants of split-sample, cross-validation and bootstrapping methods with a logistic regression model that included eight predictors for 30-day mortality after an acute myocardial infarction. Random samples with a size between n = 572 and n = 9165 were drawn from a large data set (GUSTO-I; n = 40,830; 2851 deaths) to reflect modeling in data sets with between 5 and 80 events per variable. Independent performance was determined on the remaining subjects. Performance measures included discriminative ability, calibration and overall accuracy. We found that split-sample analyses gave overly pessimistic estimates of performance, with large variability. Cross-validation on 10% of the sample had low bias and low variability, but was not suitable for all performance measures. Internal validity could best be estimated with bootstrapping, which provided stable estimates with low bias. We conclude that split-sample validation is inefficient, and recommend bootstrapping for estimation of internal validity of a predictive logistic regression model.
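The bootstrap procedure recommended in this abstract can be sketched in a few lines. The following is a minimal illustration on synthetic data (not the GUSTO-I data set; the sample size, predictors, and coefficients are invented), using scikit-learn's LogisticRegression and the AUC as the performance measure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a clinical data set (all values hypothetical).
n, p = 500, 8
X = rng.normal(size=(n, p))
logit = X @ rng.normal(scale=0.5, size=p) - 1.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def fit_auc(X_fit, y_fit, X_eval, y_eval):
    model = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)
    return roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1])

apparent = fit_auc(X, y, X, y)  # optimistic: evaluated on the training data

# Bootstrap estimate of optimism.
optimisms = []
for _ in range(200):
    idx = rng.integers(0, n, size=n)          # resample with replacement
    Xb, yb = X[idx], y[idx]
    if yb.min() == yb.max():                  # skip degenerate resamples
        continue
    boot_apparent = fit_auc(Xb, yb, Xb, yb)   # performance in bootstrap sample
    boot_test = fit_auc(Xb, yb, X, y)         # performance in original sample
    optimisms.append(boot_apparent - boot_test)

corrected = apparent - np.mean(optimisms)
print(f"apparent AUC {apparent:.3f}, optimism-corrected AUC {corrected:.3f}")
```

Each bootstrap replicate is fitted on a resample and evaluated both on that resample and on the original data; the average gap between the two is the optimism, which is subtracted from the apparent performance.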


The Lancet | 2008

Effectiveness of antipsychotic drugs in first-episode schizophrenia and schizophreniform disorder: an open randomised clinical trial

René S. Kahn; W. Wolfgang Fleischhacker; Han Boter; Michael Davidson; Yvonne Vergouwe; Ireneus P. M. Keet; Mihai D. Gheorghe; Janusz K. Rybakowski; Silvana Galderisi; Jan Libiger; Martina Hummer; Sonia Dollfus; Juan José López-Ibor; Luchezar G. Hranov; Wolfgang Gaebel; Joseph Peuskens; Nils Lindefors; Anita Riecher-Rössler; Diederick E. Grobbee

BACKGROUND Second-generation antipsychotic drugs were introduced over a decade ago for the treatment of schizophrenia; however, their purported clinical effectiveness compared with first-generation antipsychotic drugs is still debated. We aimed to compare the effectiveness of second-generation antipsychotic drugs with that of a low dose of haloperidol, in first-episode schizophrenia. METHODS We did an open randomised controlled trial of haloperidol versus second-generation antipsychotic drugs in 50 sites, in 14 countries. Eligible patients were aged 18-40 years, and met diagnostic criteria for schizophrenia, schizophreniform disorder, or schizoaffective disorder. 498 patients were randomly assigned by a web-based online system to haloperidol (1-4 mg per day; n=103), amisulpride (200-800 mg per day; n=104), olanzapine (5-20 mg per day; n=105), quetiapine (200-750 mg per day; n=104), or ziprasidone (40-160 mg per day; n=82); follow-up was at 1 year. The primary outcome measure was all-cause treatment discontinuation. Patients and their treating physicians were not blinded to the assigned treatment. Analysis was by intention to treat. This study is registered as an International Standard Randomised Controlled Trial, number ISRCTN68736636. FINDINGS The number of patients who discontinued treatment for any cause within 12 months was 63 (Kaplan-Meier estimate 72%) for haloperidol, 32 (40%) for amisulpride, 30 (33%) for olanzapine, 51 (53%) for quetiapine, and 31 (45%) for ziprasidone. Comparisons with haloperidol showed lower risks for any-cause discontinuation with amisulpride (hazard ratio [HR] 0.37, [95% CI 0.24-0.57]), olanzapine (HR 0.28 [0.18-0.43]), quetiapine (HR 0.52 [0.35-0.76]), and ziprasidone (HR 0.51 [0.32-0.81]). However, symptom reductions were virtually the same in all the groups, at around 60%. 
INTERPRETATION This pragmatic trial suggests that clinically meaningful antipsychotic treatment of first-episode schizophrenia is achievable for at least 1 year. However, we cannot conclude that second-generation drugs are more efficacious than haloperidol, since discontinuation rates are not necessarily consistent with symptomatic improvement.


BMJ | 2009

Prognosis and prognostic research: validating a prognostic model.

Douglas G. Altman; Yvonne Vergouwe; Patrick Royston; Karel G.M. Moons

Prognostic models are of little clinical value unless they are shown to work in other samples. Douglas Altman and colleagues describe how to validate models and discuss some of the problems


BMJ | 2009

Prognosis and prognostic research: Developing a prognostic model

Patrick Royston; Karel G.M. Moons; Douglas G. Altman; Yvonne Vergouwe

In the second article in their series, Patrick Royston and colleagues describe different approaches to building clinical prognostic models


BMJ | 2009

Prognosis and prognostic research: what, why, and how?

Karel G.M. Moons; Patrick Royston; Yvonne Vergouwe; Diederick E. Grobbee; Douglas G. Altman

Doctors have little specific research to draw on when predicting outcome. In this first article in a series Karel Moons and colleagues explain why research into prognosis is important and how to design such research


BMJ | 2009

Prognosis and prognostic research: application and impact of prognostic models in clinical practice

Karel G.M. Moons; Douglas G. Altman; Yvonne Vergouwe; Patrick Royston

An accurate prognostic model is of no benefit if it is not generalisable or doesn’t change behaviour. In the last article in their series Karel Moons and colleagues discuss how to determine the practical value of models


Heart | 2012

Risk prediction models: I. Development, internal validation, and assessing the incremental value of a new (bio)marker

Karel G.M. Moons; Andre Pascal Kengne; Mark Woodward; Patrick Royston; Yvonne Vergouwe; Douglas G. Altman; Diederick E. Grobbee

Prediction models are increasingly used to complement clinical reasoning and decision making in modern medicine in general, and in the cardiovascular domain in particular. Developed models first and foremost need to provide accurate and (internally and externally) validated estimates of probabilities of specific health conditions or outcomes in targeted patients. The adoption of such models must guide physicians' decision making and individuals' behaviour, and consequently improve individual outcomes and the cost-effectiveness of care. In a series of two articles we review the consecutive steps generally advocated for risk prediction model research. This first article focuses on the different aspects of model development studies, from design to reporting, how to estimate a model's predictive performance and the potential optimism in these estimates using internal validation techniques, and how to quantify the added or incremental value of new predictors or biomarkers (of whatever type) to existing predictors. Each step is illustrated with empirical examples from the cardiovascular field.
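The incremental value of a new (bio)marker is often summarised as the change in the c-statistic when the marker is added to the established predictors. A minimal sketch on simulated data (the predictor names and effect sizes are invented for illustration and are not taken from the paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical cohort: two established predictors plus a candidate biomarker.
n = 2000
age = rng.normal(60, 10, n)
sbp = rng.normal(140, 20, n)
marker = rng.normal(0, 1, n)
logit = 0.04 * (age - 60) + 0.02 * (sbp - 140) + 0.6 * marker - 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

base = np.column_stack([age, sbp])
extended = np.column_stack([age, sbp, marker])

def auc(X):
    m = LogisticRegression(max_iter=1000).fit(X, y)
    return roc_auc_score(y, m.predict_proba(X)[:, 1])

auc_base, auc_ext = auc(base), auc(extended)
print(f"c-statistic base {auc_base:.3f} -> with marker {auc_ext:.3f}")
```

Note that these are apparent c-statistics; in practice the comparison should itself be corrected for optimism, for example with the bootstrap.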


Heart | 2012

Risk prediction models: II. External validation, model updating, and impact assessment

Karel G.M. Moons; Andre Pascal Kengne; Diederick E. Grobbee; Patrick Royston; Yvonne Vergouwe; Douglas G. Altman; Mark Woodward

Clinical prediction models are increasingly used to complement clinical reasoning and decision-making in modern medicine, in general, and in the cardiovascular domain, in particular. To these ends, developed models first and foremost need to provide accurate and (internally and externally) validated estimates of probabilities of specific health conditions or outcomes in the targeted individuals. Subsequently, the adoption of such models by professionals must guide their decision-making, and improve patient outcomes and the cost-effectiveness of care. In the first paper of this series of two companion papers, issues relating to prediction model development, their internal validation, and estimating the added value of a new (bio)marker to existing predictors were discussed. In this second paper, an overview is provided of the consecutive steps for the assessment of the model's predictive performance in new individuals (external validation studies), how to adjust or update existing models to local circumstances or with new predictors, and how to investigate the impact of the uptake of prediction models on clinical decision-making and patient outcomes (impact studies). Each step is illustrated with empirical examples from the cardiovascular field.
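A first step in external validation and model updating is to examine calibration-in-the-large (mean predicted versus observed risk) and the calibration slope of the original linear predictor in the new population. A sketch under invented assumptions (a hypothetical three-predictor model developed in one setting and validated in a lower-risk setting):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def simulate(n, intercept):
    # Hypothetical population with three predictors and fixed effects.
    X = rng.normal(size=(n, 3))
    logit = X @ np.array([0.8, -0.5, 0.3]) + intercept
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    return X, y

# Develop the model in one setting ...
X_dev, y_dev = simulate(3000, -1.0)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

# ... then validate it in a new setting with lower baseline risk.
X_val, y_val = simulate(3000, -2.0)
lp = X_val @ model.coef_.ravel() + model.intercept_  # original linear predictor
pred = 1 / (1 + np.exp(-lp))

# Calibration slope: regress the observed outcome on the linear predictor.
recal = LogisticRegression(max_iter=1000).fit(lp.reshape(-1, 1), y_val)
slope = recal.coef_[0, 0]  # ideal value: 1

print(f"observed rate {y_val.mean():.3f}, mean predicted {pred.mean():.3f}")
print(f"calibration slope {slope:.2f}")
```

Here the model overpredicts in the new, lower-risk setting while the slope stays near 1, so the simplest update, re-estimating only the intercept while keeping the original coefficients, would suffice.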


The Lancet | 2013

Anatomical and clinical characteristics to guide decision making between coronary artery bypass surgery and percutaneous coronary intervention for individual patients: development and validation of SYNTAX score II

Vasim Farooq; David van Klaveren; Ewout W. Steyerberg; Emanuele Meliga; Yvonne Vergouwe; Alaide Chieffo; Arie Pieter Kappetein; Antonio Colombo; David R. Holmes; Michael J. Mack; Ted Feldman; Marie Claude Morice; Elisabeth Ståhle; Yoshinobu Onuma; Marie Angèle Morel; Hector M. Garcia-Garcia; Gerrit Anne van Es; Keith D. Dawkins; Friedrich W. Mohr; Patrick W. Serruys

BACKGROUND The anatomical SYNTAX score is advocated in European and US guidelines as an instrument to help clinicians decide the optimum revascularisation method in patients with complex coronary artery disease. The absence of an individualised approach and of clinical variables to guide decision making between coronary artery bypass graft surgery (CABG) and percutaneous coronary intervention (PCI) are limitations of the SYNTAX score. SYNTAX score II aimed to overcome these limitations. METHODS SYNTAX score II was developed by applying a Cox proportional hazards model to results of the randomised all comers SYNTAX trial (n=1800). Baseline features with strong associations to 4-year mortality in either the CABG or the PCI settings (interactions), or in both (predictive accuracy), were added to the anatomical SYNTAX score. Comparisons of 4-year mortality predictions between CABG and PCI were made for each patient. Discriminatory performance was quantified by concordance statistics and internally validated with bootstrap resampling. External validation was done in the multinational all comers DELTA registry (n=2891), a heterogeneous population that included patients with three-vessel disease (26%) or complex coronary artery disease (anatomical SYNTAX score ≥33, 30%) who underwent CABG or PCI. The SYNTAX trial is registered with ClinicalTrials.gov, number NCT00114972. FINDINGS SYNTAX score II contained eight predictors: anatomical SYNTAX score, age, creatinine clearance, left ventricular ejection fraction (LVEF), presence of unprotected left main coronary artery (ULMCA) disease, peripheral vascular disease, female sex, and chronic obstructive pulmonary disease (COPD). SYNTAX score II significantly predicted a difference in 4-year mortality between patients undergoing CABG and those undergoing PCI (p(interaction) 0·0037). 
To achieve similar 4-year mortality after CABG or PCI, younger patients, women, and patients with reduced LVEF required lower anatomical SYNTAX scores, whereas older patients, patients with ULMCA disease, and those with COPD required higher anatomical SYNTAX scores. Presence of diabetes was not important for decision making between CABG and PCI (p(interaction) 0·67). SYNTAX score II discriminated well in all patients who underwent CABG or PCI, with concordance indices for internal (SYNTAX trial) validation of 0·725 and for external (DELTA registry) validation of 0·716, which were substantially higher than for the anatomical SYNTAX score alone (concordance indices of 0·567 and 0·612, respectively). A nomogram was constructed that allowed an accurate individualised prediction of 4-year mortality in patients for whom CABG or PCI is proposed. INTERPRETATION Long-term (4-year) mortality in patients with complex coronary artery disease can be well predicted by a combination of anatomical and clinical factors in SYNTAX score II. SYNTAX score II can better guide decision making between CABG and PCI than the original anatomical SYNTAX score. FUNDING Boston Scientific Corporation.
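The concordance indices reported for the SYNTAX trial and DELTA registry validations can be computed, in the simplest form for survival data, as the fraction of comparable patient pairs in which the higher predicted risk goes with the earlier event. A minimal sketch (a simplified Harrell's c-index that ignores tied event times; the data are invented):

```python
import numpy as np

def c_index(time, event, risk):
    """Simplified Harrell's concordance index: among comparable pairs
    (one patient has an event while the other is still event-free),
    count the pairs where the earlier event has the higher predicted risk."""
    n = len(time)
    concordant = comparable = 0.0
    for i in range(n):
        if not event[i]:
            continue  # only observed events can anchor a comparable pair
        for j in range(n):
            if time[j] > time[i]:          # j still event-free at time[i]
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5      # ties in risk count half
    return concordant / comparable

# Tiny hypothetical example: higher risk should align with earlier events.
time = np.array([2.0, 5.0, 4.0, 8.0, 1.0])   # follow-up times
event = np.array([1, 0, 1, 1, 1])            # 1 = death observed, 0 = censored
risk = np.array([0.9, 0.1, 0.6, 0.2, 0.8])   # model-predicted risk
print(f"c-index {c_index(time, event, risk):.3f}")
```

A value of 0·5 corresponds to chance-level discrimination and 1·0 to perfect ranking; the 0·725 and 0·716 reported above therefore indicate good discrimination in both cohorts.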

Collaboration


Dive into Yvonne Vergouwe's collaborations.

Top Co-Authors

Ewout W. Steyerberg
Erasmus University Rotterdam

David van Klaveren
Erasmus University Rotterdam

Ruud G. Nijman
Boston Children's Hospital

Daan Nieboer
Erasmus University Rotterdam

J. Dik F. Habbema
Erasmus University Rotterdam