Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Kristel J.M. Janssen is active.

Publication


Featured research published by Kristel J.M. Janssen.


Journal of Clinical Epidemiology | 2010

Missing covariate data in medical research: to impute is better than to ignore.

Kristel J.M. Janssen; A. Rogier T. Donders; Frank E. Harrell; Yvonne Vergouwe; Qingxia Chen; Diederick E. Grobbee; Karel G.M. Moons

OBJECTIVE We compared popular methods to handle missing data with multiple imputation (a more sophisticated method that preserves data). STUDY DESIGN AND SETTING We used data of 804 patients with a suspicion of deep venous thrombosis (DVT). We studied three covariates to predict the presence of DVT: d-dimer level, difference in calf circumference, and history of leg trauma. We introduced missing values (missing at random) ranging from 10% to 90%. The risk of DVT was modeled with logistic regression for the three methods, that is, complete case analysis, exclusion of d-dimer level from the model, and multiple imputation. RESULTS Multiple imputation showed less bias in the regression coefficients of the three variables and more accurate coverage of the corresponding 90% confidence intervals than complete case analysis and dropping d-dimer level from the analysis. Multiple imputation showed unbiased estimates of the area under the receiver operating characteristic curve (0.88) compared with complete case analysis (0.77) and when the variable with missing values was dropped (0.65). CONCLUSION As this study shows that simple methods to deal with missing data can lead to seriously misleading results, we advise considering multiple imputation. The purpose of multiple imputation is not to create data, but to prevent the exclusion of observed data.
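
The abstract's point that multiple imputation preserves observed data hinges on the pooling step: each imputed dataset is analysed separately and the results are combined with Rubin's rules. A minimal sketch of that pooling for a single regression coefficient (the numbers are illustrative, not taken from the study):

```python
def pool_rubin(estimates, variances):
    """Pool m multiply-imputed estimates with Rubin's rules.

    estimates, variances: per-imputation point estimates and squared
    standard errors of one regression coefficient.
    Returns (pooled_estimate, total_variance).
    """
    m = len(estimates)
    q_bar = sum(estimates) / m                     # pooled point estimate
    w_bar = sum(variances) / m                     # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between-imputation variance
    t = w_bar + (1 + 1 / m) * b                    # total variance
    return q_bar, t

# e.g. a coefficient estimated in m = 3 imputed datasets (made-up numbers)
est, tot = pool_rubin([1.10, 1.05, 1.20], [0.040, 0.050, 0.045])
```

The total variance exceeds the average within-imputation variance, reflecting the extra uncertainty due to the missing values.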


BMJ | 2013

Diagnostic accuracy of conventional or age adjusted D-dimer cut-off values in older patients with suspected venous thromboembolism: systematic review and meta-analysis

Henrike J. Schouten; Geert-Jan Geersing; Huiberdine L. Koek; Nicolaas P.A. Zuithoff; Kristel J.M. Janssen; Renée A. Douma; Johannes J. M. van Delden; Karel G. M. Moons; Johannes B. Reitsma

Objective To review the diagnostic accuracy of D-dimer testing in older patients (>50 years) with suspected venous thromboembolism, using conventional or age adjusted D-dimer cut-off values. Design Systematic review and bivariate random effects meta-analysis. Data sources We searched Medline and Embase for studies published before 21 June 2012 and we contacted the authors of primary studies. Study selection Primary studies that enrolled older patients with suspected venous thromboembolism in whom D-dimer testing, using both conventional (500 µg/L) and age adjusted (age×10 µg/L) cut-off values, and reference testing were performed. For patients with a non-high clinical probability, 2×2 tables were reconstructed and stratified by age category and applied D-dimer cut-off level. Results 13 cohorts including 12 497 patients with a non-high clinical probability were included in the meta-analysis. The specificity of the conventional cut-off value decreased with increasing age, from 57.6% (95% confidence interval 51.4% to 63.6%) in patients aged 51-60 years to 39.4% (33.5% to 45.6%) in those aged 61-70, 24.5% (20.0% to 29.7%) in those aged 71-80, and 14.7% (11.3% to 18.6%) in those aged >80. Age adjusted cut-off values revealed higher specificities over all age categories: 62.3% (56.2% to 68.0%), 49.5% (43.2% to 55.8%), 44.2% (38.0% to 50.5%), and 35.2% (29.4% to 41.5%), respectively. Sensitivities of the age adjusted cut-off remained above 97% in all age categories. Conclusions The application of age adjusted cut-off values for D-dimer tests substantially increases specificity without modifying sensitivity, thereby improving the clinical utility of D-dimer testing in patients aged 50 or more with a non-high clinical probability.
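
The age adjusted rule evaluated here is simple arithmetic: age×10 µg/L for patients older than 50, otherwise the conventional 500 µg/L. A direct sketch (the function name is ours, not from the paper):

```python
def d_dimer_cutoff(age_years, age_adjusted=True):
    """D-dimer cut-off in µg/L: conventional 500, or age×10 for patients over 50."""
    if age_adjusted and age_years > 50:
        return age_years * 10
    return 500

# a 75-year-old: 750 µg/L age adjusted, versus the conventional 500 µg/L
```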


BMJ | 2009

Excluding venous thromboembolism using point of care D-dimer tests in outpatients: a diagnostic meta-analysis

Geert-Jan Geersing; Kristel J.M. Janssen; Ruud Oudega; L Bax; Arno W. Hoes; Johannes B. Reitsma; K. G. M. Moons

Objective To review the evidence on the diagnostic accuracy of the currently available point of care D-dimer tests for excluding venous thromboembolism. Design Systematic review of research on the accuracy of point of care D-dimer tests, using bivariate regression to examine sources of variation and to estimate sensitivity and specificity. Data sources Studies on the diagnostic accuracy of point of care D-dimer tests published between January 1995 and September 2008 and available in either Medline or Embase. Review methods The analysis included studies that compared point of care D-dimer tests with predefined reference criteria for venous thromboembolism, enrolled consecutive outpatients, and allowed for construction of a 2×2 table. Results 23 studies (total number of patients 13 959, range in mean age 38-65 years, range of venous thromboembolism prevalence 4-51%) were included in the meta-analysis. The studies reported two qualitative point of care D-dimer tests (SimpliRED D-dimer (n=12) and Clearview Simplify D-dimer (n=7)) and two quantitative point of care D-dimer tests (Cardiac D-dimer (n=4) and Triage D-dimer (n=2)). Overall sensitivity ranged from 0.85 (95% confidence interval 0.78 to 0.90) to 0.96 (0.91 to 0.98) and overall specificity from 0.48 (0.33 to 0.62) to 0.74 (0.69 to 0.78). The two quantitative tests Cardiac D-dimer and Triage D-dimer scored most favourably. Conclusions In outpatients suspected of venous thromboembolism, point of care D-dimer tests can contribute important information and guide patient management, notably in low risk patients (that is, those patients with a low score on a clinical decision rule).
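
The review required that each study allow reconstruction of a 2×2 table. As a reminder of how sensitivity and specificity fall out of such a table (a generic sketch, not the paper's bivariate regression model):

```python
def accuracy_from_2x2(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 table of index test vs reference standard.

    tp/fn: diseased patients with a positive/negative index test
    fp/tn: non-diseased patients with a positive/negative index test
    """
    sensitivity = tp / (tp + fn)   # P(test positive | disease present)
    specificity = tn / (tn + fp)   # P(test negative | disease absent)
    return sensitivity, specificity

# e.g. 90 true positives, 130 false positives, 10 false negatives, 270 true negatives
sens, spec = accuracy_from_2x2(90, 130, 10, 270)
```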


Atherosclerosis | 2010

Comparing coronary artery calcium and thoracic aorta calcium for prediction of all-cause mortality and cardiovascular events on low-dose non-gated computed tomography in a high-risk population of heavy smokers.

Peter C. Jacobs; Mathias Prokop; Yolanda van der Graaf; Martijn J. A. Gondrie; Kristel J.M. Janssen; Harry J. de Koning; Ivana Išgum; Rob J. van Klaveren; Matthijs Oudkerk; Bram van Ginneken; Willem P. Th. M. Mali

BACKGROUND Coronary artery calcium (CAC) and thoracic aorta calcium (TAC) can be detected simultaneously on low-dose, non-gated computed tomography (CT) scans. CAC has been shown to predict cardiovascular (CVD) and coronary (CHD) events. A comparable association between TAC and CVD events has yet to be established, but TAC could be a more reproducible alternative to CAC in low-dose, non-gated CT. This study compared CAC and TAC as independent predictors of all-cause mortality and cardiovascular events in a population of heavy smokers using low-dose, non-gated CT. METHODS Within the NELSON study, a population-based lung cancer screening trial, the CT screen group consisted of 7557 heavy smokers aged 50-75 years. Using a case-cohort study design, CAC and TAC scores were calculated in a total of 958 asymptomatic subjects who were followed up for all-cause death, and CVD, CHD and non-cardiac events (stroke, aortic aneurysm, peripheral arterial occlusive disease). We used Cox proportional-hazard regression to compute hazard ratios (HRs) with adjustment for traditional cardiovascular risk factors. RESULTS A close association between the prevalence of TAC and increasing levels of CAC was established (p<0.001). Increasing CAC and TAC risk categories were associated with all-cause mortality (p for trend=0.01 and 0.001, respectively) and CVD events (p for trend <0.001 and 0.03, respectively). Compared with the lowest quartile (reference category), multivariate-adjusted HRs across categories of CAC were higher (all-cause mortality, HR: 9.13 for highest quartile; CVD events, HR: 4.46 for highest quartile) than of TAC scores (HR: 5.45 and HR: 2.25, respectively). However, TAC was associated with non-coronary events (HR: 4.69 for highest quartile, p for trend=0.01), whereas CAC was not (HR: 3.06 for highest quartile, p for trend=0.40).
CONCLUSIONS CAC was found to be a stronger predictor than TAC of all-cause mortality and CVD events in a high-risk population of heavy smokers scored on low-dose, non-gated CT. TAC, however, was more strongly associated with non-cardiac events than CAC and could prove to be the preferred marker for these events.


Anesthesia & Analgesia | 2008

The risk of severe postoperative pain: modification and validation of a clinical prediction rule.

Kristel J.M. Janssen; Cor J. Kalkman; Diederick E. Grobbee; Gouke J. Bonsel; Karel G.M. Moons; Yvonne Vergouwe

BACKGROUND: Recently, a prediction rule was developed to preoperatively predict the risk of severe pain in the first postoperative hour in surgical inpatients. We aimed to modify the rule to enhance its use in both surgical inpatients and outpatients (ambulatory patients). Subsequently, we prospectively tested the modified rule in patients who underwent surgery later in time and in another hospital (external validation). METHODS: The rule was originally developed from the data of 1395 adult inpatients. We modified the rule with the data of 549 outpatients who underwent surgery between 1997 and 1999 in the same center (Academic Medical Center Amsterdam, The Netherlands). Furthermore, we tested the performance of the modified rule in 1035 in- and outpatients who underwent surgery in 2004, in the University Medical Center Utrecht, The Netherlands (external validation). Performance was quantified by the rule's calibration (agreement between observed frequencies and predicted risks) and discrimination (ability to distinguish between patients at high and low risk). RESULTS: Modification of the original rule to enhance prediction in outpatients included reclassification of the predictor “type of surgery,” addition of the predictor “surgical setting” (ambulatory surgery: yes/no) and addition of interaction terms between surgical setting and the other predictors. One-third of the patients in the Utrecht cohort reported severe postoperative pain (36%), compared to 62% of the patients in the Amsterdam cohort. The distribution of most predictors was similar in the two cohorts, although the patients in the Utrecht cohort were slightly older, more often underwent ambulatory surgery and had large expected incision sizes less often than patients in the Amsterdam cohort. The modified prediction rule showed good calibration, when an adjusted intercept was used for the lower incidence in the Utrecht cohort.
The discrimination was reasonable (area under the Receiver Operating Characteristic curve 0.65 [95% confidence interval 0.57–0.73]). CONCLUSIONS: A previously developed prediction rule to predict severe postoperative pain was modified to allow use in both inpatients and outpatients. By validating the rule in patients who underwent surgery several years later in another hospital, it was shown that the rule could be generalized in time and place. We demonstrated that, instead of deriving new prediction rules for new populations, a simple adjustment may be enough to recalibrate prediction rules for new populations. This is in line with the perception that external validation and updating of prediction rules is a continuing and multistage process.
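
Discrimination here is summarized by the area under the receiver operating characteristic curve, which equals the probability that a randomly chosen patient with severe pain receives a higher predicted risk than a randomly chosen patient without. A small rank-based sketch of that interpretation (illustrative only, not the study's computation):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve as the probability that a randomly chosen
    case scores higher than a randomly chosen non-case (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# perfect separation gives 1.0; a model no better than chance hovers around 0.5
```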


Canadian Journal of Anesthesia / Journal canadien d'anesthésie | 2009

A simple method to adjust clinical prediction models to local circumstances

Kristel J.M. Janssen; Yvonne Vergouwe; Cor J. Kalkman; Diederick E. Grobbee; Karel G.M. Moons

Introduction Clinical prediction models estimate the risk of having or developing a particular outcome or disease. Researchers often develop a new model when a previously developed model is validated and the performance is poor. However, the model can be adjusted (updated) using the new data. The updated model is then based on both the development and validation data. We show how a simple updating method may suffice to update a clinical prediction model. Methods A prediction model that preoperatively predicts the risk of severe postoperative pain was developed with multivariable logistic regression from the data of 1944 surgical patients in the Academic Medical Center Amsterdam, the Netherlands. We studied the predictive performance of the model in 1035 new patients, scheduled for surgery at a later time in the University Medical Center Utrecht, the Netherlands. We assessed the calibration (agreement between predicted risks and the observed frequencies of an outcome) and discrimination (ability of the model to distinguish between patients with and without postoperative pain). When the incidence of the outcome is different, all predicted risks may be systematically over- or underestimated. Hence, the intercept of the model can be adjusted (updated). Results The predicted risks were systematically higher than the observed frequencies, corresponding to a difference in the incidence of postoperative pain between the development (62%) and validation set (36%). The updated model resulted in better calibration. Discussion When a clinical prediction model does not show adequate performance in new patients, an alternative to developing a new model is to update the prediction model with new data. The updated model will be based on more patient data, and may yield better risk estimates.
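
The simple updating method described above adjusts only the model intercept, so that predicted risks line up with the incidence observed in the validation set. One way to sketch this idea, finding the intercept shift that makes the mean predicted risk equal the observed incidence (the bisection approach and function names are ours, a simplification of the paper's estimation):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def recalibrate_intercept(linear_predictors, observed_rate, lo=-10.0, hi=10.0):
    """Find the intercept shift delta so that the mean predicted risk matches
    the outcome incidence observed in the validation set."""
    def mean_risk(delta):
        return sum(logistic(lp + delta) for lp in linear_predictors) / len(linear_predictors)
    # mean_risk is monotone in delta, so bisection converges
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if mean_risk(mid) < observed_rate:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

With this shift added to every linear predictor, the model's average predicted risk equals the new setting's incidence, which is exactly the kind of systematic over-estimation (62% development vs 36% validation incidence) the updated intercept corrects.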


Journal of Vascular Surgery | 2008

The Glasgow Aneurysm Score as a tool to predict 30-day and 2-year mortality in the patients from the Dutch Randomized Endovascular Aneurysm Management trial

Annette F. Baas; Kristel J.M. Janssen; Monique Prinssen; Eric Buskens; Jan D. Blankensteijn

OBJECTIVE Randomized trials have shown that endovascular repair (EVAR) of an abdominal aortic aneurysm (AAA) has a lower perioperative mortality than conventional open repair (OR). However, this initial survival advantage disappears after 1 year. To make EVAR cost-effective, patient selection should be improved. The Glasgow Aneurysm Score (GAS) estimates preoperative risk profiles that predict perioperative outcomes after OR. It was recently shown to predict perioperative and long-term mortality after EVAR as well. Here, we applied the GAS to patients from the Dutch Randomized Endovascular Aneurysm Repair (DREAM) trial and compared the applicability of the GAS between open repair and EVAR. METHODS A multicenter, randomized trial was conducted to compare OR with EVAR in 345 AAA patients. The GAS was calculated (age + [7 points for myocardial disease] + [10 points for cerebrovascular disease] + [14 points for renal disease]). Optimal cutoff values were determined, and test characteristics for 30-day and 2-year mortality were computed. RESULTS The mean GAS was 74.7 +/- 9.3 for OR patients and 75.9 +/- 9.7 for EVAR patients. Two EVAR patients and eight OR patients died < or =30 days postoperatively. The area under the receiver-operator characteristic curve (AUC) was 0.79 for OR patients and 0.87 for EVAR patients. The optimal GAS cutoff value was 75.5 for OR and 86.5 for EVAR. By 2 years postoperatively, 18 patients had died in both the EVAR and the OR patient groups. The AUC was 0.74 for OR patients and 0.78 for EVAR patients. The optimal GAS cutoff value was 74.5 for OR and 77.5 for EVAR. CONCLUSION This is the first evaluation of the GAS in a randomized trial comparing AAA patients treated with OR and EVAR. The GAS can be used for prediction of 30-day and 2-year mortality in both OR and EVAR, but in patients that are suitable for both procedures, it is a better predictor for EVAR than for OR patients. 
In this study, the GAS was most valuable in identifying low-risk patients but not very useful for the identification of the small number of high-risk patients.
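
The GAS formula quoted in the abstract is directly computable; a sketch below transcribes it as given (note the published score also includes a term for preoperative shock, which the abstract's formula omits):

```python
def glasgow_aneurysm_score(age, myocardial=False, cerebrovascular=False, renal=False):
    """Glasgow Aneurysm Score as quoted in the abstract:
    age + 7 (myocardial disease) + 10 (cerebrovascular disease) + 14 (renal disease)."""
    return age + 7 * myocardial + 10 * cerebrovascular + 14 * renal

# a 70-year-old with myocardial disease only scores 77,
# near the optimal 30-day cut-off of 75.5 reported for open repair
```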


BMJ | 2012

Validation of two age dependent D-dimer cut-off values for exclusion of deep vein thrombosis in suspected elderly patients in primary care: retrospective, cross sectional, diagnostic analysis

Henrike J. Schouten; Huiberdine L. Koek; Ruud Oudega; Geert-Jan Geersing; Kristel J.M. Janssen; Johannes J. M. van Delden; Karel G. M. Moons

Objective To determine whether the use of age adapted D-dimer cut-off values can be translated to primary care patients who are suspected of deep vein thrombosis. Design Retrospective, cross sectional diagnostic study. Setting 110 primary care doctors affiliated with three hospitals in the Netherlands. Participants 1374 consecutive patients (936 (68.1%) aged >50 years) with clinically suspected deep vein thrombosis. Main outcome measures Proportion of patients with D-dimer values below two proposed age adapted cut-off levels (age in years×10 μg/L in patients aged >50 years, or 750 μg/L in patients aged ≥60 years), in whom deep vein thrombosis could be excluded; and the number of false negative results. Results Using the Wells score, 647 patients had an unlikely clinical probability of deep vein thrombosis. In these patients (at all ages), deep vein thrombosis could be excluded in 309 (47.8%) using the age dependent cut-off value compared with 272 (42.0%) using the conventional cut-off value of 500 μg/L (increase 5.7%, 95% confidence interval 4.1% to 7.8%). This exclusion rate resulted in 0.5% and 0.3% false negative cases, respectively (increase 0.2%, 0.004% to 8.6%). The increase in exclusion rate by using the age dependent cut-off value was highest in the oldest patients. In patients older than 80 years, deep vein thrombosis could be safely excluded in 22 (35.5%) patients using the age dependent cut-off value compared with 13 (21.0%) using the conventional cut-off value (increase 14.5%, 6.8% to 25.8%). Compared with the age dependent cut-off value, the cut-off value of 750 μg/L had a similar exclusion rate (307 (47.4%) patients) and false negative rate (0.3%).
Conclusions Combined with a low clinical probability of deep vein thrombosis, use of the age dependent D-dimer cut-off value for patients older than 50 years or the cut-off value of 750 μg/L for patients aged 60 years and older resulted in a considerable increase in the proportion of patients in primary care in whom deep vein thrombosis could be safely excluded, compared with the conventional cut-off value of 500 μg/L.
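
The two proposed cut-offs compared in this study are both simple functions of age; a sketch returning each next to the conventional 500 µg/L baseline (function name is ours):

```python
def dvt_cutoffs(age_years):
    """The two age adapted D-dimer cut-offs (µg/L) evaluated in the study.

    Returns (age_dependent, fixed_750): age×10 for patients older than 50,
    and a fixed 750 for patients aged 60 and older; below those ages both
    fall back to the conventional 500 µg/L.
    """
    age_dependent = age_years * 10 if age_years > 50 else 500
    fixed_750 = 750 if age_years >= 60 else 500
    return age_dependent, fixed_750

# an 82-year-old: 820 µg/L under the age dependent rule, 750 µg/L under the fixed rule
```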


Statistics in Medicine | 2008

Multiple imputation to correct for partial verification bias revisited.

J. A. H. de Groot; Kristel J.M. Janssen; Aeilko H. Zwinderman; Karel G.M. Moons; Johannes B. Reitsma

Partial verification refers to the situation where a subset of patients is not verified by the reference (gold) standard and is excluded from the analysis. If partial verification is present, the observed (naive) measures of accuracy such as sensitivity and specificity are most likely to be biased. Recently, Harel and Zhou showed that partial verification can be considered as a missing data problem and that multiple imputation (MI) methods can be used to correct for this bias. They claim that even in simple situations where the verification is random within strata of the index test results, the so-called Begg and Greenes (B&G) correction method underestimates sensitivity and overestimates specificity as compared with the MI method. However, we were able to demonstrate that the B&G method produces results similar to those of MI, and that the claimed difference was caused by a computational error. Additional research is needed to better understand which correction methods should be preferred in more complex scenarios of missing reference test outcome in diagnostic research.
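
The Begg and Greenes correction discussed here assumes verification depends only on the index test result, so the disease probability among the verified in each test stratum can be extrapolated to all tested patients. A sketch of that correction (function and parameter names are ours):

```python
def begg_greenes(n_pos, v_pos, d_pos, n_neg, v_neg, d_neg):
    """Begg & Greenes correction for partial verification bias, assuming
    verification depends only on the index test result.

    n_pos/n_neg: all patients with a positive/negative index test
    v_pos/v_neg: how many of them were verified by the reference standard
    d_pos/d_neg: how many of the verified were diseased
    Returns (corrected_sensitivity, corrected_specificity).
    """
    p_d_tpos = d_pos / v_pos          # P(disease | test positive), from the verified
    p_d_tneg = d_neg / v_neg          # P(disease | test negative), from the verified
    diseased = n_pos * p_d_tpos + n_neg * p_d_tneg
    healthy = n_pos * (1 - p_d_tpos) + n_neg * (1 - p_d_tneg)
    sens = n_pos * p_d_tpos / diseased
    spec = n_neg * (1 - p_d_tneg) / healthy
    return sens, spec
```

With full verification the correction reproduces the naive estimates; when only a fraction of test negatives are verified at random, the corrected estimates stay the same while the naive ones would drift.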


BMJ | 2011

Verification problems in diagnostic accuracy studies: consequences and solutions

Joris A. H. de Groot; Patrick M. Bossuyt; Johannes B. Reitsma; Anne Wilhelmina Saskia Rutjes; Nandini Dendukuri; Kristel J.M. Janssen; Karel G.M. Moons

The accuracy of a diagnostic test or combination of tests (such as in a diagnostic model) is the ability to correctly identify patients with or without the target disease. In studies of diagnostic accuracy, the results of the test or model under study are verified by comparing them with results of a reference standard, applied to the same patients, to verify disease status (see first panel in the figure).1 Measures such as predictive values, post-test probabilities, ROC (receiver operating characteristic) curves, sensitivity, specificity, likelihood ratios, and odds ratios express how well the results of an index test agree with the outcome of the reference standard.2 Biased and exaggerated estimates of diagnostic accuracy can lead to inefficiencies in diagnostic testing in practice, unnecessary costs, and physicians making incorrect treatment decisions.

Figure: Diagnostic accuracy studies with (a) complete verification by the same reference standard, (b) partial verification, or (c) differential verification.

The reference standard ideally provides error-free classification of the presence or absence of disease. In some cases, it is not possible to verify the definitive presence or absence of disease in all patients with the (single) reference standard, which may result in bias. In this paper, we describe the most important types of disease verification problems using examples from published diagnostic accuracy studies. We also propose solutions to alleviate the associated biases. Often not all study subjects who undergo the index test receive the reference standard, leading to missing data on the disease outcome (see middle panel in the figure). The bias associated with such situations of partial verification is known as partial verification bias, work-up bias, or referral bias.3 4 5

Clinical examples of partial verification

Various mechanisms can lead to partial verification (see examples in table 1).

Table 1: Examples of diagnostic accuracy studies with problems in disease verification

When the condition of interest …

Collaboration


Dive into Kristel J.M. Janssen's collaborations.

Top Co-Authors

Yvonne Vergouwe

Erasmus University Rotterdam
