Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Maarten van Smeden is active.

Publication


Featured research published by Maarten van Smeden.


BMJ | 2013

Value of composite reference standards in diagnostic research

Christiana A. Naaktgeboren; Loes C. M. Bertens; Maarten van Smeden; Joris A. H. de Groot; Karel G.M. Moons; Johannes B. Reitsma

Combining several tests into a composite reference standard is a common way to improve the final classification of disease status in diagnostic accuracy studies, but the practice is often applied ambiguously. This article gives advice on the proper use and reporting of composite reference standards.


Annals of Internal Medicine | 2013

Evaluating Diagnostic Accuracy in the Face of Multiple Reference Standards

Christiana A. Naaktgeboren; Maarten van Smeden; Johannes B. Reitsma

A universal challenge in studies that quantify the accuracy of diagnostic tests is establishing whether each participant has the disease of interest. Ideally, the same preferred reference standard would be used for all participants; however, for practical or ethical reasons, alternative reference standards that are often less accurate are frequently used instead. The use of different reference standards across participants in a single study is known as differential verification. Differential verification can cause severely biased accuracy estimates of the test or model being studied. Many variations of differential verification exist, but not all introduce the same risk of bias. A risk-of-bias assessment requires detailed information about which participants receive which reference standards and an estimate of the accuracy of the alternative reference standard. This article classifies types of differential verification and explores how they can lead to bias. It also provides guidance on how to report results and assess the risk of bias when differential verification occurs and highlights potential ways to correct for the bias.
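For readers who want to see the mechanism, the short Python sketch below works through a hypothetical example of differential verification. All numbers, including the accuracy of the imperfect alternative reference standard, are assumptions chosen for illustration and are not taken from the article: index-test positives receive a perfect reference standard, index-test negatives an imperfect one, and the apparent accuracy of the index test is then computed against the resulting disease classification.

    # Hypothetical illustration of differential verification bias.
    # All numbers below are assumptions chosen for illustration only.
    n = 100_000                            # cohort size
    prev = 0.10                            # disease prevalence
    se_index, sp_index = 0.80, 0.90        # true accuracy of the index test
    se_alt, sp_alt = 0.60, 0.95            # accuracy of the imperfect alternative reference

    diseased = n * prev
    healthy = n - diseased

    # Expected index test results
    tp = diseased * se_index               # diseased, index positive
    fn = diseased - tp                     # diseased, index negative
    fp = healthy * (1 - sp_index)          # healthy, index positive
    tn = healthy - fp                      # healthy, index negative

    # Differential verification: index-positives get the preferred (perfect)
    # reference standard; index-negatives get the imperfect alternative.
    ref_pos = tp + fn * se_alt + tn * (1 - sp_alt)   # classified as diseased
    ref_neg = fp + fn * (1 - se_alt) + tn * sp_alt   # classified as healthy

    apparent_se = tp / ref_pos
    apparent_sp = (fn * (1 - se_alt) + tn * sp_alt) / ref_neg

    print(f"true sensitivity {se_index:.2f}, apparent {apparent_se:.2f}")
    print(f"true specificity {sp_index:.2f}, apparent {apparent_sp:.2f}")

With these assumed values the apparent sensitivity falls well below the true sensitivity, illustrating how the choice of who receives which reference standard can drive the direction and size of the bias.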


BMC Medical Research Methodology | 2016

No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.

Maarten van Smeden; Joris A. H. de Groot; Karel G. M. Moons; Gary S. Collins; Douglas G. Altman; Marinus J.C. Eijkemans; Johannes B. Reitsma

Background: Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for substantial differences between these extensive simulation studies.

Methods: The current study uses Monte Carlo simulations to evaluate small-sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth’s correction, are compared.

Results: The results show that besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect (‘separation’). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth’s correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation.

Conclusions: The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
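The flavour of such a Monte Carlo experiment can be sketched in a few lines of Python. The scenario below (sample size, true coefficients, and the threshold used to flag quasi-separation) is an illustrative assumption, not the paper's actual simulation design:

    import numpy as np

    rng = np.random.default_rng(1)

    def fit_logit(X, y, max_iter=50, tol=1e-8):
        # Plain Newton-Raphson maximum-likelihood fit for logistic regression.
        beta = np.zeros(X.shape[1])
        for _ in range(max_iter):
            eta = np.clip(X @ beta, -30, 30)            # avoid numerical overflow
            p = 1.0 / (1.0 + np.exp(-eta))
            grad = X.T @ (y - p)
            hess = X.T @ (X * (p * (1.0 - p))[:, None])
            try:
                step = np.linalg.solve(hess, grad)
            except np.linalg.LinAlgError:
                return beta, False                      # singular Hessian (e.g. separation)
            beta = beta + step
            if np.max(np.abs(step)) < tol:
                return beta, True
        return beta, False

    # Illustrative low-EPV scenario (assumed values): one predictor, true coefficient 1,
    # n = 30 with roughly 20% events, i.e. about 6 events per variable.
    n, true_beta = 30, np.array([-1.5, 1.0])
    estimates, flagged = [], 0
    for _ in range(2000):
        x = rng.normal(size=n)
        X = np.column_stack([np.ones(n), x])
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ true_beta))))
        beta, converged = fit_logit(X, y)
        if not converged or abs(beta[1]) > 10:          # crude flag for (quasi-)separation
            flagged += 1
        else:
            estimates.append(beta[1])

    print(f"mean ML estimate of the coefficient (true value 1.0): {np.mean(estimates):.2f}")
    print(f"data sets flagged as (quasi-)separated: {flagged} of 2000")

Note that simply excluding the flagged data sets, as done here for brevity, is itself one of the handling choices that the paper shows can substantially change simulation results.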


Statistics in Medicine | 2016

Bias due to composite reference standards in diagnostic accuracy studies

Ian Schiller; Maarten van Smeden; Alula Hadgu; Michael Libman; Johannes B. Reitsma; Nandini Dendukuri

Composite reference standards (CRSs) have been advocated in diagnostic accuracy studies in the absence of a perfect reference standard. The rationale is that combining results of multiple imperfect tests leads to a more accurate reference than any one test in isolation. Focusing on a CRS that classifies subjects as disease positive if at least one component test is positive, we derive algebraic expressions for the sensitivity and specificity of this CRS, the sensitivity and specificity of a new (index) test compared with this CRS, as well as the CRS-based prevalence. We use as a motivating example the problem of evaluating a new test for Chlamydia trachomatis, an asymptomatic disease for which no gold-standard test exists. As the number of component tests increases, the sensitivity of this CRS increases at the expense of specificity, unless all tests have perfect specificity. Therefore, such a CRS can lead to significantly biased accuracy estimates of the index test. The bias depends on disease prevalence and the accuracy of the CRS. Further, conditional dependence between the CRS and the index test can lead to over-estimation of the index test's accuracy. This commonly used CRS combines results from multiple imperfect tests in a way that ignores information and therefore is not guaranteed to improve over a single imperfect reference unless each component test has perfect specificity and the CRS is conditionally independent of the index test. When these conditions are not met, as in the case of C. trachomatis testing, more realistic statistical models should be researched instead of relying on such CRSs.
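To see where this sensitivity/specificity trade-off comes from, assume for illustration that the K component tests are conditionally independent given true disease status (a simplifying assumption made here only to keep the expressions short; it is not necessarily the setting analysed in the article). Under the "positive if at least one component is positive" rule, the CRS is negative only when every component is negative, so

    Se_CRS = 1 - \prod_{k=1}^{K} (1 - Se_k)
    Sp_CRS = \prod_{k=1}^{K} Sp_k

Each added component can therefore only increase Se_CRS and only decrease Sp_CRS unless every Sp_k equals 1, which is exactly the trade-off the abstract describes.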


Journal of Clinical Epidemiology | 2017

Series: Pragmatic trials and real world evidence: Paper 6. Outcome measures in the real world

Paco M. J. Welsing; Katrien Oude Rengerink; Sue Collier; Laurent Eckert; Maarten van Smeden; Antonio Ciaglia; Gaëlle Nachbaur; Sven Trelle; Aliki Taylor; Matthias Egger; Iris Goetz

Results from pragmatic trials should reflect the comparative treatment effects encountered in patients in real-life clinical practice to guide treatment decisions. Therefore, pragmatic trials should focus on outcomes that are relevant to patients, clinical practice, and treatment choices. This sixth article in the series (see Box) discusses different types of outcomes and their suitability for pragmatic trials, design choices for measuring these outcomes, and their implications and challenges. Measuring outcomes in pragmatic trials should not interfere with real-world clinical practice to ensure generalizability of trial results, and routinely collected outcomes should be prioritized. Typical outcomes include mortality, morbidity, functional status, well-being, and resource use. Surrogate endpoints are typically avoided as the primary outcome. It is important to measure outcomes over a relevant time horizon and obtain valid and precise results. As pragmatic trials are often open label, a less subjective outcome can reduce bias. Methods that decrease bias or enhance precision of the results, such as standardization and blinding of outcome assessment, should be considered when a high risk of bias or high variability is expected. The selection of outcomes in pragmatic trials should be relevant for decision making and feasible in terms of executing the trial in the context of interest. Therefore, this should be discussed with all stakeholders as early as feasible to ensure the relevance of study results for decision making in clinical practice and the ability to perform the study.


Statistical Methods in Medical Research | 2018

Sample size for binary logistic prediction models: Beyond events per variable criteria

Maarten van Smeden; Karel G.M. Moons; Joris A. H. de Groot; Gary S. Collins; Douglas G. Altman; Marinus J.C. Eijkemans; Johannes B. Reitsma

Binary logistic regression is one of the most frequently applied statistical approaches for developing clinical prediction models. Developers of such models often rely on an Events Per Variable criterion (EPV), notably EPV ≥10, to determine the minimal sample size required and the maximum number of candidate predictors that can be examined. We present an extensive simulation study in which we studied the influence of EPV, events fraction, number of candidate predictors, the correlations and distributions of candidate predictor variables, area under the ROC curve, and predictor effects on out-of-sample predictive performance of prediction models. The out-of-sample performance (calibration, discrimination and probability prediction error) of developed prediction models was studied before and after regression shrinkage and variable selection. The results indicate that EPV does not have a strong relation with metrics of predictive performance, and is not an appropriate criterion for (binary) prediction model development studies. We show that out-of-sample predictive performance can better be approximated by considering the number of predictors, the total sample size and the events fraction. We propose that the development of new sample size criteria for prediction models should be based on these three parameters, and provide suggestions for improving sample size determination.
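A minimal sketch of this idea in Python (using scikit-learn for convenience; the sample sizes, events fractions, and coefficient values are assumptions for illustration and not the paper's simulation settings) compares two designs with roughly the same EPV but different total sample sizes and events fractions, measuring how much validation-set discrimination each developed model loses relative to the true coefficients:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(7)
    P = 5                                  # number of candidate predictors
    beta = np.full(P, 0.5)                 # illustrative true coefficients

    def simulate(n, intercept):
        X = rng.normal(size=(n, P))
        p = 1.0 / (1.0 + np.exp(-(intercept + X @ beta)))
        return X, rng.binomial(1, p)

    def auc_shortfall(n_dev, intercept, n_val=50_000, reps=50):
        # Mean difference between the developed model's validation AUC and the
        # AUC achieved by the true coefficients on the same validation data.
        gaps = []
        for _ in range(reps):
            X_dev, y_dev = simulate(n_dev, intercept)
            if y_dev.sum() in (0, len(y_dev)):          # skip degenerate samples
                continue
            model = LogisticRegression(C=1e6, max_iter=2000).fit(X_dev, y_dev)
            X_val, y_val = simulate(n_val, intercept)
            gaps.append(roc_auc_score(y_val, model.decision_function(X_val))
                        - roc_auc_score(y_val, X_val @ beta))
        return np.mean(gaps)

    # Two designs with roughly the same EPV (about 10) but different n and events fractions:
    # design A: ~50% events, n = 100   -> about 50 events for 5 candidate predictors
    # design B: ~ 5% events, n = 1000  -> about 50 events for 5 candidate predictors
    print("AUC shortfall, design A (n=100,  ~50% events):", round(auc_shortfall(100, 0.0), 3))
    print("AUC shortfall, design B (n=1000, ~5% events): ", round(auc_shortfall(1000, -3.6), 3))

Because the two designs sit at essentially the same EPV, differences in the reported shortfall reflect the total sample size and events fraction rather than EPV, in line with the abstract.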


Journal of Clinical Epidemiology | 2018

Measurement error is often neglected in medical literature: a systematic review

Timo B. Brakenhoff; Marian Mitroiu; Ruth H. Keogh; Karel G.M. Moons; Rolf H.H. Groenwold; Maarten van Smeden

Objectives: In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of general medicine and epidemiology literature.

Study Design and Setting: Original research published in 2016 in 12 high-impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized.

Results: Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error; 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error.

Conclusions: Consequently, it is difficult for readers to judge the robustness of presented results to the existence of measurement error in the majority of publications in high-impact journals. Our systematic review highlights the need for increased awareness about the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary.


Structural Equation Modeling | 2013

Testing for Two-Way Interactions in the Multigroup Common Factor Model

Maarten van Smeden; David J. Hessen

In this article, a 2-way multigroup common factor model (MG-CFM) is presented. The MG-CFM can be used to estimate interaction effects between 2 grouping variables on 1 or more hypothesized latent variables. For testing the significance of such interactions, a likelihood ratio test is presented. In a simulation study, the robustness of the likelihood ratio test under different sample size conditions is studied and the likelihood ratio test is compared to the Wilks's lambda F test in multivariate analysis of variance (MANOVA) with respect to the approximated power to detect a 2-way interaction effect on a single latent variable. The manipulated factors are the number of indicators, the values of factor loadings, the sample size, and the interaction effect size. The results of the simulation study show that the Type I error rate of the likelihood ratio test is satisfactory and that, under all conditions, the approximated power of the likelihood ratio test is considerably higher than that of the Wilks's lambda F test in MANOVA.


Trials | 2018

Towards an appropriate framework to facilitate responsible inclusion of pregnant women in drug development programs

Kit C.B. Roes; Indira S. E. van der Zande; Maarten van Smeden; Rieke van der Graaf

Evidence-based treatment for pregnant women will ultimately require research conducted in the population of pregnant women. Currently, few scholars have addressed the issue of responsible inclusion of pregnant women in drug research. Because of additional risks associated with including pregnant women in drug research and the altered ways in which drugs are processed by the pregnant body, pregnant women cannot be treated as an ordinary subgroup in the various phases of drug development. Instead, responsible inclusion of pregnant women requires careful design and planning of research for pregnant women specifically. Knowledge about these aspects is virtually nonexistent.

In this article, we present a practical framework for the responsible inclusion of pregnant women in drug development. We suggest that the framework consists of using a question-based approach with five key questions in combination with three prerequisites which should be addressed when considering inclusion of pregnant women in drug research. The five questions are:

A. Can we consider the drug safe (enough) for first exposure in pregnant women and fetuses?
B. In which dose range (potentially depending on gestational age) can the drug be considered to remain safe in pregnant women?
C. At what dose (regimen, within the range considered safe) can we expect efficacy in pregnant women?
D. Can efficacy be confirmed at the target dose, either similar to the initial population or different?
E. Can clinical safety be confirmed at a sufficiently acceptable level at the target dose for pregnant women and fetuses, so as to conclude a positive benefit–risk ratio?

Combining questions and prerequisites leads to a scheme for appropriate timing of responsible inclusion of pregnant women in drug research. Accordingly, we explore several research design options for including pregnant women in drug trials that are feasible within the framework. Ultimately, the framework may lead to (i) earlier inclusion of pregnant women in drug development, (ii) ensuring that key prerequisites, such as proper dosing, are addressed before more substantial numbers of pregnant women are included in trials, and (iii) optimal use of safety and efficacy data from the initial (nonpregnant) population throughout the drug development process.


PLOS ONE | 2018

Random measurement error: Why worry? An example of cardiovascular risk factors

Timo B. Brakenhoff; Maarten van Smeden; Frank L.J. Visseren; Rolf H.H. Groenwold

With the increased use of data not originally recorded for research, such as routine care data (or ‘big data’), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless appropriate inferential tools are used to study or correct for the impact of measurement error in the analysis.
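A minimal sketch of this phenomenon (the data-generating values are assumptions for illustration only): with a positively confounded exposure-outcome relation, classical measurement error in the confounder leaves residual confounding and can push the exposure estimate upward, whereas error in the exposure itself attenuates it.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 200_000

    # Illustrative data-generating model (all values assumed):
    z = rng.normal(size=n)                       # confounder
    x = 0.8 * z + rng.normal(size=n)             # exposure, positively related to the confounder
    y = 0.3 * x + 0.5 * z + rng.normal(size=n)   # outcome; true exposure effect = 0.3

    def ols_exposure_coef(x_obs, z_obs):
        # OLS of y on [1, x_obs, z_obs]; returns the coefficient for x_obs.
        design = np.column_stack([np.ones(n), x_obs, z_obs])
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        return coef[1]

    print("no measurement error:    ", round(ols_exposure_coef(x, z), 3))
    # Classical (random) error added to the confounder -> residual confounding,
    # so the exposure effect is over-estimated in this scenario.
    print("error in confounder only:", round(ols_exposure_coef(x, z + rng.normal(size=n)), 3))
    # Classical error added to the exposure -> attenuation of the exposure effect.
    print("error in exposure only:  ", round(ols_exposure_coef(x + rng.normal(size=n), z), 3))

With these assumed values the estimate moves away from the true value of 0.3 in opposite directions depending on where the error sits, which is the article's central caution.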

Collaboration


Dive into Maarten van Smeden's collaborations.

Top Co-Authors

Nandini Dendukuri

McGill University Health Centre
