Publications


Featured research published by Karel G.M. Moons.


The New England Journal of Medicine | 2008

General and Abdominal Adiposity and Risk of Death in Europe

Tobias Pischon; Heiner Boeing; Kurt Hoffmann; M. Bergmann; Matthias B. Schulze; Kim Overvad; Y. T. van der Schouw; Elizabeth A Spencer; Karel G.M. Moons; Anne Tjønneland; Jytte Halkjær; Majken K. Jensen; Jakob Stegger; F. Clavel-Chapelon; M. C. Boutron-Ruault; Véronique Chajès; Jakob Linseisen; R. Kaaks; Antonia Trichopoulou; Dimitrios Trichopoulos; Christina Bamia; S. Sieri; Domenico Palli; R. Tumino; Paolo Vineis; Salvatore Panico; P.H.M. Peeters; Anne May; H. B. Bueno-de-Mesquita; F.J.B van Duijnhoven

BACKGROUND Previous studies have relied predominantly on the body-mass index (BMI, the weight in kilograms divided by the square of the height in meters) to assess the association of adiposity with the risk of death, but few have examined whether the distribution of body fat contributes to the prediction of death. METHODS We examined the association of BMI, waist circumference, and waist-to-hip ratio with the risk of death among 359,387 participants from nine countries in the European Prospective Investigation into Cancer and Nutrition (EPIC). We used a Cox regression analysis, with age as the time variable, and stratified the models according to study center and age at recruitment, with further adjustment for educational level, smoking status, alcohol consumption, physical activity, and height. RESULTS During a mean follow-up of 9.7 years, 14,723 participants died. The lowest risks of death related to BMI were observed at a BMI of 25.3 for men and 24.3 for women. After adjustment for BMI, waist circumference and waist-to-hip ratio were strongly associated with the risk of death. Relative risks among men and women in the highest quintile of waist circumference were 2.05 (95% confidence interval [CI], 1.80 to 2.33) and 1.78 (95% CI, 1.56 to 2.04), respectively, and in the highest quintile of waist-to-hip ratio, the relative risks were 1.68 (95% CI, 1.53 to 1.84) and 1.51 (95% CI, 1.37 to 1.66), respectively. BMI remained significantly associated with the risk of death in models that included waist circumference or waist-to-hip ratio (P<0.001). CONCLUSIONS These data suggest that both general adiposity and abdominal adiposity are associated with the risk of death and support the use of waist circumference or waist-to-hip ratio in addition to BMI in assessing the risk of death.
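
A minimal Python sketch of the two anthropometric formulas used above (the function names and example values are ours, for illustration only):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body-mass index: weight in kilograms divided by the square of height in meters."""
    return weight_kg / height_m ** 2


def waist_to_hip_ratio(waist_cm: float, hip_cm: float) -> float:
    """Waist-to-hip ratio: both circumferences must be in the same unit."""
    return waist_cm / hip_cm


# Example: a 70 kg, 1.75 m person with an 85 cm waist and 100 cm hips.
print(round(bmi(70, 1.75), 1))                # 22.9
print(round(waist_to_hip_ratio(85, 100), 2))  # 0.85
```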


BMJ | 2009

Prognosis and prognostic research: validating a prognostic model.

Douglas G. Altman; Yvonne Vergouwe; Patrick Royston; Karel G.M. Moons

Prognostic models are of little clinical value unless they are shown to work in other samples. Douglas Altman and colleagues describe how to validate models and discuss some of the problems.


BMJ | 2009

Prognosis and prognostic research: Developing a prognostic model

Patrick Royston; Karel G.M. Moons; Douglas G. Altman; Yvonne Vergouwe

In the second article in their series, Patrick Royston and colleagues describe different approaches to building clinical prognostic models.


BMJ | 2009

Prognosis and prognostic research: what, why, and how?

Karel G.M. Moons; Patrick Royston; Yvonne Vergouwe; Diederick E. Grobbee; Douglas G. Altman

Doctors have little specific research to draw on when predicting outcome. In this first article in a series, Karel Moons and colleagues explain why research into prognosis is important and how to design such research.


Annals of Internal Medicine | 2015

Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): explanation and elaboration.

Karel G.M. Moons; Douglas G. Altman; Johannes B. Reitsma; John P. A. Ioannidis; Petra Macaskill; Ewout W. Steyerberg; Andrew J. Vickers; David F. Ransohoff; Gary S. Collins

In medicine, numerous decisions are made by care providers, often in shared decision making, on the basis of an estimated probability that a specific disease or condition is present (diagnostic setting) or that a specific event will occur in the future (prognostic setting) in an individual. In the diagnostic setting, the probability that a particular disease is present can be used, for example, to inform the referral of patients for further testing, to initiate treatment directly, or to reassure patients that a serious cause for their symptoms is unlikely. In the prognostic context, predictions can be used for planning lifestyle or therapeutic decisions on the basis of the risk for developing a particular outcome or state of health within a specific period (1-3). Such estimates of risk can also be used to risk-stratify participants in therapeutic intervention trials (4-7). In both the diagnostic and prognostic setting, probability estimates are commonly based on combining information from multiple predictors observed or measured from an individual (1, 2, 8-10). Information from a single predictor is often insufficient to provide reliable estimates of diagnostic or prognostic probabilities or risks (8, 11). In virtually all medical domains, diagnostic and prognostic multivariable (risk) prediction models are being developed, validated, updated, and implemented with the aim to assist doctors and individuals in estimating probabilities and potentially influence their decision making. A multivariable prediction model is a mathematical equation that relates multiple predictors for a particular individual to the probability of or risk for the presence (diagnosis) or future occurrence (prognosis) of a particular outcome (10, 12). Other names for a prediction model include risk prediction model, predictive model, prognostic (or prediction) index or rule, and risk score (9). Predictors are also referred to as covariates, risk indicators, prognostic factors, determinants, test results, or, more statistically, independent variables. They may range from demographic characteristics (for example, age and sex), medical history taking, and physical examination results to results from imaging, electrophysiology, blood and urine measurements, pathologic examinations, and disease stages or characteristics, or results from genomics, proteomics, transcriptomics, pharmacogenomics, metabolomics, and other new biological measurement platforms that continuously emerge.

Diagnostic and Prognostic Prediction Models

Multivariable prediction models fall into 2 broad categories: diagnostic and prognostic prediction models (Box A). In a diagnostic model, multiple (that is, 2 or more) predictors (often referred to as diagnostic test results) are combined to estimate the probability that a certain condition or disease is present (or absent) at the moment of prediction (Box B). They are developed from, and are to be used for, individuals suspected of having that condition.

Box A. Schematic representation of diagnostic and prognostic prediction modeling studies. The nature of the prediction in diagnosis is estimating the probability that a specific outcome or disease is present (or absent) within an individual at this point in time, that is, the moment of prediction (T = 0). In prognosis, the prediction is about whether an individual will experience a specific event or outcome within a certain time period. In other words, in diagnostic prediction the interest is in principle a cross-sectional relationship, whereas prognostic prediction involves a longitudinal relationship. Nevertheless, in diagnostic modeling studies, for logistical reasons, a time window between predictor (index test) measurement and the reference standard is often necessary. Ideally, this interval should be as short as possible without starting any treatment within this period.

Box B. Similarities and differences between diagnostic and prognostic prediction models.

In a prognostic model, multiple predictors are combined to estimate the probability of a particular outcome or event (for example, mortality, disease recurrence, complication, or therapy response) occurring in a certain period in the future. This period may range from hours (for example, predicting postoperative complications [13]) to weeks or months (for example, predicting 30-day mortality after cardiac surgery [14]) or years (for example, predicting the 5-year risk for developing type 2 diabetes [15]). Prognostic models are developed for, and are to be used in, individuals at risk for developing that outcome. They may be models for either ill or healthy individuals. For example, prognostic models include models to predict recurrence, complications, or death in a certain period after being diagnosed with a particular disease. But they may also include models for predicting the occurrence of an outcome in a certain period in individuals without a specific disease: for example, models to predict the risk for developing type 2 diabetes (16) or cardiovascular events in middle-aged nondiseased individuals (17), or the risk for preeclampsia in pregnant women (18). We thus use "prognostic" in the broad sense, referring to the prediction of an outcome in the future in individuals at risk for that outcome, rather than the narrower definition of predicting the course of patients who have a particular disease, with or without treatment (1). The main difference between a diagnostic and prognostic prediction model is the concept of time. Diagnostic modeling studies are usually cross-sectional, whereas prognostic modeling studies are usually longitudinal. In this document, we refer to both diagnostic and prognostic prediction models as "prediction models," highlighting issues that are specific to either type of model.

Development, Validation, and Updating of Prediction Models

Prediction model studies may address the development of a new prediction model (10), a model evaluation (often referred to as model validation), with or without updating of the model (19-21), or a combination of these (Box C and Figure 1).

Box C. Types of prediction model studies.

Figure 1. Types of prediction model studies covered by the TRIPOD statement. D = development data; V = validation data.

Model development studies aim to derive a prediction model by selecting predictors and combining them into a multivariable model. Logistic regression is commonly used for cross-sectional (diagnostic) and short-term (for example, 30-day mortality) prognostic outcomes, and Cox regression for long-term (for example, 10-year risk) prognostic outcomes. Studies may also focus on quantifying the incremental or added predictive value of a specific (for example, newly discovered) predictor (22) to a prediction model.

Quantifying the predictive ability of a model on the same data from which the model was developed (often referred to as apparent performance [Figure 1]) will tend to give an optimistic estimate of performance, owing to overfitting (too few outcome events relative to the number of candidate predictors) and the use of predictor selection strategies (23-25). Studies developing new prediction models should therefore always include some form of internal validation to quantify any optimism in the predictive performance (for example, calibration and discrimination) of the developed model and adjust the model for overfitting. Internal validation techniques use only the original study sample and include such methods as bootstrapping or cross-validation. Internal validation is a necessary part of model development (2).

After developing a prediction model, it is strongly recommended to evaluate the performance of the model in participant data other than those used for the model development. External validation (Box C and Figure 1) (20, 26) requires that, for each individual in the new participant data set, outcome predictions are made using the original model (that is, the published model or regression formula) and compared with the observed outcomes. External validation may use participant data collected by the same investigators, typically using the same predictor and outcome definitions and measurements, but sampled from a later period (temporal or narrow validation); by other investigators in another hospital or country (though disappointingly rare [27]), sometimes using different definitions and measurements (geographic or broad validation); in similar participants, but from an intentionally different setting (for example, a model developed in secondary care and assessed in similar participants, but selected from primary care); or even in other types of participants (for example, a model developed in adults and assessed in children, or developed for predicting fatal events and assessed for predicting nonfatal events) (19, 20, 26, 28-30). In case of poor performance (for example, systematic miscalibration) when evaluated in an external validation data set, the model can be updated or adjusted (for example, recalibrated or extended with a new predictor) on the basis of the validation data set (Box C) (2, 20, 21, 31).

Randomly splitting a single data set into model development and model validation data sets is frequently done to develop and validate a prediction model; this is often, yet erroneously, believed to be a form of external validation. However, this approach is a weak and inefficient form of internal validation, because not all available data are used to develop the model (23, 32). If the available development data set is sufficiently large, splitting by time, developing a model using data from one period, and evaluating its performance using the data from the other period (temporal validation) is a stronger approach. With a single data set, temporal splitting and model validation can be considered intermediate between internal and external validation.

Incomplete and Inaccurate Reporting

Prediction models are becoming increasingly abundant in the medical literature (9, 33, 34), and policymakers are increasingly…
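
The external-validation workflow described above, taking a previously published regression formula, computing predictions for new participants, and comparing them with the observed outcomes, can be sketched as follows. This is a minimal illustration on simulated data with an invented logistic model; it assumes scikit-learn >= 1.2 and is not code from the TRIPOD authors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical published model: logit(p) = -2.0 + 0.8*x1 + 1.1*x2
intercept, coefs = -2.0, np.array([0.8, 1.1])

rng = np.random.default_rng(0)
X_new = rng.normal(size=(500, 2))   # predictor values of the new participants
lp = intercept + X_new @ coefs      # linear predictor from the ORIGINAL model
p_hat = 1 / (1 + np.exp(-lp))       # predicted probabilities
y_new = rng.binomial(1, p_hat)      # observed outcomes (simulated here)

# Discrimination: the C statistic (area under the ROC curve).
print("C statistic:", round(roc_auc_score(y_new, p_hat), 3))

# Calibration slope: refit the outcome on the linear predictor alone;
# a slope well below 1 suggests the original model was overfitted.
cal = LogisticRegression(penalty=None).fit(lp.reshape(-1, 1), y_new)  # sklearn >= 1.2
print("calibration slope:", round(float(cal.coef_[0][0]), 2))
```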


BMJ | 2009

Prognosis and prognostic research: application and impact of prognostic models in clinical practice

Karel G.M. Moons; Douglas G. Altman; Yvonne Vergouwe; Patrick Royston

An accurate prognostic model is of no benefit if it is not generalisable or doesn't change behaviour. In the last article in their series, Karel Moons and colleagues discuss how to determine the practical value of models.


European Urology | 2015

Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD): The TRIPOD Statement.

Gary S. Collins; Johannes B. Reitsma; Douglas G. Altman; Karel G.M. Moons

CONTEXT Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. OBJECTIVE The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. EVIDENCE ACQUISITION This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. EVIDENCE SYNTHESIS The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. CONCLUSIONS To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). PATIENT SUMMARY The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes.


JAMA | 2012

Common Carotid Intima-Media Thickness Measurements in Cardiovascular Risk Prediction: A Meta-analysis

Hester M. den Ruijter; Sanne A.E. Peters; Todd J. Anderson; Annie Britton; Jacqueline M. Dekker; Marinus J.C. Eijkemans; Gunnar Engström; Gregory W. Evans; Jacqueline de Graaf; Diederick E. Grobbee; Bo Hedblad; Albert Hofman; Suzanne Holewijn; Ai Ikeda; Maryam Kavousi; Kazuo Kitagawa; Akihiko Kitamura; Hendrik Koffijberg; Eva Lonn; Matthias W. Lorenz; Ellisiv B. Mathiesen; G. Nijpels; Shuhei Okazaki; Daniel H. O'Leary; Joseph F. Polak; Jackie F. Price; Christine Robertson; Christopher M. Rembold; Maria Rosvall; Tatjana Rundek

CONTEXT The evidence that measurement of the common carotid intima-media thickness (CIMT) improves risk scores in the prediction of the absolute risk of cardiovascular events is inconsistent. OBJECTIVE To determine whether common CIMT has added value in 10-year risk prediction of first-time myocardial infarctions or strokes, above that of the Framingham Risk Score. DATA SOURCES Relevant studies were identified through literature searches of databases (PubMed from 1950 to June 2012 and EMBASE from 1980 to June 2012) and expert opinion. STUDY SELECTION Studies were included if participants were drawn from the general population, common CIMT was measured at baseline, and individuals were followed up for first-time myocardial infarction or stroke. DATA EXTRACTION Individual data were combined into 1 data set and an individual participant data meta-analysis was performed on individuals without existing cardiovascular disease. RESULTS We included 14 population-based cohorts contributing data for 45,828 individuals. During a median follow-up of 11 years, 4007 first-time myocardial infarctions or strokes occurred. We first refitted the risk factors of the Framingham Risk Score and then extended the model with common CIMT measurements, estimating the absolute 10-year risk of developing a first-time myocardial infarction or stroke under both models. The C statistic of the two models was similar (0.757; 95% CI, 0.749-0.764; and 0.759; 95% CI, 0.752-0.766). The net reclassification improvement with the addition of common CIMT was small (0.8%; 95% CI, 0.1%-1.6%). In those at intermediate risk, the net reclassification improvement was 3.6% in all individuals (95% CI, 2.7%-4.6%), with no differences between men and women. CONCLUSION The addition of common CIMT measurements to the Framingham Risk Score was associated with a small improvement in 10-year risk prediction of first-time myocardial infarction or stroke, but this improvement is unlikely to be of clinical importance.
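
The net reclassification improvement used here summarizes how often the extended model moves individuals into a more appropriate risk category than the baseline model. Below is a sketch of the categorical NRI with invented cutoffs and toy numbers; the study's own risk categories and data are not reproduced.

```python
import numpy as np

def categorical_nri(risk_old, risk_new, events, cutoffs=(0.1, 0.2)):
    """Net reclassification improvement: net proportion of events moved to a
    higher risk category plus net proportion of non-events moved to a lower one."""
    cat_old = np.digitize(risk_old, cutoffs)
    cat_new = np.digitize(risk_new, cutoffs)
    up, down = cat_new > cat_old, cat_new < cat_old
    ev, ne = events == 1, events == 0
    return (up[ev].mean() - down[ev].mean()) + (down[ne].mean() - up[ne].mean())

# Toy example: invented 10-year risks for six individuals.
risk_base = np.array([0.05, 0.15, 0.25, 0.08, 0.18, 0.30])  # risk-factors-only model
risk_cimt = np.array([0.12, 0.14, 0.18, 0.06, 0.22, 0.19])  # model extended with CIMT
events    = np.array([1, 0, 1, 0, 1, 0])
print(categorical_nri(risk_base, risk_cimt, events))  # about 0.67 in this toy case
```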


Journal of Clinical Epidemiology | 2003

External validation is necessary in prediction research: A clinical example

Sacha E. Bleeker; Henriëtte A. Moll; Ewout W. Steyerberg; A.R.T. Donders; Gerarda Derksen-Lubsen; Diederick E. Grobbee; Karel G.M. Moons

BACKGROUND AND OBJECTIVES Prediction models tend to perform better on the data on which the model was constructed than on new data. This difference in performance is an indication of the optimism in the apparent performance in the derivation set. For internal model validation, bootstrapping methods are recommended to provide bias-corrected estimates of model performance. Results are often accepted without sufficient regard to the importance of external validation. This report illustrates the limitations of internal validation for determining the generalizability of a diagnostic prediction model to future settings. METHODS A prediction model for the presence of serious bacterial infections in children with fever without source was derived and validated internally using bootstrap resampling techniques. Subsequently, the model was validated externally. RESULTS In the derivation set (n=376), nine predictors were identified. The apparent area under the receiver operating characteristic curve (95% confidence interval) of the model was 0.83 (0.78-0.87), and 0.76 (0.67-0.85) after bootstrap correction. In the validation set (n=179) the performance was 0.57 (0.47-0.67). CONCLUSION For relatively small data sets, internal validation of prediction models by bootstrap techniques may not be sufficient or indicative of the model's performance in future patients. External validation is essential before implementing prediction models in clinical practice.
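
The bootstrap internal validation applied here follows the usual optimism-correction recipe: refit the model on each bootstrap sample, record how much better it performs on that sample than on the original data, and subtract the average gap from the apparent performance. A minimal sketch on simulated data (scikit-learn assumed; this illustrates the general technique, not the authors' exact procedure):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n, p = 300, 9  # small sample, nine candidate predictors, as in the derivation set
X = rng.normal(size=(n, p))
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 0.5))))  # only x0 is truly predictive

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)  # bootstrap sample drawn with replacement
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])  # on the resample
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])            # on original data
    optimism.append(auc_boot - auc_orig)

print("apparent AUC:", round(apparent, 3))
print("bootstrap-corrected AUC:", round(apparent - float(np.mean(optimism)), 3))
```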


Pain | 2003

Preoperative prediction of severe postoperative pain

C. J. Kalkman; K Visser; J Moen; G.J Bonsel; D. E. Grobbee; Karel G.M. Moons

We developed and validated a prediction rule for the occurrence of early postoperative severe pain in surgical inpatients, using predictors that can be easily documented in a preoperative setting. A cohort of surgical inpatients (n=1416) undergoing various procedures, except cardiac surgery and intracranial neurosurgery, in a university hospital was studied. Preoperatively, the following predictors were collected: age, gender, type of scheduled surgery, expected incision size, blood pressure, heart rate, Quetelet index, the presence and severity of preoperative pain, health-related quality of life (the SF-36), Spielberger's State-Trait Anxiety Inventory (STAI), and the Amsterdam Preoperative Anxiety and Information Scale (APAIS). The outcome was the presence of severe postoperative pain (defined as a Numeric Rating Scale score ≥8) within the first hour postoperatively. Multivariate logistic regression in combination with bootstrapping techniques (as a method for internal validation) was used to derive a stable prediction model. Independent predictors of severe postoperative pain were younger age, female gender, level of preoperative pain, incision size, and type of surgery. The area under the receiver operating characteristic (ROC) curve was 0.71 (95% CI: 0.68-0.74). Adding APAIS scores (measures of preoperative anxiety and need for information), but not STAI, provided a slightly better model (ROC area 0.73). The reliability of this extended model was good (Hosmer and Lemeshow test p-value 0.78). We have demonstrated that severe postoperative pain early after awakening from general anesthesia can be predicted with a scoring rule, using a small set of variables that can be easily obtained from all patients at the preoperative visit. Before this internally validated preoperative prediction rule can be applied in clinical practice to support anticipatory pain management, external validation in other clinical settings is necessary.
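
The Hosmer and Lemeshow test reported for calibration compares observed and expected event counts within groups (conventionally deciles) of predicted risk. A sketch of the standard textbook formulation with simulated data follows (SciPy assumed; not the authors' code):

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow goodness of fit: chi-square over risk-sorted groups,
    comparing observed event counts with those expected from the model."""
    order = np.argsort(p)
    y, p = y[order], p[order]
    stat = 0.0
    for b in np.array_split(np.arange(len(p)), groups):
        observed, expected, n_b = y[b].sum(), p[b].sum(), len(b)
        stat += (observed - expected) ** 2 / (expected * (1 - expected / n_b))
    return stat, chi2.sf(stat, groups - 2)  # p-value on groups - 2 degrees of freedom

# Toy usage: well-calibrated predictions should give a non-significant p-value.
rng = np.random.default_rng(2)
p = rng.uniform(0.05, 0.9, 1000)
y = rng.binomial(1, p)
stat, pval = hosmer_lemeshow(y, p)
print(round(float(stat), 2), round(float(pval), 2))
```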

Collaboration


Dive into Karel G.M. Moons's collaboration.

Top co-author:

Yvonne Vergouwe (Erasmus University Rotterdam)
