
Publication


Featured research published by Lotty Hooft.


Clinical Chemistry | 2015

STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies

Patrick M. Bossuyt; Johannes B. Reitsma; David E. Bruns; Constantine Gatsonis; Paul Glasziou; Les Irwig; Jeroen G. Lijmer; David Moher; Drummond Rennie; Henrica C.W. de Vet; Herbert Y. Kressel; Nader Rifai; Robert M. Golub; Douglas G. Altman; Lotty Hooft; Daniël A. Korevaar; Jérémie F. Cohen

Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting Diagnostic Accuracy (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of diagnostic accuracy studies.



Allergy | 2012

Towards global consensus on outcome measures for atopic eczema research: results of the HOME II meeting

Jochen Schmitt; Phyllis I. Spuls; Maarten Boers; Kim S Thomas; Joanne R. Chalmers; Evelien Roekevisch; M.E. Schram; Richard Allsopp; Valeria Aoki; Christian Apfelbacher; Carla A.F.M. Bruijnzeel-Koomen; Marjolein S. de Bruin-Weller; Carolyn R. Charman; Arnon D. Cohen; Magdalene A. Dohil; Carsten Flohr; Masutaka Furue; Uwe Gieler; Lotty Hooft; Rosemary Humphreys; Henrique Akira Ishii; Ichiro Katayama; Willem Kouwenhoven; Sinéad M. Langan; Sue Lewis-Jones; Stephanie Merhand; Hiroyuki Murota; Dédée F. Murrell; Helen Nankervis; Yukihiro Ohya

The use of nonstandardized and inadequately validated outcome measures in atopic eczema trials is a major obstacle to practising evidence‐based dermatology. The Harmonising Outcome Measures for Eczema (HOME) initiative is an international multiprofessional group dedicated to atopic eczema outcomes research. In June 2011, the HOME initiative conducted a consensus study involving 43 individuals from 10 countries, representing different stakeholders (patients, clinicians, methodologists, pharmaceutical industry) to determine core outcome domains for atopic eczema trials, to define quality criteria for atopic eczema outcome measures and to prioritize topics for atopic eczema outcomes research. Delegates were given evidence‐based information, followed by structured group discussion and anonymous consensus voting. Consensus was achieved to include clinical signs, symptoms, long‐term control of flares and quality of life into the core set of outcome domains for atopic eczema trials. The HOME initiative strongly recommends including and reporting these core outcome domains as primary or secondary endpoints in all future atopic eczema trials. Measures of these core outcome domains need to be valid, sensitive to change and feasible. Prioritized topics of the HOME initiative are the identification/development of the most appropriate instruments for the four core outcome domains. HOME is open to anyone with an interest in atopic eczema outcomes research.


Canadian Medical Association Journal | 2013

Variation of a test’s sensitivity and specificity with disease prevalence

Mariska M.G. Leeflang; Anne Wilhelmina Saskia Rutjes; Johannes B. Reitsma; Lotty Hooft; Patrick M. Bossuyt

Background: Anecdotal evidence suggests that the sensitivity and specificity of a diagnostic test may vary with disease prevalence. Our objective was to investigate the associations between disease prevalence and test sensitivity and specificity using studies of diagnostic accuracy. Methods: We used data from 23 meta-analyses, each of which included 10–39 studies (416 total). The median prevalence per review ranged from 1% to 77%. We evaluated the effects of prevalence on sensitivity and specificity using a bivariate random-effects model for each meta-analysis, with prevalence as a covariate. We estimated the overall effect of prevalence by pooling the effects using the inverse variance method. Results: Within a given review, a change in prevalence from the lowest to highest value resulted in a corresponding change in sensitivity or specificity from 0 to 40 percentage points. This effect was statistically significant (p < 0.05) for either sensitivity or specificity in 8 meta-analyses (35%). Overall, specificity tended to be lower with higher disease prevalence; there was no such systematic effect for sensitivity. Interpretation: The sensitivity and specificity of a test often vary with disease prevalence; this effect is likely to be the result of mechanisms, such as patient spectrum, that affect prevalence, sensitivity and specificity. Because it may be difficult to identify such mechanisms, clinicians should use prevalence as a guide when selecting studies that most closely match their situation.
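The pooling step described in the Methods, combining the per-review effects of prevalence with the inverse variance method, can be sketched in a few lines (a minimal fixed-effect version using made-up effect estimates, not the study's data):

```python
import math

def pool_inverse_variance(effects, variances):
    """Fixed-effect inverse-variance pooling: each estimate is
    weighted by the reciprocal of its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# Made-up per-review effects of prevalence on logit(specificity)
effects = [-0.8, -0.3, -0.5]
variances = [0.04, 0.09, 0.16]
est, var = pool_inverse_variance(effects, variances)
print(round(est, 3), round(math.sqrt(var), 3))  # pooled estimate and its SE
```

Precise estimates (small variances) dominate the pooled value; the study itself used a bivariate random-effects model per meta-analysis before this pooling step.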


BMJ | 2016

Prediction models for cardiovascular disease risk in the general population: systematic review

Johanna A A G Damen; Lotty Hooft; Ewoud Schuit; Thomas P. A. Debray; Gary S. Collins; Ioanna Tzoulaki; Camille Lassale; George C.M. Siontis; Virginia Chiocchia; Corran Roberts; Michael Maia Schlüssel; Stephen Gerry; James A Black; Pauline Heus; Yvonne T. van der Schouw; Linda M. Peelen; Karel G.M. Moons

Objective To provide an overview of prediction models for risk of cardiovascular disease (CVD) in the general population. Design Systematic review. Data sources Medline and Embase until June 2013. Eligibility criteria for study selection Studies describing the development or external validation of a multivariable model for predicting CVD risk in the general population. Results 9965 references were screened, of which 212 articles were included in the review, describing the development of 363 prediction models and 473 external validations. Most models were developed in Europe (n=167, 46%) and predicted the risk of fatal or non-fatal coronary heart disease (n=118, 33%) over a 10 year period (n=209, 58%). The most common predictors were smoking (n=325, 90%) and age (n=321, 88%), and most models were sex specific (n=250, 69%). Substantial heterogeneity in predictor and outcome definitions was observed between models, and important clinical and methodological information was often missing. The prediction horizon was not specified for 49 models (13%), and for 92 (25%) the information needed to use the model for individual risk prediction was missing. Only 132 developed models (36%) were externally validated, and only 70 (19%) by independent investigators. Model performance was heterogeneous, and measures such as discrimination and calibration were reported for only 65% and 58% of the external validations, respectively. Conclusions There is an excess of models predicting incident CVD in the general population. The usefulness of most of the models remains unclear owing to methodological shortcomings, incomplete presentation, and lack of external validation and model impact studies. Rather than developing yet another similar CVD risk prediction model, in this era of large datasets, future research should focus on externally validating and comparing head-to-head the promising CVD risk models that already exist, on tailoring or even combining these models for local settings, and on investigating whether these models can be extended by the addition of new predictors.
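The discrimination measure mentioned in the Results is typically the C statistic: the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case. A minimal illustration (made-up predicted risks and outcomes; a real validation would use a dedicated library):

```python
from itertools import product

def c_statistic(risks, outcomes):
    """Concordance (C) statistic for a binary outcome: the fraction of
    case/non-case pairs in which the case has the higher predicted risk
    (ties count half)."""
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    noncases = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = sum(1.0 if c > n else 0.5 if c == n else 0.0
                     for c, n in product(cases, noncases))
    return concordant / (len(cases) * len(noncases))

# Made-up predicted 10-year CVD risks and observed events
risks = [0.05, 0.10, 0.20, 0.30, 0.40]
outcomes = [0, 0, 1, 0, 1]
print(c_statistic(risks, outcomes))
```

A value of 0.5 indicates no discrimination and 1.0 perfect discrimination; calibration (agreement between predicted and observed risk) must be assessed separately.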


BMJ Open | 2016

STARD 2015 guidelines for reporting diagnostic accuracy studies: explanation and elaboration

Jérémie F. Cohen; Daniël A. Korevaar; Douglas G. Altman; David E. Bruns; Constantine Gatsonis; Lotty Hooft; Les Irwig; Deborah Levine; Johannes B. Reitsma; Henrica C.W. de Vet; Patrick M. Bossuyt

Diagnostic accuracy studies are, like other clinical studies, at risk of bias due to shortcomings in design and conduct, and the results of a diagnostic accuracy study may not apply to other patient groups and settings. Readers of study reports need to be informed about study design and conduct, in sufficient detail to judge the trustworthiness and applicability of the study findings. The STARD statement (Standards for Reporting of Diagnostic Accuracy Studies) was developed to improve the completeness and transparency of reports of diagnostic accuracy studies. STARD contains a list of essential items that can be used as a checklist, by authors, reviewers and other readers, to ensure that a report of a diagnostic accuracy study contains the necessary information. STARD was recently updated. All updated STARD materials, including the checklist, are available at http://www.equator-network.org/reporting-guidelines/stard. Here, we present the STARD 2015 explanation and elaboration document. Through commented examples of appropriate reporting, we clarify the rationale for each of the 30 items on the STARD 2015 checklist, and describe what is expected from authors in developing sufficiently informative study reports.


British Journal of General Practice | 2012

Barriers to GPs' use of evidence-based medicine: a systematic review

Sandra Zwolsman; Ellen te Pas; Lotty Hooft; Margreet Wieringa-de Waard; Nynke van Dijk

BACKGROUND GPs report various barriers to the use and practice of evidence-based medicine (EBM). A review of research on these barriers may help solve problems regarding the uptake of evidence in clinical outpatient practice. AIM To determine the barriers encountered by GPs in the practice of EBM and to propose solutions to the barriers identified. DESIGN A systematic review of the literature. METHOD The following databases were searched up to February 2011: MEDLINE (PubMed), Embase, CINAHL, ERIC, and the Cochrane Library. Primary studies (all methods, all languages) that explore the barriers that GPs encounter in the practice of EBM were included. RESULTS A total of 14 700 articles were identified, of which 22 fulfilled all inclusion criteria. Of the latter, nine used qualitative, 12 used quantitative, and one used both qualitative and quantitative research methods. The barriers described in the articles fall into the following categories: the evidence itself (including the accompanying EBM steps), the GP's preferences (experience, expertise, education), and the patient's preferences. The particular GP setting also presents important barriers to the use of EBM. Barriers found in this review include, among others, lack of time, EBM skills, and available evidence; patient-related factors; and the attitude of the GP. CONCLUSION Various barriers are encountered when using EBM in GP practice. Interventions that help GPs to overcome these barriers are needed, both within EBM education and in clinical practice.


European Urology | 2017

Comparing Three Different Techniques for Magnetic Resonance Imaging-targeted Prostate Biopsies: A Systematic Review of In-bore versus Magnetic Resonance Imaging-transrectal Ultrasound fusion versus Cognitive Registration. Is There a Preferred Technique?

O. Wegelin; Harm H.E. van Melick; Lotty Hooft; J.L.H. Ruud Bosch; Hans Reitsma; Jelle O. Barentsz; Diederik M. Somford

CONTEXT The introduction of magnetic resonance imaging-guided biopsies (MRI-GB) has changed the paradigm concerning prostate biopsies. Three techniques of MRI-GB are available: (1) in-bore MRI target biopsy (MRI-TB), (2) MRI-transrectal ultrasound fusion (FUS-TB), and (3) cognitive registration (COG-TB). OBJECTIVE To evaluate whether MRI-GB has increased detection rates of (clinically significant) prostate cancer (PCa) compared with transrectal ultrasound-guided biopsy (TRUS-GB) in patients at risk for PCa, and which technique of MRI-GB has the highest detection rate of (clinically significant) PCa. EVIDENCE ACQUISITION We performed a literature search in the PubMed, Embase, and CENTRAL databases. Studies were evaluated using the Quality Assessment of Diagnostic Accuracy Studies-2 checklist and STARD recommendations. The initial search identified 2562 studies, of which 43 were included in the meta-analysis. EVIDENCE SYNTHESIS Among the included studies, 11 used MRI-TB, 17 used FUS-TB, 11 used COG-TB, and four used a combination of techniques. In 34 studies concurrent TRUS-GB was performed. There was no significant difference between MRI-GB (all techniques combined) and TRUS-GB for overall PCa detection (relative risk [RR] 0.97 [0.90-1.07]). MRI-GB had higher detection rates of clinically significant PCa (csPCa) compared with TRUS-GB (RR 1.16 [1.02-1.32]) and a lower yield of insignificant PCa (RR 0.47 [0.35-0.63]). There was a significant advantage (p = 0.02) of MRI-TB compared with COG-TB for overall PCa detection. For overall PCa detection there was no significant advantage of MRI-TB compared with FUS-TB (p = 0.13), nor of FUS-TB compared with COG-TB (p = 0.11). For csPCa detection there was no significant advantage of any one technique of MRI-GB. The impact of lesion characteristics such as size and localisation could not be assessed.
CONCLUSIONS MRI-GB had overall PCa detection rates similar to those of TRUS-GB, increased rates of csPCa, and decreased rates of insignificant PCa. MRI-TB has superior overall PCa detection compared with COG-TB. FUS-TB and MRI-TB appear to have similar detection rates. Head-to-head comparisons of MRI-GB techniques are limited and are needed to confirm our findings. PATIENT SUMMARY Our review shows that magnetic resonance imaging-guided biopsy detects more clinically significant prostate cancer (PCa) and fewer insignificant PCa cases than systematic biopsy in men at risk for PCa.


Radiology | 2013

Overinterpretation and Misreporting of Diagnostic Accuracy Studies: Evidence of “Spin”

Eleanor A. Ochodo; Margriet C. de Haan; Johannes B. Reitsma; Lotty Hooft; Patrick M. Bossuyt; Mariska M.G. Leeflang

PURPOSE To estimate the frequency of distorted presentation and overinterpretation of results in diagnostic accuracy studies. MATERIALS AND METHODS MEDLINE was searched for diagnostic accuracy studies published between January and June 2010 in journals with an impact factor of 4 or higher. Articles included were primary studies of the accuracy of one or more tests in which the results were compared with a clinical reference standard. Two authors scored each article independently by using a pretested data-extraction form to identify actual overinterpretation and practices that facilitate overinterpretation, such as incomplete reporting of study methods or the use of inappropriate methods (potential overinterpretation). The frequency of overinterpretation was estimated in all studies and in a subgroup of imaging studies. RESULTS Of the 126 articles, 39 (31%; 95% confidence interval [CI]: 23%, 39%) contained a form of actual overinterpretation, including 29 (23%; 95% CI: 16%, 30%) with an overly optimistic abstract, 10 (8%; 95% CI: 3%, 13%) with a discrepancy between the study aim and conclusion, and eight with conclusions based on selected subgroups. In our analysis of potential overinterpretation, authors of 89% (95% CI: 83%, 94%) of the studies did not include a sample size calculation, 88% (95% CI: 82%, 94%) did not state a test hypothesis, and 57% (95% CI: 48%, 66%) did not report CIs of accuracy measurements. In 43% (95% CI: 34%, 52%) of studies, authors were unclear about the intended role of the test, and in 3% (95% CI: 0%, 6%) they used inappropriate statistical tests. A subgroup analysis of imaging studies showed that 16 (30%; 95% CI: 17%, 43%) and 53 (100%; 95% CI: 92%, 100%) contained forms of actual and potential overinterpretation, respectively. CONCLUSION Overinterpretation and misreporting of results in diagnostic accuracy studies are frequent in journals with high impact factors.
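The confidence intervals quoted above are consistent with a normal-approximation (Wald) interval for a proportion; for example, for the 39 of 126 articles with actual overinterpretation:

```python
import math

def wald_ci(k, n, z=1.96):
    """Normal-approximation 95% CI for a proportion k out of n."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

lo, hi = wald_ci(39, 126)
print(f"{39/126:.0%} (95% CI: {lo:.0%}, {hi:.0%})")  # 31% (95% CI: 23%, 39%)
```

The Wald interval is a reasonable sketch here; for proportions near 0% or 100% (such as the 100% subgroup result), exact or Wilson intervals behave better.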
SUPPLEMENTAL MATERIAL http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.12120527/-/DC1.


BMC Medical Research Methodology | 2014

Investigation of publication bias in meta-analyses of diagnostic test accuracy: a meta-epidemiological study

W. Annefloor van Enst; Eleanor A. Ochodo; Rob J. P. M. Scholten; Lotty Hooft; Mariska M.G. Leeflang

Background: The validity of a meta-analysis can be better understood in light of the possible impact of publication bias. The majority of the methods to investigate publication bias in terms of small-study effects were developed for meta-analyses of intervention studies, leaving authors of diagnostic test accuracy (DTA) systematic reviews with limited guidance. The aim of this study was to evaluate if and how publication bias was assessed in meta-analyses of DTA, and to compare the results of various statistical methods used to assess publication bias. Methods: A systematic search was initiated to identify DTA reviews with a meta-analysis published between September 2011 and January 2012. We extracted all information about publication bias from the reviews and the two-by-two tables. Existing statistical methods for the detection of publication bias were applied to data from the included studies. Results: Out of 1,335 references, 114 reviews could be included. Publication bias was explicitly mentioned in 75 reviews (65.8%), and 47 of these had applied statistical methods to investigate publication bias in terms of small-study effects: 6 by drawing funnel plots, 16 by statistical testing, and 25 by applying both methods. The applied tests were Egger’s test (n = 18), Deeks’ test (n = 12), Begg’s test (n = 5), both the Egger and Begg tests (n = 4), and other tests (n = 2). Our own comparison of the results of Begg’s, Egger’s and Deeks’ tests for 92 meta-analyses indicated that up to 34% of the results did not correspond with one another. Conclusions: The majority of DTA review authors mention or investigate publication bias. They mainly use suboptimal methods, such as the Begg and Egger tests, that were not developed for DTA meta-analyses. Our comparison of the Begg, Egger and Deeks tests indicated that these tests give different results and thus are not interchangeable. Deeks’ test was developed for DTA meta-analyses and should be preferred.
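Deeks' test, preferred in the conclusion, is a weighted regression of the log diagnostic odds ratio (DOR) on the inverse square root of the effective sample size (ESS), with ESS as the weight; a slope significantly different from zero (conventionally p < 0.10) suggests small-study effects. A minimal sketch with fabricated 2x2 tables (the t-test on the slope is omitted for brevity):

```python
import math

def deeks_regression(tables):
    """Deeks' funnel-plot asymmetry regression for DTA meta-analysis:
    weighted least squares of ln(DOR) on 1/sqrt(ESS), weights = ESS.
    Returns the slope; a slope far from zero suggests asymmetry."""
    xs, ys, ws = [], [], []
    for tp, fp, fn, tn in tables:
        # 0.5 continuity correction guards against zero cells
        tp, fp, fn, tn = tp + 0.5, fp + 0.5, fn + 0.5, tn + 0.5
        ln_dor = math.log((tp * tn) / (fp * fn))
        n_dis, n_nondis = tp + fn, fp + tn
        ess = 4 * n_dis * n_nondis / (n_dis + n_nondis)
        xs.append(1 / math.sqrt(ess))
        ys.append(ln_dor)
        ws.append(ess)
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    slope = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    return slope

# Fabricated 2x2 tables: (TP, FP, FN, TN) per primary study
tables = [(90, 10, 10, 90), (45, 8, 5, 42), (20, 6, 4, 15)]
print(deeks_regression(tables))
```

Unlike the Begg and Egger tests, this formulation accounts for the double binomial structure of accuracy data, which is why it behaves better for DTA meta-analyses.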
