Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yemisi Takwoingi is active.

Publication


Featured research published by Yemisi Takwoingi.


Allergy | 2014

The diagnosis of food allergy: a systematic review and meta-analysis

K. Soares-Weiser; Yemisi Takwoingi; Sukhmeet S Panesar; Antonella Muraro; Thomas Werfel; Karin Hoffmann-Sommergruber; Graham Roberts; Susanne Halken; Lars K. Poulsen; R. van Ree; B. J. Vlieg-Boerstra; Aziz Sheikh

We investigated the accuracy of tests used to diagnose food allergy.


Health Technology Assessment | 2010

A systematic review of positron emission tomography (PET) and positron emission tomography/computed tomography (PET/CT) for the diagnosis of breast cancer recurrence

Mary Pennant; Yemisi Takwoingi; L. Pennant; Clare Davenport; A Fry-Smith; Anne Eisinga; Lazaros Andronis; Theodoros N. Arvanitis; Jonathan J. Deeks; Chris Hyde

BACKGROUND Breast cancer (BC) accounts for one-third of all cases of cancer in women in the UK. Current strategies for the detection of BC recurrence include computed tomography (CT), magnetic resonance imaging (MRI) and bone scintigraphy. Positron emission tomography (PET) and, more recently, positron emission tomography/computed tomography (PET/CT) are technologies that have been shown to have increasing relevance in the detection and management of BC recurrence. OBJECTIVE To review the accuracy of PET and PET/CT for the diagnosis of BC recurrence by assessing their value compared with current practice and compared with each other. DATA SOURCES MEDLINE and EMBASE were searched from inception to May 2009. STUDY SELECTION Studies were included if investigations used PET or PET/CT to diagnose BC recurrence in patients with a history of BC and if the reference standard used to define the true disease status was histological diagnosis and/or long-term clinical follow-up. Studies were excluded if a non-standard PET or PET/CT technology was used, investigations were conducted for screening or staging of primary breast cancer, there was an inadequate or undefined reference standard, or raw data for calculation of diagnostic accuracy were not available. STUDY APPRAISAL Quality assessment and data extraction were performed independently by two reviewers. Direct and indirect comparisons were made between PET and PET/CT and between these technologies and methods of conventional imaging, and meta-analyses were carried out. Analysis was conducted separately on patient- and lesion-based data. Subgroup analysis was conducted to investigate variation in the accuracy of PET in certain populations or contexts and sensitivity analysis was conducted to examine the reliability of the primary outcome measures. 
RESULTS Of the 28 studies included in the review, 25 presented patient-based data and 7 presented lesion-based data for PET and 5 presented patient-based data and 1 presented patient- and lesion-based data for PET/CT; 16 studies conducted direct comparisons, with 12 comparing the accuracy of PET or PET/CT with conventional diagnostic tests and 4 with MRI. For patient-based data (direct comparison) PET had significantly higher sensitivity [89%, 95% confidence interval (CI) 83% to 93% vs 79%, 95% CI 72% to 85%, relative sensitivity 1.12, 95% CI 1.04 to 1.21, p = 0.005] and significantly higher specificity (93%, 95% CI 83% to 97% vs 83%, 95% CI 67% to 92%, relative specificity 1.12, 95% CI 1.01 to 1.24, p = 0.036) compared with conventional imaging tests (CITs); test performance did not appear to vary according to the type of CIT tested. For patient-based data (direct comparison) PET/CT had significantly higher sensitivity compared with CT (95%, 95% CI 88% to 98% vs 80%, 95% CI 65% to 90%, relative sensitivity 1.19, 95% CI 1.03 to 1.37, p = 0.015), but the increase in specificity was not significant (89%, 95% CI 69% to 97% vs 77%, 95% CI 50% to 92%, relative specificity 1.15, 95% CI 0.95 to 1.41, p = 0.157). For patient-based data (direct comparison) PET/CT had significantly higher sensitivity compared with PET (96%, 95% CI 90% to 98% vs 85%, 95% CI 77% to 91%, relative sensitivity 1.11, 95% CI 1.03 to 1.18, p = 0.006), but the increase in specificity was not significant (89%, 95% CI 74% to 96% vs 82%, 95% CI 64% to 92%, relative specificity 1.08, 95% CI 0.94 to 1.20, p = 0.267). For patient-based data there were no significant differences in the sensitivity or specificity of PET when compared with MRI, and, in the one lesion-based study, there were no significant differences in the sensitivity or specificity of PET/CT when compared with MRI. LIMITATIONS Studies reviewed were generally small and retrospective, and this may have limited the generalisability of findings.
Subgroup analysis was conducted on the whole set of studies investigating PET and was not restricted to comparative studies. Conventional imaging studies that were not compared with PET or PET/CT were excluded from the review. CONCLUSIONS Available evidence suggests that, for the detection of BC recurrence, PET in addition to conventional imaging techniques may generally offer improved diagnostic accuracy compared with current standard practice. However, uncertainty remains around its use as a replacement for, rather than an add-on to, existing imaging technologies. In addition, PET/CT appeared to show a clear advantage over CT and PET alone for the diagnosis of BC recurrence. FUTURE WORK Future research should include: prospective studies with patient populations clearly defined with regard to their clinical presentation; a study of the diagnostic accuracy of PET/CT compared with conventional imaging techniques; a study of PET/CT compared with whole-body MRI; studies investigating the possibility of using PET/CT as a replacement for, rather than an addition to, CITs; and modelling of the impact of PET/CT on patient outcomes to inform the possibility of conducting large-scale intervention trials.


Systematic Reviews | 2013

Cochrane diagnostic test accuracy reviews.

Mariska M.G. Leeflang; Jonathan J Deeks; Yemisi Takwoingi; Petra Macaskill

In 1996, shortly after the founding of The Cochrane Collaboration, leading figures in test evaluation research established a Methods Group to focus on the relatively new and rapidly evolving methods for the systematic review of studies of diagnostic tests. Seven years later, the Collaboration decided it was time to develop a publication format and methodology for Diagnostic Test Accuracy (DTA) reviews, as well as the software needed to implement these reviews in The Cochrane Library. A meeting hosted by the German Cochrane Centre in 2004 brought together key methodologists in the area, many of whom became closely involved in the subsequent development of the methodological framework for DTA reviews. DTA reviews first appeared in The Cochrane Library in 2008 and are now an integral part of the work of the Collaboration.


BMJ | 2012

Accuracy of single progesterone test to predict early pregnancy outcome in women with pain or bleeding: meta-analysis of cohort studies

J. Verhaegen; Ioannis D. Gallos; N.M. van Mello; M. Abdel-Aziz; Yemisi Takwoingi; Hoda M Harb; Jon Deeks; Ben Willem J. Mol; Arri Coomarasamy

Objective To determine the accuracy with which a single progesterone measurement in early pregnancy discriminates between viable and non-viable pregnancy. Design Systematic review and meta-analysis of diagnostic accuracy studies. Data sources Medline, Embase, CINAHL, Web of Science, ProQuest, Conference Proceedings Citation Index, and the Cochrane Library from inception until April 2012, plus reference lists of relevant studies. Study selection Studies were selected on the basis of participants (women with spontaneous pregnancy of less than 14 weeks of gestation); test (single serum progesterone measurement); outcome (viable intrauterine pregnancy, miscarriage, or ectopic pregnancy) diagnosed on the basis of combinations of pregnancy test, ultrasound scan, laparoscopy, and histological examination; design (cohort studies of test accuracy); and sufficient data being reported. Results 26 cohort studies, including 9436 pregnant women, were included, consisting of 7 studies in women with symptoms and inconclusive ultrasound assessment and 19 studies in women with symptoms alone. Among women with symptoms and inconclusive ultrasound assessments, the progesterone test (5 studies with 1998 participants and cut-off values from 3.2 to 6 ng/mL) predicted a non-viable pregnancy with pooled sensitivity of 74.6% (95% confidence interval 50.6% to 89.4%), specificity of 98.4% (90.9% to 99.7%), positive likelihood ratio of 45 (7.1 to 289), and negative likelihood ratio of 0.26 (0.12 to 0.57). The median prevalence of a non-viable pregnancy was 73.2%, and the probability of a non-viable pregnancy was raised to 99.2% if the progesterone was low. 
For women with symptoms alone, the progesterone test had a higher specificity when a threshold of 10 ng/mL was used (9 studies with 4689 participants) and predicted a non-viable pregnancy with pooled sensitivity of 66.5% (53.6% to 77.4%), specificity of 96.3% (91.1% to 98.5%), positive likelihood ratio of 18 (7.2 to 45), and negative likelihood ratio of 0.35 (0.24 to 0.50). The probability of a non-viable pregnancy was raised from 62.9% to 96.8%. Conclusion A single progesterone measurement for women in early pregnancy presenting with bleeding or pain and inconclusive ultrasound assessments can rule out a viable pregnancy.
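The jumps in probability reported above follow from Bayes' theorem applied on the odds scale: post-test odds equal pre-test odds multiplied by the likelihood ratio. A minimal sketch reproducing the abstract's arithmetic (the function name is illustrative, not the authors' code):

```python
def post_test_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Convert a pre-test probability to a post-test probability
    via the odds form of Bayes' theorem."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Symptoms plus inconclusive ultrasound: median prevalence 73.2%,
# positive likelihood ratio 45 for a low progesterone result.
print(round(100 * post_test_probability(0.732, 45), 1))  # 99.2

# Symptoms alone: prevalence 62.9%, positive likelihood ratio 18.
print(round(100 * post_test_probability(0.629, 18), 1))  # 96.8
```

Both calculations recover the post-test probabilities quoted in the abstract (99.2% and 96.8%).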


Statistical Methods in Medical Research | 2017

Performance of methods for meta-analysis of diagnostic test accuracy with few studies or sparse data

Yemisi Takwoingi; Boliang Guo; Richard D Riley; Jonathan J Deeks

Hierarchical models such as the bivariate and hierarchical summary receiver operating characteristic (HSROC) models are recommended for meta-analysis of test accuracy studies. These models are challenging to fit when there are few studies and/or sparse data (for example, zero cells in contingency tables due to studies reporting 100% sensitivity or specificity); the models may not converge, or may give unreliable parameter estimates. Using simulation, we investigated the performance of seven hierarchical models incorporating increasing simplifications in scenarios designed to replicate realistic situations for meta-analysis of test accuracy studies. Performance of the models was assessed in terms of estimability (percentage of meta-analyses that successfully converged and percentage where the between-study correlation was estimable), bias, mean square error and coverage of the 95% confidence intervals. Our results indicate that simpler hierarchical models are valid in situations with few studies or sparse data. For synthesis of sensitivity and specificity, univariate random effects logistic regression models are appropriate when a bivariate model cannot be fitted. Alternatively, an HSROC model that assumes a symmetric SROC curve (by excluding the shape parameter) can be used if the HSROC model is the chosen meta-analytic approach. In the absence of heterogeneity, fixed effect equivalents of the models can be applied.
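To illustrate the flavour of univariate pooling on the logit scale, the sketch below uses DerSimonian-Laird inverse-variance random-effects pooling of logit sensitivity with a 0.5 continuity correction for zero cells. This is a simpler approximation, not the random-effects logistic regression or hierarchical models evaluated in the paper, and the 2x2 counts are hypothetical:

```python
import math

def pool_logit_sensitivity(tp: list, fn: list) -> float:
    """DerSimonian-Laird random-effects pooling of sensitivity on the
    logit scale. A 0.5 continuity correction lets studies reporting
    100% sensitivity (a zero false-negative cell) contribute."""
    y, v = [], []
    for t, f in zip(tp, fn):
        t, f = t + 0.5, f + 0.5          # continuity correction
        y.append(math.log(t / f))        # logit sensitivity
        v.append(1 / t + 1 / f)          # within-study variance
    w = [1 / vi for vi in v]
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)   # between-study variance
    w_re = [1 / (vi + tau2) for vi in v]
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return 1 / (1 + math.exp(-pooled))        # back-transform

# Hypothetical (tp, fn) counts from four small studies,
# one of which reports 100% sensitivity:
print(round(pool_logit_sensitivity([18, 25, 9, 30], [2, 5, 0, 4]), 3))  # 0.861
```

In practice the bivariate or HSROC models the paper recommends would additionally model specificity and the between-study correlation.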


BMJ | 2016

When and how to update systematic reviews: consensus and checklist.

Paul Garner; Sally Hopewell; Jackie Chandler; Harriet MacLehose; H. J. Schünemann; Elie A. Akl; Joseph Beyene; Stephanie Chang; Rachel Churchill; K Dearness; G Guyatt; C Lefebvre; B Liles; Rachel Marshall; L Martínez García; Chris Mavergames; Mona Nasser; Amir Qaseem; Margaret Sampson; Karla Soares-Weiser; Yemisi Takwoingi; Lehana Thabane; Marialena Trivella; Peter Tugwell; Emma J Welsh; E Wilson

Updating of systematic reviews is generally more efficient than starting all over again when new evidence emerges, but to date there has been no clear guidance on how to do this. This guidance helps authors of systematic reviews, commissioners, and editors decide when to update a systematic review, and then how to go about updating the review.


JAMA | 2018

Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies: The PRISMA-DTA Statement

Matthew D. F. McInnes; David Moher; Brett D. Thombs; Trevor A. McGrath; Patrick M. Bossuyt; Tammy Clifford; Jérémie F. Cohen; Jonathan J Deeks; Constantine Gatsonis; Lotty Hooft; Harriet Hunt; Chris Hyde; Daniël A. Korevaar; Mariska M.G. Leeflang; Petra Macaskill; Johannes B. Reitsma; Rachel Rodin; Anne Ws Rutjes; Jean Paul Salameh; Adrienne Stevens; Yemisi Takwoingi; Marcello Tonelli; Laura Weeks; Penny F Whiting; Brian H. Willis

Importance Systematic reviews of diagnostic test accuracy synthesize data from primary diagnostic studies that have evaluated the accuracy of 1 or more index tests against a reference standard, provide estimates of test performance, allow comparisons of the accuracy of different tests, and facilitate the identification of sources of variability in test accuracy. Objective To develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagnostic test accuracy guideline as a stand-alone extension of the PRISMA statement. Modifications to the PRISMA statement reflect the specific requirements for reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies and the abstracts for these reviews. Design Established standards from the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network were followed for the development of the guideline. The original PRISMA statement was used as a framework on which to modify and add items. A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline. The final version of the PRISMA diagnostic test accuracy guideline checklist was approved by the group. Findings The systematic review (which produced 64 items) and the Delphi process (which provided feedback on 7 proposed items; 1 item was later split into 2 items) identified 71 potentially relevant items for consideration. The Delphi process reduced these to 60 items that were discussed at the consensus meeting. Following the meeting, pilot testing and iterative feedback were used to generate the 27-item PRISMA diagnostic test accuracy checklist.
To reflect specific or optimal contemporary systematic review methods for diagnostic test accuracy, 8 of the 27 original PRISMA items were left unchanged, 17 were modified, 2 were added, and 2 were omitted. Conclusions and Relevance The 27-item PRISMA diagnostic test accuracy checklist provides specific guidance for reporting of systematic reviews. The PRISMA diagnostic test accuracy guideline can facilitate the transparent reporting of reviews, and may assist in the evaluation of validity and applicability, enhance replicability of reviews, and make the results from systematic reviews of diagnostic test accuracy studies more useful.


Journal of Affective Disorders | 2015

Screening for bipolar spectrum disorders: A comprehensive meta-analysis of accuracy studies

André F. Carvalho; Yemisi Takwoingi; Paulo Marcelo Gondim Sales; Joanna K. Soczynska; Cristiano A. Köhler; Thiago H. Freitas; João Quevedo; Thomas Hyphantis; Roger S. McIntyre; Eduard Vieta

BACKGROUND Bipolar spectrum disorders are frequently under-recognized and/or misdiagnosed in various settings. Several influential publications recommend the routine screening of bipolar disorder. A systematic review and meta-analysis of accuracy studies for the bipolar spectrum diagnostic scale (BSDS), the hypomania checklist (HCL-32) and the mood disorder questionnaire (MDQ) were performed. METHODS The Pubmed, EMBASE, Cochrane, PsycINFO and SCOPUS databases were searched. Studies were included if the accuracy properties of the screening measures were determined against a DSM or ICD-10 structured diagnostic interview. The QUADAS-2 tool was used to rate bias. RESULTS Fifty-three original studies met inclusion criteria (N=21,542). At recommended cutoffs, summary sensitivities were 81%, 66% and 69%, while specificities were 67%, 79% and 86% for the HCL-32, MDQ, and BSDS in psychiatric services, respectively. The HCL-32 was more accurate than the MDQ for the detection of type II bipolar disorder in mental health care centers (P=0.018). At a cutoff of 7, the MDQ had a summary sensitivity of 43% and a summary specificity of 95% for detection of bipolar disorder in primary care or general population settings. LIMITATIONS Most studies were performed in mental health care settings. Several included studies had a high risk of bias. CONCLUSIONS Although accuracy properties of the three screening instruments did not consistently differ in mental health care services, the HCL-32 was more accurate than the MDQ for the detection of type II bipolar disorder. More studies in other settings (for example, in primary care) are necessary.


Journal of biometrics & biostatistics | 2014

Meta-Analysis of Test Accuracy Studies with Multiple and Missing Thresholds: A Multivariate-Normal Model

Richard D Riley; Yemisi Takwoingi; Thomas Trikalinos; Apratim Guha; Atanu Biswas; Joie Ensor; R. Katie Morris; Jonathan J. Deeks

Background: When meta-analysing studies examining the diagnostic/predictive accuracy of classifications based on a continuous test, each study may provide results for one or more thresholds, which can vary across studies. Researchers typically meta-analyse each threshold independently. We consider a multivariate meta-analysis to synthesise results for all thresholds simultaneously and account for their correlation. Methods: We assume that the logit sensitivity and logit specificity estimates follow a multivariate-normal distribution within studies. We model the true logit sensitivity (logit specificity) as monotonically decreasing (increasing) functions of the continuous threshold. This produces a summary ROC curve, a summary estimate of sensitivity and specificity for each threshold, and reveals the heterogeneity in test accuracy across studies. Application is made to 13 studies of protein:creatinine ratio (PCR) for detecting significant proteinuria in pregnancy that each report up to nine thresholds, with 23 distinct thresholds across studies. Results: In the example there were large within-study and between-study correlations, which were accounted for by the method. A cubic relationship on the logit scale was a better fit for the summary ROC curve than a linear or quadratic one. Between-study heterogeneity was substantial. Based on the summary ROC curve, a PCR value of 0.30 to 0.35 corresponded to the maximal pair of summary sensitivity and specificity. Limitations of the proposed model include the need to posit parametric functions for the relationship of sensitivity and specificity with the threshold, to ensure correct ordering of summary threshold results, and the multivariate-normal approximation to the within-study sampling distribution. Conclusion: The joint analysis of test performance data reported over multiple thresholds is feasible.
The proposed approach handles different sets of available thresholds per study, and produces a summary ROC curve and summary results for each threshold to inform decision-making.
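The summary curve described above can be evaluated once the polynomial coefficients are fitted: logit sensitivity and logit specificity are cubic functions of the threshold, constrained to be monotone. The sketch below uses hypothetical coefficients (not the fitted PCR values) purely to show the construction:

```python
import math

def logistic(x: float) -> float:
    """Inverse-logit transform."""
    return 1.0 / (1.0 + math.exp(-x))

def summary_roc(thresholds, sens_coefs, spec_coefs):
    """Evaluate a summary ROC curve where logit sensitivity and logit
    specificity are cubic polynomials in the threshold (intercept first).
    Returns (threshold, sensitivity, specificity) triples."""
    def poly(coefs, t):
        return sum(c * t ** i for i, c in enumerate(coefs))
    return [(t, logistic(poly(sens_coefs, t)), logistic(poly(spec_coefs, t)))
            for t in thresholds]

# Hypothetical cubic coefficients chosen so that, on [0, 1], the logit
# sensitivity is strictly decreasing and logit specificity strictly
# increasing in the threshold, as the model requires:
sens_c = [4.0, -9.0, 6.0, -2.0]
spec_c = [-3.0, 9.0, -6.0, 2.0]
for t, se, sp in summary_roc([0.1 * i for i in range(1, 10)], sens_c, spec_c):
    print(f"threshold {t:.1f}: sensitivity {se:.3f}, specificity {sp:.3f}")
```

A real analysis would estimate the coefficients and between-study variances jointly from the multivariate-normal likelihood; this fragment only shows how a fitted curve yields a summary sensitivity/specificity pair per threshold.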


BMJ | 2013

A multicomponent decision tool for prioritising the updating of systematic reviews

Yemisi Takwoingi; Sally Hopewell; David Tovey; Alex J. Sutton

There is no formal consensus on when to update a systematic review, and updating too frequently can be an inefficient use of resources and introduce bias. A multicomponent tool could help researchers decide when it is best to update such reviews.

Collaboration


Dive into Yemisi Takwoingi's collaborations.

Top Co-Authors

A King, University of Nottingham
Jennifer Burr, University of St Andrews
Jon Deeks, University of Birmingham
Aachal Kotecha, UCL Institute of Ophthalmology
Andrew Elders, Glasgow Caledonian University