Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Aneesa Motala is active.

Publication


Featured research published by Aneesa Motala.


Annals of Internal Medicine | 2014

Comparative Effectiveness of Pharmacologic Treatments to Prevent Fractures: An Updated Systematic Review

Carolyn J. Crandall; Sydne Newberry; Allison Diamant; Yee-Wei Lim; Marika Booth; Aneesa Motala; Paul G. Shekelle

Osteoporosis is a skeletal disorder characterized by compromised bone strength, increasing the risk for fracture (1). Risk factors include, but are not limited to, increasing age, female sex, postmenopause for women, low body weight, parental history of a hip fracture, cigarette smoking, race, hypogonadism, certain medical conditions (particularly rheumatoid arthritis), and certain medications for chronic diseases (such as glucocorticoids). During one's expected remaining life, 1 in 2 postmenopausal women and 1 in 5 older men are at risk for an osteoporosis-related fracture (2). The increasing prevalence and cost of osteoporosis have heightened interest in the effectiveness and safety of the many interventions currently available to prevent osteoporotic fracture. In 2007, we conducted a systematic review of the comparative effectiveness of treatments to prevent fractures in men and women with low bone density or osteoporosis (3, 4). Since that time, new drugs have been approved for treatment, and new studies have been published about existing drugs. Additional issues about pharmacologic treatments for osteoporosis that have become particularly salient include the optimal duration of therapy; the safety of long-term therapy; and the role of bone mineral density (BMD) measurement, both for screening and for monitoring treatment. Therefore, we updated our original systematic review.

Methods: This article is a condensed and further updated version of an evidence review conducted for the Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Centers program (5). This article focuses on the comparative benefits and risks of short- and long-term pharmacologic treatments for low bone density. In addition, we address issues regarding monitoring and duration of therapy. For this updated review, we followed the same methods as our 2007 review, with a few exceptions. A protocol for this review was developed and posted on the Effective Health Care Program Web site (6).

Data Sources and Searches: We searched MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials, the Cochrane Database of Systematic Reviews, the ACP Journal Club database, the National Institute for Clinical Excellence, the Food and Drug Administration's (FDA) MedWatch database, and relevant pharmacologic databases from 2 January 2005 to 3 June 2011. The search strategy followed that of the original report, with the addition of terms for new FDA-approved drugs (such as denosumab) and newly reported adverse events. The full search strategies are in our evidence report (5). We later updated this search to 21 January 2013 and used a machine learning method that a previous study showed had high sensitivity for detecting relevant evidence for updating a search of the literature on osteoporosis treatments (7) and then updated the searches to 4 March 2014 using the full search strategy.

Study Selection: Eligible studies were systematic reviews and randomized, controlled trials (RCTs) that studied FDA-approved pharmacotherapy (excluding calcitonin and etidronate) for women or men with osteoporosis that was not due to a secondary cause (such as glucocorticoid therapy and androgen-deprivation therapy) and also measured fractures as an outcome at a minimum follow-up of 6 months. In addition, we included observational studies with more than 1000 participants for adverse events and case reports for rare events. As in our original review, only English-language studies were included.
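The update searches described above were date-bounded runs of the original strategies; the full strategies are in the authors' evidence report. As a minimal sketch only, the snippet below shows how a date-restricted PubMed query could be issued programmatically with Biopython's Entrez client. The query string and contact address are illustrative placeholders, not the review's actual search strategy.

```python
# A minimal sketch of a date-bounded PubMed "update" search, assuming the
# Biopython Entrez client; the query and email are illustrative placeholders.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI asks for a contact address

query = "(osteoporosis OR 'low bone density') AND (alendronate OR denosumab)"

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    datetype="pdat",        # restrict by publication date
    mindate="2005/01/02",   # start of the update window
    maxdate="2011/06/03",   # end of the update window
    retmax=100000,
)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} citations retrieved")
pmids = record["IdList"]  # PubMed IDs to feed into title/abstract screening
```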
Data Extraction and Quality Assessment: Reviews were done in duplicate by pairs of reviewers. Study characteristics were extracted in duplicate, and outcomes data (both benefits and harms) were extracted by the study statistician. Study quality was assessed as it was in the 2007 report using the Jadad scale for clinical trials (with several questions added to assess allocation concealment and other factors) and the Newcastle-Ottawa Scale for observational studies (8, 9). Systematic reviews were assessed using a modified version of the 11 AMSTAR (A Measurement Tool to Assess Systematic Reviews) criteria (the modifications included eliminating the requirements to list all of the excluded studies and assess the conflicts of interest for all of the included studies) (10). The assessments of efficacy and effectiveness used reduction in fracture (all, vertebral, nonvertebral, spine, hip, wrist, or other) as the outcome (studies reporting changes in BMD but not fracture were excluded).

Data Synthesis and Analysis: Evidence on efficacy and effectiveness was synthesized narratively. For adverse events, we pooled data as in the 2007 report: We compared agent versus placebo and agent versus agent for agents within the same class and across classes. For groups of events that occurred in 3 or more trials, we estimated the pooled odds ratio (OR) and its associated 95% CI. Because many events were rare, we used exact conditional inference to perform the pooling rather than applying the usual asymptotic methods that assume normality. StatXact PROCs software was used for the analysis (11, 12). Large cohort and case-control studies were included to assess adverse events. Strength of evidence was assessed using the criteria of the Agency for Healthcare Research and Quality Evidence-based Practice Centers program, which are similar to those proposed by the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group (13).

Role of the Funding Source: The update that included studies identified in the 3 June 2011 search was funded by AHRQ. Subsequent updating received no external funding. Although AHRQ formulated the initial study questions for the original report, it did not participate in the literature search, determination of study eligibility criteria, data analysis, or interpretation of the data. Staff from AHRQ reviewed and provided comments on the report.

Results: The first search yielded 26366 titles, 2440 of which were considered potentially relevant (Figure). Of these, 661 full-text articles were reviewed, resulting in 255 articles that were included in the update report. Of these, 174 articles were relevant to this article. The second update search plus hand searching initially yielded 16589 titles, and machine learning and full-text review identified 107 as relevant. The third update yielded 12131 titles. After title, abstract, and full-text screening, 34 were relevant. Thus, 55086 titles were screened and 315 articles met eligibility criteria for inclusion. Not every eligible study is cited in this article. A complete list of studies that met eligibility criteria is available at www.rand.org/health/centers/epc.

Figure. Summary of evidence search and selection. FRAX = Fracture Risk Assessment Tool; HRT = hormone replacement therapy; LBD = low bone density. *Original LBD report (4).
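The adverse-event analysis above pooled odds ratios across trials using exact conditional inference (StatXact) because events were rare. As a rough illustration of the pooling step only, the sketch below uses the ordinary Mantel-Haenszel estimator from statsmodels on made-up 2x2 tables; it is not the exact conditional method the authors applied.

```python
# Illustrative pooling of a rare adverse event across trials with the
# Mantel-Haenszel estimator; the review used exact conditional inference,
# which this sketch does not reproduce. Counts are made up.
# Rows = drug / placebo, columns = event / no event.
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

trials = [
    np.array([[4, 996], [1, 999]]),
    np.array([[2, 1498], [3, 1497]]),
    np.array([[6, 2494], [2, 2498]]),
]

pooled = StratifiedTable(trials)
or_hat = pooled.oddsratio_pooled
lo, hi = pooled.oddsratio_pooled_confint(alpha=0.05)
print(f"Pooled OR = {or_hat:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```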
Fracture Prevention: Our previous review (3) identified 76 randomized trials and 24 meta-analyses and concluded that there was good-quality evidence that alendronate, etidronate, ibandronate, risedronate, zoledronic acid, estrogen, parathyroid hormone, and raloxifene prevented osteoporotic fractures, although not all of these agents prevented hip fractures. The principal new efficacy findings since that time are additional data about zoledronic acid and data about a new agent, denosumab (Tables 1 and 2). The data for zoledronic acid came from 6 placebo-controlled studies of various doses in postmenopausal women (14-19), the 2 largest of which enrolled 7230 women (15) and 2127 women (14). Both studies showed statistically significant reductions in nearly all types of fractures assessed, with relative risk reductions ranging from 0.23 to 0.73 at time points from 24 to 36 months after initiation of treatment. The data for denosumab came from 2 placebo-controlled trials in postmenopausal women, one small (332 enrolled women) (20) and one much larger that followed 7521 women for 36 months (21). This latter study found statistically significant reductions in each anatomical fracture type measured (hip, nonvertebral, vertebral, and new clinical vertebral), with hazard ratios of 0.31 to 0.80. Many secondary analyses and open-label extension results of this trial report the effectiveness of denosumab in various subpopulations and other circumstances (22-28).

Table 1. Principal Conclusions About Drug Efficacy/Effectiveness and Adverse Events

Table 2. Principal Conclusions About Monitoring and Treatment Duration

Despite some difficulties in comparing results across trials because of differences in the outcomes reported, high-strength evidence shows that bisphosphonates (alendronate, ibandronate, risedronate, and zoledronic acid), denosumab, and teriparatide (the 1-34 amino acid fragment of the parathyroid hormone) reduce fractures compared with placebo in postmenopausal women with osteoporosis, with relative risks for fractures generally in the range of 0.40 to 0.60 for vertebral fractures and 0.60 to 0.80 for nonvertebral fractures. This range translates into a number needed to treat of 60 to 89 to prevent 1 vertebral fracture and 50 to 67 to prevent 1 hip fracture over 1 to 3 years of treatment, using a pooled average of the incidence of these fractures in the placebo groups from included studies. The effect of ibandronate on hip fracture risk reduction is unclear because hip fracture was not a separately reported outcome in placebo-controlled trials of this agent. The selective estrogen receptor modulator raloxifene has been shown in placebo-controlled trials to reduce only vertebral fractures; reduction in the risk for hip or nonvertebral fractures was not statistically significant. There is only one randomized, controlled trial of men with osteoporosis that was designed with a primary fracture reduction outcome. Nearly 1200 men with osteoporosis were randomly assigned to placebo or zoledronic acid intravenously once per year for 2 years. At follow-up, 1.6% of treated men had new radio
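The passage above converts relative risks into numbers needed to treat (NNT) using pooled placebo-group incidences from the included studies. A minimal sketch of that arithmetic, with an illustrative (assumed) control-group incidence rather than the review's pooled value:

```python
# A sketch of the arithmetic linking relative risk to number needed to treat.
# The placebo-group incidence below is an illustrative assumption, not the
# pooled value from the review.
def number_needed_to_treat(control_event_rate: float, relative_risk: float) -> float:
    """NNT = 1 / absolute risk reduction = 1 / (CER * (1 - RR))."""
    absolute_risk_reduction = control_event_rate * (1.0 - relative_risk)
    return 1.0 / absolute_risk_reduction

# e.g., a 3% placebo-group vertebral fracture incidence and RR = 0.5
print(round(number_needed_to_treat(0.03, 0.5)))  # -> 67 treated per fracture prevented
```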


BMJ Quality & Safety | 2015

Development of the Quality Improvement Minimum Quality Criteria Set (QI-MQCS): a tool for critical appraisal of quality improvement intervention publications

Susanne Hempel; Paul G. Shekelle; Jodi L Liu; Margie Sherwood Danz; Robbie Foy; Yee-Wei Lim; Aneesa Motala; Lisa V. Rubenstein

Objective: Valid, reliable critical appraisal tools advance quality improvement (QI) intervention impacts by helping stakeholders identify higher quality studies. QI approaches are diverse and differ from clinical interventions. Widely used critical appraisal instruments do not take unique QI features into account and existing QI tools (eg, Standards for QI Reporting Excellence) are intended for publication guidance rather than critical appraisal. This study developed and psychometrically tested a critical appraisal instrument, the QI Minimum Quality Criteria Set (QI-MQCS), for assessing QI-specific features of QI publications.

Methods: Approaches to developing the tool and ensuring validity included a literature review, in-person and online survey expert panel input, and application to empirical examples. We investigated psychometric properties in a set of diverse QI publications (N=54) by analysing reliability measures and item endorsement rates and explored sources of disagreement between reviewers.

Results: The QI-MQCS includes 16 content domains to evaluate QI intervention publications: Organisational Motivation, Intervention Rationale, Intervention Description, Organisational Characteristics, Implementation, Study Design, Comparator Description, Data Sources, Timing, Adherence/Fidelity, Health Outcomes, Organisational Readiness, Penetration/Reach, Sustainability, Spread, and Limitations. Median inter-rater agreement for QI-MQCS items was κ 0.57 (83% agreement). Item statistics indicated sufficient ability to differentiate between publications (median quality criteria met 67%). Internal consistency measures indicated coherence without excessive conceptual overlap (absolute mean interitem correlation=0.19). The critical appraisal instrument is accompanied by a user manual detailing 'What to consider', 'Where to look' and 'How to rate'.

Conclusions: We developed a ready-to-use, valid and reliable critical appraisal instrument applicable to healthcare QI intervention publications, but recognise scope for continuing refinement.
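For readers interested in the psychometric summary (item endorsement rates and the absolute mean inter-item correlation of 0.19), below is a small sketch of how such item statistics can be computed from a publications-by-domains matrix of 0/1 ratings. The matrix is randomly generated toy data, not the study's ratings.

```python
# Item statistics for a critical appraisal instrument, computed from a
# hypothetical publications-by-items matrix (1 = criterion met). Toy data only.
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.integers(0, 2, size=(54, 16))  # 54 publications x 16 domains

# Item endorsement rate: share of publications meeting each criterion.
endorsement = ratings.mean(axis=0)
print("median endorsement rate:", np.median(endorsement))

# Absolute mean inter-item correlation (off-diagonal entries only).
corr = np.corrcoef(ratings, rowvar=False)
off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
print("absolute mean inter-item correlation:", np.abs(off_diag).mean())
```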


Systematic Reviews | 2014

Assessment of a method to detect signals for updating systematic reviews

Paul G. Shekelle; Aneesa Motala; Breanne Johnsen; Sydne Newberry

Background: Systematic reviews are a cornerstone of evidence-based medicine but are useful only if up-to-date. Methods for detecting signals of when a systematic review needs updating have face validity, but no proposed method has had an assessment of predictive validity performed.

Methods: The AHRQ Comparative Effectiveness Review program had produced 13 comparative effectiveness reviews (CERs), a subcategory of systematic reviews, by 2009, 11 of which were assessed in 2009 using a surveillance system to determine the degree to which individual conclusions were out of date and to assign a priority for updating each report. Four CERs were judged to be a high priority for updating, four CERs were judged to be medium priority for updating, and three CERs were judged to be low priority for updating. AHRQ then commissioned full update reviews for 9 of these 11 CERs. Where possible, we matched the original conclusions with their corresponding conclusions in the update reports, and compared the congruence between these pairs with our original predictions about which conclusions in each CER remained valid. We then classified the concordance of each pair as good, fair, or poor. We also made a summary determination of the priority for updating each CER based on the actual changes in conclusions in the updated report, and compared these determinations with the earlier assessments of priority.

Results: The 9 CERs included 149 individual conclusions, 84% with matches in the update reports. Across reports, 83% of matched conclusions had good concordance, and 99% had good or fair concordance. The one instance of poor concordance was partially attributable to the publication of new evidence after the surveillance signal searches had been done. Both CERs originally judged as being low priority for updating had no substantive changes to their conclusions in the actual updated report. The agreement on overall priority for updating between prediction and actual changes to conclusions was Kappa = 0.74.

Conclusions: These results provide some support for the validity of a surveillance system for detecting signals indicating when a systematic review needs updating.
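The overall agreement between predicted and actual updating priority is reported as Kappa = 0.74. A minimal sketch of computing Cohen's kappa for such a comparison with scikit-learn; the priority labels below are illustrative, not the study's data.

```python
# Agreement between predicted and actual updating priority, one label per CER.
# The labels below are illustrative placeholders.
from sklearn.metrics import cohen_kappa_score

predicted_priority = ["high", "high", "medium", "medium", "medium", "low", "low", "high", "medium"]
actual_priority    = ["high", "high", "medium", "low",    "medium", "low", "low", "high", "medium"]

kappa = cohen_kappa_score(predicted_priority, actual_priority)
print(f"Cohen's kappa = {kappa:.2f}")
```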


Medical Decision Making | 2013

A Pilot Study Using Machine Learning and Domain Knowledge To Facilitate Comparative Effectiveness Review Updating

Siddhartha R Dalal; Paul G. Shekelle; Susanne Hempel; Sydne Newberry; Aneesa Motala; Kanaka D Shetty

Background. Comparative effectiveness and systematic reviews require frequent and time-consuming updating. Results of earlier screening should be useful in reducing the effort needed to screen relevant articles. Methods. We collected 16,707 PubMed citation classification decisions from 2 comparative effectiveness reviews: interventions to prevent fractures in low bone density (LBD) and off-label uses of atypical antipsychotic drugs (AAP). We used previously written search strategies to guide extraction of a limited number of explanatory variables pertaining to the intervention, outcome, and study design. We empirically derived statistical models (based on a sparse generalized linear model with convex penalties [GLMnet] and a gradient boosting machine [GBM]) that predicted article relevance. We evaluated model sensitivity, positive predictive value (PPV), and screening workload reductions using 11,003 PubMed citations retrieved for the LBD and AAP updates. Results. GLMnet-based models performed slightly better than GBM-based models. When attempting to maximize sensitivity for all relevant articles, GLMnet-based models achieved high sensitivities (0.99 and 1.0 for AAP and LBD, respectively) while reducing projected screening by 55.4% and 63.2%. The GLMnet-based model yielded sensitivities of 0.921 and 0.905 and PPVs of 0.185 and 0.102 when predicting articles relevant to the AAP and LBD efficacy/effectiveness analyses, respectively (using a threshold of P ≥ 0.02). GLMnet performed better when identifying adverse effect relevant articles for the AAP review (sensitivity = 0.981) than for the LBD review (0.685). The system currently requires MEDLINE-indexed articles. Conclusions. We evaluated statistical classifiers that used previous classification decisions and explanatory variables derived from MEDLINE indexing terms to predict inclusion decisions. This pilot system reduced workload associated with screening 2 simulated comparative effectiveness review updates by more than 50% with minimal loss of relevant articles.
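A hedged sketch of the general approach described above: an L1-penalised logistic regression (in the spirit of the GLMnet-based model) over binary indexing-term features, with a deliberately low probability threshold to favour sensitivity. The features, labels, and train/test split below are synthetic placeholders, not the review data.

```python
# Screening prioritisation sketch: L1-penalised logistic regression over
# binary indexing-term features, scored with a low probability threshold to
# keep sensitivity high. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(2000, 300)).astype(float)                # citations x MeSH-style terms
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, 2000) > 3).astype(int)   # toy relevance labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X_train, y_train)

threshold = 0.02                          # low cut-off: flag anything plausibly relevant
prob = model.predict_proba(X_test)[:, 1]
flagged = prob >= threshold

sensitivity = (flagged & (y_test == 1)).sum() / (y_test == 1).sum()
ppv = (flagged & (y_test == 1)).sum() / max(flagged.sum(), 1)
workload_reduction = 1 - flagged.mean()
print(f"sensitivity={sensitivity:.2f} PPV={ppv:.2f} workload reduction={workload_reduction:.1%}")
```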


BMJ Quality & Safety | 2013

Incorporating evidence review into quality improvement: meeting the needs of innovators

Margie Sherwood Danz; Susanne Hempel; Yee-Wei Lim; Roberta Shanman; Aneesa Motala; Susan Stockdale; Paul G. Shekelle; Lisa V. Rubenstein

Background: Achieving quality improvement (QI) aims often requires local innovation. Without objective evidence review, innovators may miss previously tested approaches, rely on biased information, or use personal preferences in designing and implementing local QI programmes.

Aim: To develop a practical, responsive approach to evidence review for QI innovations aimed at both achieving the goals of the Patient Centered Medical Home (PCMH) and developing an evidence-based QI culture.

Design: Descriptive organisational case report.

Methods: As part of a QI initiative to develop and spread innovations for achieving the Veterans Affairs (VA) PCMH (termed Patient Aligned Care Team, or PACT), we involved a professional evidence review team (consisting of review experts, an experienced librarian, and administrative support) in responding to the evidence needs of front-line primary care innovators. The review team developed a systematic approach to responsive innovation evidence review (RIER) that focused on innovator needs in terms of time frame, type of evidence and method of communicating results. To assess uptake and usefulness of the RIERs, and to learn how the content and process could be improved, we surveyed innovation leaders.

Results: In the first 16 months of the QI initiative, we produced 13 RIERs on a variety of topics. These were presented as 6–15-page summaries and as slides at a QI collaborative. The RIERs focused on innovator needs (eg, topic overviews, how innovations are carried out, or contextual factors relevant to implementation). All 17 innovators who responded to the survey had read at least one RIER; 50% rated the reviews as very useful and 31%, as probably useful.

Conclusions: These responsive evidence reviews appear to be a promising approach to integrating evidence review into QI processes.


Journal of Addiction Medicine | 2017

Mindfulness-based Relapse Prevention for Substance Use Disorders: A Systematic Review and Meta-analysis

Sean Grant; Benjamin Colaiaco; Aneesa Motala; Roberta Shanman; Marika Booth; Melony E. Sorbero; Susanne Hempel

Objectives: Substance use disorder (SUD) is a prevalent health issue with serious personal and societal consequences. This review aims to estimate the effects and safety of Mindfulness-based Relapse Prevention (MBRP) for SUDs. Methods: We searched electronic databases for randomized controlled trials evaluating MBRP for adult patients diagnosed with SUDs. Two reviewers independently assessed citations, extracted trial data, and assessed risks of bias. We conducted random-effects meta-analyses and assessed quality of the body of evidence (QoE) using the Grading of Recommendations Assessment, Development, and Evaluation approach. Results: We identified 9 randomized controlled trials comprising 901 participants. We did not detect statistically significant differences between MBRP and comparators on relapse (odds ratio [OR] 0.72, 95% confidence interval [CI] 0.46–1.13, low QoE), frequency of use (standardized mean difference [SMD] 0.02, 95% CI −0.40 to 0.44, low QoE), treatment dropout (OR 0.81, 95% CI 0.40 to 1.62, very low QoE), depressive symptoms (SMD −0.09, 95% CI −0.39 to 0.21, low QoE), anxiety symptoms (SMD −0.32, 95% CI −1.16 to 0.52, very low QoE), and mindfulness (SMD −0.28, 95% CI −0.72 to 0.16, very low QoE). We identified significant differences in favor of MBRP on withdrawal/craving symptoms (SMD −0.13, 95% CI −0.19 to −0.08, I2 = 0%, low QoE) and negative consequences of substance use (SMD −0.23, 95% CI −0.39 to −0.07, I2 = 0%, low QoE). We found negligible evidence of adverse events. Conclusions: We have limited confidence in estimates suggesting MBRP yields small effects on withdrawal/craving and negative consequences versus comparator interventions. We did not detect differences for any other outcome. Future trials should aim to minimize participant attrition to improve confidence in effect estimates.
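The pooled effects above come from random-effects meta-analyses of standardized mean differences. A minimal DerSimonian-Laird sketch of that pooling, using made-up per-trial SMDs and standard errors rather than the review's extracted data:

```python
# DerSimonian-Laird random-effects pooling of standardised mean differences.
# Per-trial estimates and standard errors are toy values.
import numpy as np

smd = np.array([-0.20, 0.10, -0.05, -0.30])   # per-trial SMDs
se = np.array([0.15, 0.20, 0.12, 0.25])       # their standard errors

w_fixed = 1 / se**2
theta_fixed = np.sum(w_fixed * smd) / np.sum(w_fixed)

# Between-trial heterogeneity (tau^2) via the DerSimonian-Laird estimator.
q = np.sum(w_fixed * (smd - theta_fixed) ** 2)
df = len(smd) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

w_random = 1 / (se**2 + tau2)
theta = np.sum(w_random * smd) / np.sum(w_random)
se_theta = np.sqrt(1 / np.sum(w_random))
print(f"pooled SMD = {theta:.2f} "
      f"(95% CI {theta - 1.96 * se_theta:.2f} to {theta + 1.96 * se_theta:.2f})")
```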


Drug and Alcohol Dependence | 2016

Acupuncture for substance use disorders: A systematic review and meta-analysis

Sean Grant; Ryan Kandrack; Aneesa Motala; Roberta Shanman; Marika Booth; Jeffrey Miles; Melony E. Sorbero; Susanne Hempel

BACKGROUND This systematic review aims to estimate the effects of acupuncture for adults with substance use disorders (SUDs). METHODS We searched 7 electronic databases and bibliographies of previous studies to identify eligible randomized trials. Two independent reviewers screened citations, extracted data, and assessed risks of bias. We performed random effects meta-analyses. We assessed quality of evidence using the GRADE approach. RESULTS We included 41 studies with 5,227 participants. No significant differences were observed between acupuncture and comparators (passive controls, sham acupuncture, treatment as usual, and active interventions) at post-intervention for relapse (SMD -0.12; 95%CI -0.46 to 0.22; 10 RCTs), frequency of substance use (SMD -0.27; -2.67 to 2.13; 2 RCTs), quantity of substance use (SMD 0.01; -0.40 to 0.43; 3 RCTs), and treatment dropout (OR 0.82; 0.63 to 1.09; 22 RCTs). We identified a significant difference in favor of acupuncture versus comparators for withdrawal/craving at post-intervention (SMD -0.57, -0.93 to -0.20; 20 RCTs), but we identified evidence of publication bias. We also identified a significant difference in favor of acupuncture versus comparators for anxiety at post-intervention (SMD -0.74, -1.15 to -0.33; 6 RCTs). Results for withdrawal/craving and anxiety symptoms were not significant at longer follow-up. Safety data (12 RCTs) suggests little risk of serious adverse events, though participants may experience slight bleeding or pain at needle insertion sites. CONCLUSIONS Available evidence suggests no consistent differences between acupuncture and comparators for substance use. Results in favor of acupuncture for withdrawal/craving and anxiety symptoms are limited by low quality bodies of evidence.
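The abstract notes evidence of publication bias for the withdrawal/craving outcome without specifying the method used. One common check is Egger's regression test for funnel-plot asymmetry; the sketch below illustrates it on toy effect sizes and standard errors, not the review's data.

```python
# Egger's regression test for funnel-plot asymmetry (a common publication-bias
# check; the abstract does not state which method the authors used).
# Per-trial effect sizes and standard errors are toy values.
import numpy as np
from scipy import stats

effect = np.array([-0.90, -0.70, -0.60, -0.50, -0.40, -0.20, -0.10, 0.00])  # SMDs
se = np.array([0.40, 0.35, 0.30, 0.28, 0.22, 0.18, 0.15, 0.12])

# Regress the standardized effect on precision; a non-zero intercept
# suggests small-study effects / funnel-plot asymmetry.
fit = stats.linregress(1 / se, effect / se)
t_stat = fit.intercept / fit.intercept_stderr
p_value = 2 * stats.t.sf(abs(t_stat), df=len(effect) - 2)
print(f"Egger intercept = {fit.intercept:.2f}, p = {p_value:.3f}")
```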


Journal of Trauma & Dissociation | 2018

Acupuncture for the Treatment of Adults with Posttraumatic Stress Disorder: A Systematic Review and Meta-Analysis.

Sean Grant; Benjamin Colaiaco; Aneesa Motala; Roberta Shanman; Melony E. Sorbero; Susanne Hempel

ABSTRACT Acupuncture has been suggested as a treatment for posttraumatic stress disorder (PTSD), yet its clinical effects are unclear. This review aims to estimate effects of acupuncture on PTSD symptoms, depressive symptoms, anxiety symptoms, and sleep quality for adults with PTSD. We searched 10 databases in January 2016 to identify eligible randomized controlled trials (RCTs). We performed random effects meta-analyses and examined quality of the body of evidence (QoE) using the GRADE approach to rate confidence in meta-analytic effect estimates. Seven RCTs with 709 participants met inclusion criteria. We identified very low QoE indicating significant differences favoring acupuncture (versus any comparator) at post-intervention on PTSD symptoms (standardized mean difference [SMD] = −0.80, 95% confidence interval [CI] [−1.59, −0.01], 6 RCTs), and low QoE at longer follow-up on PTSD (SMD = −0.46, 95% CI [−0.85, −0.06], 4 RCTs) and depressive symptoms (SMD = −0.56; 95% CI [−0.88, −0.23], 4 RCTs). No significant differences were observed between acupuncture and comparators at post-intervention for depressive symptoms (SMD = −0.58, 95% CI [−1.18, 0.01], 6 RCTs, very low QoE), anxiety symptoms (SMD = −0.82, 95% CI [−2.16, 0.53], 4 RCTs, very low QoE), and sleep quality (SMD = −0.46, 95% CI [−3.95, 3.03], 2 RCTs, low QoE). Safety data (7 RCTs) suggest little risk of serious adverse events, though some participants experienced minor/moderate pain, superficial bleeding, and hematoma at needle insertion sites. To increase confidence in findings, sufficiently powered replication trials are needed that measure all relevant clinical outcomes and dedicate study resources to minimizing participant attrition.


Annals of Internal Medicine | 2017

Machine Learning Versus Standard Techniques for Updating Searches for Systematic Reviews: A Diagnostic Accuracy Study

Paul G. Shekelle; Kanaka D Shetty; Sydne Newberry; Margaret Maglione; Aneesa Motala

Background: Systematic reviews are a cornerstone of evidence-based care and a necessary foundation for care recommendations to be labeled clinical practice guidelines. However, they become outdated relatively quickly and require substantial resources to maintain relevance. One particularly time-consuming task is updating the search to identify relevant articles published since the last search. We previously tested machine-learning approaches for making screening for updating more efficient by using 2 clinical topics as examples: medications to treat low bone density and off-label use of atypical antipsychotics. We tested 2 machine-learning algorithms: a generalized linear model with convex penalties (glmnet, R Foundation for Statistical Computing) and gradient boosting machines (1). Although initial results were encouraging, these methods required fully indexed PubMed citations.

Objective: To report the preliminary results of our efforts to compare standard electronic search methods for updating with machine-learning methods that identify new evidence using only the title and abstract.

Methods: We used citations from an original review to generate machine-learning estimators that assigned each citation from an updated search a probability for being relevant to 1 or more research questions. These estimators were constructed using a bag-of-words approach in which citations from the original search were first processed into a set of word frequencies that were used to model likelihood of relevance. The final estimators relied on the support vector machine algorithm (2). To evaluate the machine-learning method, we measured its sensitivity, positive predictive value, and overall accuracy for identifying articles that would have been included and excluded by the standard approach (in which the original search was replicated from the original review's search end date to the present). Search results for the first topic, treatment of low bone density, were compared prospectively and in a blinded fashion; that is, experienced reviewers independently and concurrently reviewed citations retrieved from the entire standard search and from machine learning. Search results for the 2 other topics, treatment of gout and of osteoarthritis of the knee, were compared retrospectively in that the standard method was used for the update and the machine-learning method was used shortly thereafter to identify included articles. In all cases, we developed models and selected the final algorithm (support vector machine) using only the original search results. We then applied the final models to citations from the updated searches and calculated sensitivity, positive predictive value, and overall accuracy.

Results: For all 3 topics, the number of titles requiring human screening decreased dramatically by 67% to 83% (Table). For 2 topics, the machine-learning approach missed 1 title included as evidence in the update report (sensitivity of 97% and 91%, respectively). For the third topic, the machine-learning approach identified all titles included in the update report.

Table. Machine Learning Versus Standard Updating Methods: Efficiency and Sensitivity

Both titles missed by machine learning were peripheral to the evidence base. In the searches for treatment of low bone density, the missed article was a study about the potential overuse of dual-energy x-ray absorptiometry that was cited as peripheral evidence for the key question about this type of monitoring (3). For the searches related to gout treatment, the missed article was a research letter about the association between HLA-B*5801 and severe skin reactions with use of allopurinol (4), a subject that the original report already identified and extensively referenced. We believe that the conclusions and strength of evidence would not have changed in either report had these articles not been included.

Discussion: Machine learning shows promise for decreasing the effort involved in updating searches for systematic reviews. Our use of this method maintained an average sensitivity of 96% across 3 topics while reducing the number of titles to be screened by an average of 78% (compared with reductions between 30% and 70% noted previously) (5). We determined that the only 2 titles missed in these 3 applications were inconsequential to the conclusions or strength of evidence. Compared with previous approaches, ours has the disadvantage of requiring a preexisting systematic review. However, whereas previous studies have also used bag-of-words approaches to help screen citations for systematic reviews (using terms from each citation's title, abstract, and key words [if available]), our method uses several features helpful for updating. First, we modeled all citations included in (true positives) or excluded from (true negatives) the original review. Second, we modeled inclusion in the final report directly as opposed to attempting to model citations that passed an earlier stage of screening; this method is thus useful with relatively limited data, namely, only the original search and evidence tables from such reviews. Finally, this approach permitted manual tuning to prioritize retrieval of citations relevant to particular research questions (for example, those related to efficacy rather than to patient adherence). Additional improvements in machine learning will probably further increase sensitivity and specificity.
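As a rough sketch of the bag-of-words plus support vector machine idea described above: fit a classifier on the original review's included and excluded citations (title and abstract text) and score citations from the update search. The texts and labels below are toy placeholders; the published method also calibrated scores into probabilities of relevance, which is omitted here.

```python
# Bag-of-words + linear SVM sketch: learn from the original review's screening
# decisions, then score citations from the update search. Toy data only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

original_texts = [
    "zoledronic acid reduces vertebral fracture risk in postmenopausal women",
    "denosumab and hip fracture incidence: a randomized controlled trial",
    "dietary calcium intake survey in adolescents",
    "cost analysis of outpatient imaging services",
]
original_labels = [1, 1, 0, 0]  # 1 = included in the original review

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(original_texts, original_labels)

update_texts = [
    "alendronate versus placebo for fracture prevention: long-term follow-up",
    "hospital staffing patterns and patient satisfaction",
]
# Decision scores rank update-search citations for human screening; sensitivity,
# PPV, and accuracy would then be measured against standard screening.
for text, score in zip(update_texts, model.decision_function(update_texts)):
    print(f"{score:+.2f}  {text}")
```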


Implementation Science | 2015

The Minimum Quality Criteria Set (QI-MQCS) for critical appraisal: advancing the science of quality improvement

Lisa V. Rubenstein; Susanne Hempel; Jodi L Liu; Margie J Danz; Robbie Foy; Yee-Wei Lim; Aneesa Motala; Paul G. Shekelle

Objective Effective learning across related scientific investigations through evidence synthesis is critical to promoting evidence-based approaches to healthcare. Synthesis of findings from quality improvement intervention (QII) publications, however, poses challenges. We aimed to develop a critical appraisal instrument (the Minimum Quality Criteria Set or QI-MQCS) to promote identification, dissemination and implementation of findings from high quality QII evaluations.

Collaboration


Dive into Aneesa Motala's collaborations.

Top Co-Authors

Paul G Shekelle (VA Palo Alto Healthcare System)

Sydne J Newberry (George Washington University)

Martha Timmer (University of California)