Mark D Grant
Blue Cross Blue Shield Association
Publications
Featured research published by Mark D Grant.
Journal of Clinical Epidemiology | 2011
Rongwei Fu; Gerald Gartlehner; Mark D Grant; Tatyana Shamliyan; Art Sedrakyan; Timothy J Wilt; Lauren Griffith; Mark Oremus; Parminder Raina; Afisi Ismaila; Pasqualina Santaguida; Joseph Lau; Thomas A Trikalinos
OBJECTIVE To establish recommendations for conducting quantitative synthesis, or meta-analysis, using study-level data in comparative effectiveness reviews (CERs) for the Evidence-based Practice Center (EPC) program of the Agency for Healthcare Research and Quality. STUDY DESIGN AND SETTING We focused on recurrent issues in the EPC program; the recommendations were developed through group discussion and consensus based on current knowledge in the literature. RESULTS We first discuss considerations for deciding whether to combine studies, followed by indirect comparison and incorporation of indirect evidence. We then describe our recommendations on choosing effect measures and statistical models, with special attention to combining studies with rare events, and on testing and exploring heterogeneity. Finally, we briefly present recommendations on combining studies of mixed design and on sensitivity analysis. CONCLUSION Quantitative synthesis should be conducted in a transparent and consistent way. Inclusion of multiple alternative interventions in CERs increases the complexity of quantitative synthesis, but the basic issues remain crucial considerations in quantitative synthesis for a CER. Future versions will cover more issues and update and improve the recommendations as new research accumulates, advancing the goals of transparency and consistency.
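Several of the recommendations summarized above (choice of statistical model, combining studies, exploring heterogeneity) concern random-effects meta-analysis. As an illustration only, not the procedure prescribed by the article, a minimal DerSimonian-Laird pooling of study-level effect estimates can be sketched as follows (function and variable names are hypothetical):

```python
import math

def random_effects_pool(effects, variances):
    """Pool study-level effect estimates (e.g., log odds ratios) with the
    DerSimonian-Laird random-effects estimator.

    Returns (pooled_effect, standard_error, tau_squared, Q).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                  # inverse-variance weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q statistic, the usual heterogeneity test statistic
    Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    # Method-of-moments between-study variance, truncated at zero
    tau2 = max(0.0, (Q - (k - 1)) / (sw - sum(wi ** 2 for wi in w) / sw))
    w_star = [1.0 / (v + tau2) for v in variances]    # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2, Q
```

When Q does not exceed its degrees of freedom (k - 1), the between-study variance estimate is truncated to zero and the result coincides with a fixed-effect (inverse-variance) pooled estimate, which is one reason model choice and heterogeneity testing are treated together in such guidance.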
Journal of Clinical Epidemiology | 2011
Tatyana Shamliyan; Robert L. Kane; Mohammed T. Ansari; Gowri Raman; Nancy D Berkman; Mark D Grant; Gail Janes; Margaret Maglione; David Moher; Mona Nasser; Karen A. Robinson; Jodi B. Segal; Sophia Tsouros
OBJECTIVE To develop two checklists for assessing the quality of observational studies of disease incidence or risk factors. STUDY DESIGN AND SETTING Initial development of the checklists was based on a systematic literature review. The checklists were refined after pilot trials of validity and reliability were conducted by seven experts, who tested the checklists on 10 articles. RESULTS The checklist for studies of incidence or prevalence of chronic disease had six criteria for external validity and five for internal validity. The checklist for risk factor studies had six criteria for external validity, 13 criteria for internal validity, and two aspects of causality. A Microsoft Access database produced automated standardized reports on external and internal validity. Pilot testing demonstrated face and content validity and discrimination between reporting and methodological quality. Interrater agreement was poor. The experts suggested future reliability testing of the checklists in systematic reviews with preplanned protocols, a priori consensus about research-specific quality criteria, and training of the reviewers. CONCLUSION We propose transparent and standardized quality assessment criteria for observational studies using the developed checklists. Future testing of the checklists in systematic reviews is necessary to develop reliable tools that can be used with confidence.
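The abstract reports that interrater agreement was poor. Agreement between reviewers applying such checklists is commonly quantified with a chance-corrected statistic such as Cohen's kappa; a minimal sketch is below (the function name and the example ratings are illustrative, not data from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected
    is the agreement expected by chance from each rater's marginal rates.
    """
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

Values near zero indicate agreement no better than chance, which is the kind of result that motivates the authors' calls for preplanned protocols, a priori consensus on criteria, and reviewer training.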
Value in Health | 2014
Naomi Aronson; Mark D Grant
Comparative effectiveness research incorporates study designs extending beyond the randomized controlled trial (RCT) [1], which is the touchstone for demonstrating therapeutic efficacy. The litany of RCT flaws is often recited: not real world, not real patients, not real settings, not available, not timely, and not affordable. Although not every RCT is well designed, there are established narratives and tools to aid health care decision makers in appraising and interpreting them. The Comparative Effectiveness Research Collaborative Initiative is a collective effort among the Academy of Managed Care Pharmacy, the National Pharmaceutical Council, and the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) to provide tools for decision makers to assess studies that use nonexperimental methods important to comparative effectiveness research. To this end, ISPOR Good Practices Task Forces have developed tools to assess 1) prospective and retrospective observational studies, 2) modeling studies, and 3) network meta-analysis studies.
Archive | 2013
Mark D Grant; Margaret Piper; Julia Bohlius; Thomy Tonia; Nadège Robert; Claudia J Bonnell; Kathleen M Ziegler; Naomi Aronson
Archive | 2013
Steven Gutman; Margaret Piper; Mark D Grant; Ethan Basch; Denise M Oliansky; Naomi Aronson
American Journal of Public Health Research | 2013
Tatyana Shamliyan; Mohammed T. Ansari; Gowri Raman; Nancy D Berkman; Mark D Grant; Gail Janes; Margaret Maglione; David Moher; Mona Nasser; Karen A. Robinson; Jodi B. Segal; Sophia Tsouros
Archive | 2015
Mark D Grant; Anne Marbella; Amy T Wang; Elizabeth Pines; Jessica Hoag; Claudia J Bonnell; Kathleen M Ziegler; Naomi Aronson