Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Stephanie Chang is active.

Publication


Featured research published by Stephanie Chang.


Journal of Clinical Epidemiology | 2010

AHRQ Series Paper 5: Grading the strength of a body of evidence when comparing medical interventions—Agency for Healthcare Research and Quality and the Effective Health-Care Program

Douglas K Owens; Kathleen N. Lohr; David Atkins; Jonathan R Treadwell; James Reston; Eric B Bass; Stephanie Chang; Mark Helfand

OBJECTIVE To establish guidance on grading strength of evidence for the Evidence-based Practice Center (EPC) program of the U.S. Agency for Healthcare Research and Quality. STUDY DESIGN AND SETTING Authors reviewed authoritative systems for grading strength of evidence, identified domains and methods that should be considered when grading bodies of evidence in systematic reviews, considered public comments on an earlier draft, and discussed the approach with representatives of the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) working group. RESULTS The EPC approach is conceptually similar to the GRADE system of evidence rating; it requires assessment of four domains: risk of bias, consistency, directness, and precision. Additional domains to be used when appropriate include dose-response association, presence of confounders that would diminish an observed effect, strength of association, and publication bias. Strength of evidence receives a single grade: high, moderate, low, or insufficient. We give definitions, examples, mechanisms for scoring domains, and an approach for assigning strength of evidence. CONCLUSION EPCs should grade strength of evidence separately for each major outcome and, for comparative effectiveness reviews, all major comparisons. We will collaborate with the GRADE group to address ongoing challenges in assessing the strength of evidence.
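
As a reading aid, here is a minimal Python sketch of how the required domains and the single overall grade described above could be represented in code. The combination rule shown is a hypothetical simplification for illustration only; it is not the EPC scoring approach, whose actual guidance for assigning a grade is given in the paper.

```python
# Illustrative sketch only: the paper defines the four required domains and the
# four possible grades, but the rule below for combining domain ratings into a
# single grade is a hypothetical simplification, not the EPC method.
from dataclasses import dataclass

GRADES = ("high", "moderate", "low", "insufficient")

@dataclass
class DomainAssessment:
    risk_of_bias: str   # e.g. "low", "medium", "high"
    consistency: str    # "consistent", "inconsistent", "unknown"
    directness: str     # "direct", "indirect"
    precision: str      # "precise", "imprecise"

def illustrative_grade(a: DomainAssessment, n_studies: int) -> str:
    """Hypothetical heuristic: start at 'high' and step down one level for each
    unfavorable required domain; report 'insufficient' when there is no evidence."""
    if n_studies == 0:
        return "insufficient"
    downgrades = sum([
        a.risk_of_bias != "low",
        a.consistency != "consistent",
        a.directness != "direct",
        a.precision != "precise",
    ])
    return GRADES[min(downgrades, 2)]  # never below "low" unless there is no evidence

# Example: low risk of bias, consistent and direct but imprecise evidence.
print(illustrative_grade(DomainAssessment("low", "consistent", "direct", "imprecise"), 4))
# -> moderate
```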


Annals of Internal Medicine | 2006

The Efficacy and Safety of Multivitamin and Mineral Supplement Use To Prevent Cancer and Chronic Disease in Adults: A Systematic Review for a National Institutes of Health State-of-the-Science Conference

Han Yao Huang; Benjamin Caballero; Stephanie Chang; Anthony J. Alberg; Richard D. Semba; Christine Schneyer; Renee F Wilson; Ting Yuan Cheng; Jason L. Vassy; Gregory Prokopowicz; George J. Barnes; Eric B Bass

Multivitamin and mineral supplements are the most commonly used dietary supplements in the United States (1). According to the National Health and Nutrition Examination Survey 1999-2000, 35% of adults reported recent use of multivitamin supplements (1). Most persons use multivitamin and mineral supplements to ensure adequate intake and to prevent or mitigate diseases. The commonly used over-the-counter multivitamin and mineral supplements contain at least 10 vitamins and 10 minerals. Many chronic diseases share common risk factors, including cigarette smoking, unhealthy diet, sedentary lifestyle, and obesity. Important underlying mechanisms for these factors to increase risk for disease include oxidative damage, inflammation, and 1-carbon metabolism (2-7). Numerous in vitro studies and animal studies have suggested favorable effects of several vitamins and minerals on these processes and on angiogenesis, immunity, cell differentiation, proliferation, and apoptosis (8-10). The U.S. Food and Nutrition Board has established tolerable upper intake levels for several nutrients. An upper intake level is defined as the highest level of daily nutrient intake that is likely to pose no risk for adverse effects to almost all persons in the general population (11). The strength of the evidence used to determine an upper intake level depends on data availability. Hence, an update of the data on adverse effects will help researchers to evaluate the appropriateness of upper intake levels.

We performed a systematic review to synthesize the published literature on 1) the efficacy of multivitamin and mineral supplements and certain commonly used single vitamin or mineral supplements in the primary prevention of cancer and chronic disease in the general adult population and 2) the safety of multivitamin and mineral supplements and certain commonly used single vitamin or mineral supplements in the general population of adults and children (12). The review was done for a National Institutes of Health State-of-the-Science Statement for health care providers and the general public. This report is from the systematic review and focuses on 2 questions: What is the efficacy determined in randomized, controlled trials of multivitamin and mineral supplements (each at a dose less than the upper intake level) in the general adult population for the primary prevention of cancer and chronic diseases or conditions, and what is known about the safety of multivitamin and mineral supplement use in the general population of adults and children, on the basis of data from randomized, controlled trials and observational studies?

Methods: We defined multivitamin and mineral supplements as any supplements that contain 3 or more vitamins or minerals without herbs, hormones, or drugs. We defined the general population as community-dwelling persons who do not have special nutritional needs. (Examples of persons with special nutritional needs are those who are institutionalized, hospitalized, pregnant, or clinically deficient in nutrients.) A disease or condition was defined as chronic if it persists over an extended period, is not easily resolved, often cannot be cured by medication (although symptoms may be controlled or ameliorated with medication), frequently worsens over time, causes disability or impairment, and often requires ongoing medical care (13).
The following chronic diseases were considered: breast cancer, colorectal cancer, lung cancer, prostate cancer, gastric cancer, or any other cancer (including colorectal polyps); myocardial infarction, stroke, hypertension, or other cardiovascular diseases; type 2 diabetes mellitus; Parkinson disease, cognitive decline, memory loss, or dementia; cataracts, macular degeneration, or hearing loss; osteoporosis, osteopenia, rheumatoid arthritis, or osteoarthritis; nonalcoholic steatohepatitis; chronic renal insufficiency or chronic nephrolithiasis; HIV infection, hepatitis C, or tuberculosis; and chronic obstructive pulmonary disease. We focused on primary prevention trials in adults because primary prevention is the main purpose of multivitamin supplement use in the general adult population (14). Primary prevention was defined as an action taken to prevent the development of a disease in persons who are well and do not have the disease in question (15). Using this definition, we included studies for prevention of chronic disease (for example, cardiovascular disease) in persons with risk factors (for example, type 2 diabetes mellitus or hypertension) for that disease. We also included studies for prevention of malignant disorders (such as colon cancer) in persons with selected precursors of disease (such as polyps). We did not include studies in persons with carcinoma in situ or similar malignant conditions.

Literature Sources: We searched the MEDLINE, EMBASE, and Cochrane databases, including Cochrane Reviews and the Cochrane Central Register of Controlled Trials, for articles published from 1966 through February 2006. Additional articles were identified by searching references in pertinent articles, querying experts, and hand-searching the tables of contents of 15 relevant journals published from January 2005 through February 2006.

Search Terms and Strategies: We developed a core strategy for searching MEDLINE, accessed through PubMed, that was based on analysis of the Medical Subject Heading terms and text words of key articles identified a priori. This strategy formed the basis for the strategies developed for the other databases (see the complete evidence report for additional details) (12).

Inclusion and Exclusion Criteria: We focused on trials that ascertained clinical end points. Biomarker data were considered if data were presented in a way that permitted ascertainment of incident cases of chronic disease. Because users of multivitamin supplements were more likely than nonusers to be women, to be older, to have higher levels of education, to have a healthier lifestyle (more physical activity, greater fruit and vegetable intake, and a lower likelihood of smoking), and to more frequently use nonsteroidal anti-inflammatory drugs (1, 16), residual confounding would limit the internal validity of observational studies. Hence, for assessment of efficacy, we focused on data from randomized, controlled trials as the strongest source of evidence. However, for assessment of safety, we included data from randomized, controlled trials and observational studies in adults and children to minimize the risk for missing any potential safety concerns.
An article was excluded if it was not written in English; presented no data in humans; included only pregnant women, infants, persons 18 years of age or younger (except if a study of persons ≤18 years of age presented data on the safety of multivitamin and mineral supplements), patients with chronic disease, patients receiving treatment for chronic disease, or persons living in long-term care facilities; studied only nutritional deficiency; did not address the use of supplements; did not address the use of supplements separately from dietary intake; did not cover any pertinent diseases; or was an editorial, commentary, or letter. Each article underwent title review, abstract review, and assessment of inclusion or exclusion by paired reviewers. Differences in opinion were resolved through consensus adjudication. Article review, organization, and tracking were performed by using Web-based SRS, version 3.0 (TrialStat! Corp., Ottawa, Ontario, Canada).

Assessment of Study Quality: Each eligible article was reviewed by paired reviewers who independently rated its quality according to 5 domains: the description of how study participants were representative of the source population (4 items), bias and confounding (12 items), descriptions of study supplements and supplementation (1 item), adherence to treatment and follow-up (7 items), and statistical analysis (6 items). Reviewers assigned a score of 0 (criterion not met), 1 (criterion partially met), or 2 (criterion fully met) to each item. The score for each quality domain was the proportion of the maximum score available in each domain. The overall quality score of a study was the average of the 5 scores for the 5 domains. The quality of each study in each domain was classified as good (score ≥80%), fair (score of 50% to 79%), or poor (score <50%). For data on adverse effects, causality was evaluated with respect to temporal relationship, lack of alternative causes, dose-response relationship, evidence of increased circulating levels of the nutrient under investigation, disappearance of adverse effects after cessation of supplement use, and response to rechallenge.

Data Extraction: Paired reviewers abstracted data on study design, participant characteristics, study supplements, and results. Data abstraction forms were completed by a primary reviewer and were verified for completeness and accuracy by a second reviewer.

Evidence Grading: We graded the quantity, quality, and consistency of the evidence on efficacy by adapting an evidence grading scheme recommended by the Grading of Recommendations Assessment, Development and Evaluation Working Group (17). The strength of evidence was classified into 1 of 4 categories: high (further research is very unlikely to change our confidence in the estimates of effects), moderate (further research is likely to greatly affect our confidence in the estimates of effects and may change the estimates), low (further research is very likely to greatly affect confidence in the estimates of effects and is likely to change the estimates), or very low (any estimate of effect is very uncertain).

Role of the Funding Source: This article is based on research conducted at the Johns Hopkins Evidence-based Practice Center under contract to the Agency for Healthcare Research and Quality (contract no. 290-02-0018), Rockville, Maryland, in response to a task order requested by the National Institutes of Health Office of M
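
The study-quality scoring described in the Methods above is simple arithmetic, so a short sketch may help make it concrete. The domain names and item counts below come from the text; the example item scores are invented for illustration.

```python
# Sketch of the quality-scoring arithmetic described above: each item is scored
# 0 (not met), 1 (partially met), or 2 (fully met); a domain score is the
# proportion of the maximum score available in that domain; the overall score is
# the average of the five domain scores; each domain is classified as good
# (>=80%), fair (50-79%), or poor (<50%). The example item scores are invented.

def domain_score(item_scores):
    """Proportion of the maximum available score; each item is 0, 1, or 2."""
    return sum(item_scores) / (2 * len(item_scores))

def classify(score):
    if score >= 0.80:
        return "good"
    if score >= 0.50:
        return "fair"
    return "poor"

def study_quality(items_by_domain):
    scores = {d: domain_score(s) for d, s in items_by_domain.items()}
    overall = sum(scores.values()) / len(scores)
    return {d: (round(v, 2), classify(v)) for d, v in scores.items()}, round(overall, 2)

# Hypothetical item scores; list lengths match the item counts in the Methods
# (4, 12, 1, 7, and 6 items).
example = {
    "representativeness": [2, 2, 1, 1],
    "bias_and_confounding": [2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 0, 0],
    "supplement_description": [2],
    "adherence_and_followup": [2, 2, 1, 1, 1, 0, 2],
    "statistical_analysis": [2, 2, 2, 1, 1, 1],
}
per_domain, overall = study_quality(example)
print(per_domain)
print("overall quality score:", overall)
```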


Journal of Clinical Epidemiology | 2013

The GRADE approach is reproducible in assessing the quality of evidence of quantitative evidence syntheses

Reem A. Mustafa; Nancy Santesso; Jan Brozek; Elie A. Akl; Stephen D. Walter; Geoff Norman; Mahan Kulasegaram; Robin Christensen; Gordon H. Guyatt; Yngve Falck-Ytter; Stephanie Chang; Mohammad Hassan Murad; Gunn Elisabeth Vist; Toby J Lasserson; Gerald Gartlehner; Vijay K. Shukla; Xin Sun; Craig Whittington; Piet N. Post; Eddy Lang; Kylie J Thaler; Ilkka Kunnamo; Heidi Alenius; Joerg J. Meerpohl; Ana C. Alba; Immaculate Nevis; Stephen J. Gentles; Marie Chantal Ethier; Alonso Carrasco-Labra; Rasha Khatib

OBJECTIVE We evaluated the inter-rater reliability (IRR) of assessing the quality of evidence (QoE) using the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) approach. STUDY DESIGN AND SETTING After completing two training exercises, participants worked independently as individual raters to assess the QoE of 16 outcomes. After recording their initial impression using a global rating, raters graded the QoE following the GRADE approach. Subsequently, randomly paired raters submitted a consensus rating. RESULTS The IRR without using the GRADE approach for two individual raters was 0.31 (95% confidence interval [95% CI] = 0.21-0.42) among Health Research Methodology students (n = 10) and 0.27 (95% CI = 0.19-0.37) among the GRADE working group members (n = 15). The corresponding IRR of the GRADE approach in assessing the QoE was significantly higher, that is, 0.66 (95% CI = 0.56-0.75) and 0.72 (95% CI = 0.61-0.79), respectively. The IRR further increased for three (0.80 [95% CI = 0.73-0.86] and 0.74 [95% CI = 0.65-0.81]) or four raters (0.84 [95% CI = 0.78-0.89] and 0.79 [95% CI = 0.71-0.85]). The IRR did not improve when QoE was assessed through a consensus rating. CONCLUSION Our findings suggest that use of the GRADE approach by trained individuals improves reliability compared with intuitive judgments about the QoE and that two individual raters can reliably assess the QoE using the GRADE system.
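
For readers unfamiliar with inter-rater reliability, the sketch below computes Cohen's kappa for two raters assigning categorical QoE grades. The abstract does not name the reliability statistic the study actually used, so this is a generic illustration of chance-corrected agreement rather than a reproduction of the paper's analysis; the ratings are made up.

```python
# Generic illustration of inter-rater reliability for two raters using Cohen's
# kappa on categorical QoE grades (high/moderate/low/very low). This is a
# stand-in for the idea of chance-corrected agreement, not the study's method.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of outcomes where the raters gave the same grade.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal grade frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical grades from two raters for eight outcomes.
a = ["high", "moderate", "moderate", "low", "very low", "moderate", "low", "high"]
b = ["high", "moderate", "low", "low", "very low", "moderate", "moderate", "high"]
print(round(cohens_kappa(a, b), 2))  # chance-corrected agreement for these toy ratings
```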


Evidence-based Medicine | 2008

GRADE: assessing the quality of evidence for diagnostic recommendations

Holger J. Schünemann; Andrew D Oxman; Jan Brozek; Paul Glasziou; Patrick M. Bossuyt; Stephanie Chang; Paola Muti; Roman Jaeschke; Gordon H. Guyatt

Making a diagnosis is the bread and butter of clinical practice, but in today's world of many tests, the process has become complex. Guidelines for making an evidence-based diagnosis abound, but those making recommendations about diagnostic tests or test strategies must realise that clinicians require support to make diagnostic decisions that they can easily implement in daily practice. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group has developed a rigorous, transparent, and increasingly adopted approach for grading the quality of research evidence and strength of recommendations to guide clinical practice. This Notebook summarises GRADE's process for developing recommendations for tests.1 Clinicians are trained to use tests for screening and diagnosis, identifying physiological derangements, establishing a prognosis, and monitoring illness and treatment response by assessing signs and symptoms, imaging, biochemistry, pathology, and psychological testing techniques.2 Sensitivity, specificity, positive predictive value, likelihood ratios, and diagnostic odds ratios are among the challenging terms that diagnostic studies typically deliver to clinicians, and all have to do with diagnostic accuracy. Not only do clinicians have difficulty remembering the definitions and calculations for these terms, but these concepts are also often complex to apply to individual patients. Many clinicians order a test despite uncertainty about how to interpret the result, and they also contribute to testing errors …
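
All of the accuracy terms listed above derive from the same 2x2 cross-tabulation of test result against true disease status, so a small worked example may help. The counts below are hypothetical, and the formulas are the standard textbook definitions rather than anything specific to the GRADE notebook.

```python
# The accuracy measures named above (sensitivity, specificity, predictive values,
# likelihood ratios, diagnostic odds ratio) all come from one 2x2 table of test
# result vs. disease status. The counts are hypothetical.

def diagnostic_accuracy(tp, fp, fn, tn):
    sens = tp / (tp + fn)                    # P(test positive | disease)
    spec = tn / (tn + fp)                    # P(test negative | no disease)
    ppv = tp / (tp + fp)                     # P(disease | test positive)
    npv = tn / (tn + fn)                     # P(no disease | test negative)
    lr_pos = sens / (1 - spec)               # how much a positive result raises the odds
    lr_neg = (1 - sens) / spec               # how much a negative result lowers the odds
    dor = lr_pos / lr_neg                    # diagnostic odds ratio
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                lr_positive=lr_pos, lr_negative=lr_neg, diagnostic_odds_ratio=dor)

# Hypothetical counts: 90 true positives, 30 false positives,
# 10 false negatives, 270 true negatives.
for name, value in diagnostic_accuracy(tp=90, fp=30, fn=10, tn=270).items():
    print(f"{name}: {value:.2f}")
```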


BMJ | 2016

When and how to update systematic reviews: consensus and checklist.

Paul Garner; Sally Hopewell; Jackie Chandler; Harriet MacLehose; H. J. Schünemann; Elie A. Akl; Joseph Beyene; Stephanie Chang; Rachel Churchill; K Dearness; G Guyatt; C Lefebvre; B Liles; Rachel Marshall; L Martínez García; Chris Mavergames; Mona Nasser; Amir Qaseem; Margaret Sampson; Karla Soares-Weiser; Yemisi Takwoingi; Lehana Thabane; Marialena Trivella; Peter Tugwell; Emma J Welsh; E Wilson

Updating of systematic reviews is generally more efficient than starting all over again when new evidence emerges, but to date there has been no clear guidance on how to do this. This guidance helps authors of systematic reviews, commissioners, and editors decide when to update a systematic review, and then how to go about updating the review.


Journal of Clinical Epidemiology | 2010

AHRQ Series Paper 1: Comparing medical interventions: AHRQ and the Effective Health-Care Program

David Atkins; Stephanie Chang; Beth A. Collins Sharp

In 2005, the Agency for Healthcare Research and Quality established the Effective Health Care (EHC) Program. The EHC Program aims to provide understandable and actionable information for patients, clinicians, and policy makers. The Evidence-based Practice Centers (EPCs) are one of the cornerstones of the EHC Program. Three key elements guide the EHC Program and thus the conduct of Comparative Effectiveness Reviews by the EPC Program. Comparative Effectiveness Reviews introduce several specific challenges in addition to the familiar issues raised in a systematic review or meta-analysis of a single intervention. The articles in this series together form the current Methods Guide for Comparative Effectiveness Reviews of the EHC Program.


BMC Medical Research Methodology | 2013

Consensus-based recommendations for investigating clinical heterogeneity in systematic reviews

Joel Gagnier; Hal Morgenstern; Doug Altman; Jesse A. Berlin; Stephanie Chang; Peter McCulloch; Xin Sun; David Moher

Background: Critics of systematic reviews have argued that these studies often fail to inform clinical decision making because their results are far too general or the data are too sparse, such that findings cannot be applied to individual patients or to other decision making. While there is some consensus on methods for investigating statistical and methodological heterogeneity, little attention has been paid to clinical aspects of heterogeneity. Clinical heterogeneity, also called true effect heterogeneity, can be defined as variability among studies in the participants, the types or timing of outcome measurements, and the intervention characteristics. The objective of this project was to develop recommendations for investigating clinical heterogeneity in systematic reviews. Methods: We used a modified Delphi technique with three phases: (1) pre-meeting item generation; (2) face-to-face consensus meeting in the form of a modified Delphi process; and (3) post-meeting feedback. We identified and invited potential participants with expertise in systematic review methodology, systematic review reporting, or statistical aspects of meta-analyses, or those who had published papers on clinical heterogeneity. Results: Between April and June of 2011, we conducted phone calls with participants. In June 2011 we held the face-to-face focus group meeting in Ann Arbor, Michigan. First, we agreed upon a definition of clinical heterogeneity: variations in the treatment effect that are due to differences in clinically related characteristics. Next, we discussed and generated recommendations in the following 12 categories related to investigating clinical heterogeneity: the systematic review team, planning investigations, rationale for choice of variables, types of clinical variables, the role of statistical heterogeneity, the use of plotting and visual aids, dealing with outlier studies, the number of investigations or variables, the role of the best evidence synthesis, types of statistical methods, the interpretation of findings, and reporting. Conclusions: Clinical heterogeneity is common in systematic reviews. Our recommendations can help guide systematic reviewers in conducting valid and reliable investigations of clinical heterogeneity. Findings of these investigations may allow for increased applicability of the findings of systematic reviews to the management of individual patients.


European Journal of Clinical Nutrition | 2010

Supplementing iron and zinc: double blind, randomized evaluation of separate or combined delivery

Stephanie Chang; S El Arifeen; Sanwarul Bari; M. A. Wahed; Kazi Mizanur Rahman; Mahfuzar Rahman; Abdullah Al Mahmud; Nazma Begum; K. Zaman; Abdullah H. Baqui; Robert E. Black

Background/Objectives: Many children have diets deficient in both iron and zinc, but there has been some evidence of negative interactions when they are supplemented together. The optimal delivery approach would maximize the clinical benefits of both nutrients. We studied the effectiveness of different iron and zinc supplement delivery approaches to improve diarrhea and anemia in a rural Bangladesh population. Study Design: Randomized, double blind, placebo-controlled factorial community trial. Results: Iron supplementation alone increased diarrhea, but adding zinc, whether separately or combined, attenuated these harmful effects. Combined zinc and iron was as effective as iron alone for iron outcomes. All supplements were vomited <1% of the time, but combined iron and zinc was vomited significantly more often than any of the other supplements. Children receiving zinc and iron (together or separately) had fewer hospitalizations. Separating delivery of iron and zinc may have some additional benefit in stunted children. Conclusions: Separate and combined administration of iron and zinc are equally effective for reducing diarrhea and hospitalizations and for improving iron outcomes. There may be some benefit to separate administration in stunted children.


Systematic Reviews | 2014

Observational evidence and strength of evidence domains: case examples

Maya O’Neil; Nancy D Berkman; Lisa Hartling; Stephanie Chang; Johanna Anderson; Makalapua Motu’apuaka; Jeanne-Marie Guise; Marian McDonagh

Background: Systematic reviews of healthcare interventions most often focus on randomized controlled trials (RCTs). However, certain circumstances warrant consideration of observational evidence, and such studies are increasingly being included as evidence in systematic reviews. Methods: To illustrate the use of observational evidence, we present case examples of systematic reviews in which observational evidence was considered, as well as case examples of individual observational studies, and show how they demonstrate various strength of evidence domains in accordance with current Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) methods guidance. Results: In the presented examples, observational evidence is used when RCTs are infeasible or raise ethical concerns, lack generalizability, or provide insufficient data. Individual study case examples highlight how observational evidence may fulfill required strength of evidence domains, such as study limitations (reduced risk of selection, detection, performance, and attrition bias); directness; consistency; precision; and reporting bias (publication, selective outcome reporting, and selective analysis reporting), as well as the additional domains of dose-response association, plausible confounding that would decrease the observed effect, and strength of association (magnitude of effect). Conclusions: The cases highlighted in this paper demonstrate how observational studies may provide moderate to (rarely) high strength evidence in systematic reviews.


Archives of Disease in Childhood | 2011

Validation of a clinical algorithm to identify neonates with severe illness during routine household visits in rural Bangladesh

Gary L. Darmstadt; Abdullah H. Baqui; Yoonjoung Choi; Sanwarul Bari; Syed Moshfiqur Rahman; Ishtiaq Mannan; A. S. M. Nawshad Uddin Ahmed; Samir K. Saha; Habibur Rahman Seraji; Radwanur Rahman; Peter J. Winch; Stephanie Chang; Nazma Begum; Robert E. Black; Mathuram Santosham; Shams El Arifeen

Background To validate a clinical algorithm for community health workers (CHWs) during routine household surveillance for neonatal illness in rural Bangladesh. Methods Surveillance was conducted in the intervention arm of a trial of newborn interventions. CHWs assessed 7587 neonates on postnatal days 0, 2, 5 and 8 and identified neonates with very severe disease (VSD) using an 11-sign algorithm. A nested prospective study was conducted to validate the algorithm (n=395). Physicians evaluated neonates to determine whether newborns with VSD needed referral. The authors calculated algorithm sensitivity and specificity in identifying (1) neonates needing referral and (2) mortality during the first 10 days of life. Results The 11-sign algorithm had sensitivity of 50.0% (95% CI 24.7% to 75.3%) and specificity of 98.4% (96.6% to 99.4%) for identifying neonates needing referral-level care. A simplified 6-sign algorithm had sensitivity of 81.3% (54.4% to 96.0%) and specificity of 96.0% (93.6% to 97.8%) for identifying referral need and sensitivity of 58.0% (45.5% to 69.8%) and specificity of 93.2% (92.5% to 93.7%) for screening mortality. Compared to our 6-sign algorithm, the Young Infant Study 7-sign (YIS7) algorithm with minor modifications had similar sensitivity and specificity. Conclusion Community-based surveillance for neonatal illness by CHWs using a simple 6-sign clinical algorithm is a promising strategy to effectively identify neonates at risk of mortality and needing referral to hospital. The YIS7 algorithm was also validated with high sensitivity and specificity at community level, and is recommended for routine household surveillance for newborn illness. ClinicalTrials.gov no. NCT00198627.
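
Below is a short sketch of how point estimates and 95% confidence intervals like those reported above are obtained from validation counts. The counts used here are invented, and the Wilson score interval is only one common choice; the study's own interval method is not stated in the abstract.

```python
# Sketch: sensitivity/specificity with 95% CIs from validation counts. The
# counts are hypothetical and the Wilson score interval is one common choice,
# not necessarily the method used in the study.
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Point estimate and Wilson score 95% confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, centre - half, centre + half

# Hypothetical validation counts: of 50 neonates judged by physicians to need
# referral, the algorithm flagged 40; of 345 who did not, it flagged 15.
sens, s_lo, s_hi = wilson_ci(40, 50)
spec, c_lo, c_hi = wilson_ci(345 - 15, 345)
print(f"sensitivity {sens:.1%} (95% CI {s_lo:.1%} to {s_hi:.1%})")
print(f"specificity {spec:.1%} (95% CI {c_lo:.1%} to {c_hi:.1%})")
```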

Collaboration


Dive into Stephanie Chang's collaborations.

Top Co-Authors

Eric B Bass

Johns Hopkins University

David B. Matchar

National University of Singapore

Mohammed T Ansari

Ottawa Hospital Research Institute

Mary Butler

University of Minnesota

Timothy S Carey

United States Department of Health and Human Services
