Publications


Featured research published by Susan L. Norris.


Journal of Clinical Epidemiology | 2011

GRADE guidelines: 1. Introduction—GRADE evidence profiles and summary of findings tables

Gordon H. Guyatt; Andrew D. Oxman; Elie A. Akl; Regina Kunz; Gunn Elisabeth Vist; Jan Brozek; Susan L. Norris; Yngve Falck-Ytter; Paul Glasziou; Hans deBeer; Roman Jaeschke; David Rind; Joerg J. Meerpohl; Philipp Dahm; Holger J. Schünemann

This article is the first of a series providing guidance for use of the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system of rating quality of evidence and grading strength of recommendations in systematic reviews, health technology assessments (HTAs), and clinical practice guidelines addressing alternative management options. The GRADE process begins with asking an explicit question, including specification of all important outcomes. After the evidence is collected and summarized, GRADE provides explicit criteria for rating the quality of evidence that include study design, risk of bias, imprecision, inconsistency, indirectness, and magnitude of effect. Recommendations are characterized as strong or weak (alternative terms conditional or discretionary) according to the quality of the supporting evidence and the balance between desirable and undesirable consequences of the alternative management options. GRADE suggests summarizing evidence in succinct, transparent, and informative summary of findings tables that show the quality of evidence and the magnitude of relative and absolute effects for each important outcome and/or as evidence profiles that provide, in addition, detailed information about the reason for the quality of evidence rating. Subsequent articles in this series will address GRADE's approach to formulating questions, assessing quality of evidence, and developing recommendations.
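
For readers who find a concrete shape helpful, the evidence-profile columns described above can be pictured as one small record per outcome. The Python sketch below is a hypothetical rendering, not an official GRADE schema; all field names are our own.

from dataclasses import dataclass

# Hypothetical sketch of one outcome row in a GRADE evidence profile.
@dataclass
class EvidenceProfileRow:
    outcome: str               # e.g., "all-cause mortality"
    n_studies: int             # number of contributing studies
    study_design: str          # "randomized trials" or "observational studies"
    risk_of_bias: str          # "not serious" / "serious" / "very serious"
    inconsistency: str
    indirectness: str
    imprecision: str
    other_considerations: str  # e.g., publication bias, large effect
    quality: str               # "high" / "moderate" / "low" / "very low"
    relative_effect: str       # e.g., "RR 0.80 (95% CI 0.68-0.94)"
    absolute_effect: str       # e.g., "20 fewer per 1000"

row = EvidenceProfileRow(
    outcome="all-cause mortality", n_studies=5,
    study_design="randomized trials", risk_of_bias="not serious",
    inconsistency="not serious", indirectness="not serious",
    imprecision="serious", other_considerations="none",
    quality="moderate",
    relative_effect="RR 0.80 (95% CI 0.68-0.94)",
    absolute_effect="20 fewer per 1000 (from 32 fewer to 6 fewer)",
)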


Journal of Clinical Epidemiology | 2011

GRADE guidelines: 9. Rating up the quality of evidence.

Gordon H. Guyatt; Andrew D. Oxman; Shahnaz Sultan; Paul Glasziou; Elie A. Akl; Pablo Alonso-Coello; David Atkins; Regina Kunz; Jan Brozek; Victor M. Montori; Roman Jaeschke; David Rind; Philipp Dahm; Joerg J. Meerpohl; Gunn Elisabeth Vist; Elise Berliner; Susan L. Norris; Yngve Falck-Ytter; M. Hassan Murad; Holger J. Schünemann

The most common reason for rating up the quality of evidence is a large effect. GRADE suggests considering rating up quality of evidence one level when methodologically rigorous observational studies show at least a two-fold reduction or increase in risk, and rating up two levels for at least a five-fold reduction or increase in risk. Systematic review authors and guideline developers may also consider rating up quality of evidence when a dose-response gradient is present, and when all plausible confounders or biases would decrease an apparent treatment effect, or would create a spurious effect when results suggest no effect. Other considerations include the rapidity of the response, the underlying trajectory of the condition, and indirect evidence.
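
The two-fold and five-fold thresholds above amount to a simple decision rule. The Python sketch below encodes only that magnitude-of-effect rule (the function name and structure are hypothetical); real GRADE judgments also weigh dose-response gradients, plausible confounding, and the other considerations listed.

def rating_up_levels(relative_risk: float) -> int:
    """Hypothetical helper: how many levels one might consider rating
    up evidence from rigorous observational studies, based solely on
    the magnitude-of-effect rule described above."""
    assert relative_risk > 0, "relative risk must be positive"
    # Treat protective effects (RR < 1) symmetrically via the inverse.
    magnitude = max(relative_risk, 1.0 / relative_risk)
    if magnitude >= 5.0:   # at least a five-fold reduction or increase in risk
        return 2
    if magnitude >= 2.0:   # at least a two-fold reduction or increase in risk
        return 1
    return 0

print(rating_up_levels(0.4))   # 2.5-fold reduction -> consider rating up 1 level
print(rating_up_levels(0.15))  # ~6.7-fold reduction -> consider rating up 2 levels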


Journal of Clinical Epidemiology | 2011

GRADE guidelines: 4. Rating the quality of evidence—study limitations (risk of bias)

Gordon H. Guyatt; Andrew D. Oxman; Gunn Elisabeth Vist; Regina Kunz; Jan Brozek; Pablo Alonso-Coello; Victor M. Montori; Elie A. Akl; Ben Djulbegovic; Yngve Falck-Ytter; Susan L. Norris; John W. Williams; David Atkins; Joerg J. Meerpohl; Holger J. Schünemann

In the GRADE approach, randomized trials start as high-quality evidence and observational studies as low-quality evidence, but both can be rated down if most of the relevant evidence comes from studies that suffer from a high risk of bias. Well-established limitations of randomized trials include failure to conceal allocation, failure to blind, loss to follow-up, and failure to appropriately consider the intention-to-treat principle. More recently recognized limitations include stopping early for apparent benefit and selective reporting of outcomes according to the results. Key limitations of observational studies include use of inappropriate controls and failure to adequately adjust for prognostic imbalance. Risk of bias may vary across outcomes (e.g., loss to follow-up may be far less for all-cause mortality than for quality of life), a consideration that many systematic reviews ignore. In deciding whether to rate down for risk of bias, whether for randomized trials or observational studies, authors should not take an approach that averages across studies. Rather, for any individual outcome, when some studies carry a high risk of bias and others a low risk, they should consider including only the studies with the lower risk of bias.
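
The closing recommendation, restricting each outcome's analysis to the lower risk-of-bias studies rather than averaging across all of them, can be pictured as a simple filter. A minimal hypothetical sketch in Python:

# Hypothetical per-outcome study records.
studies = [
    {"id": "A", "risk_of_bias": "low",  "log_rr": -0.30, "se": 0.12},
    {"id": "B", "risk_of_bias": "high", "log_rr": -0.60, "se": 0.20},
    {"id": "C", "risk_of_bias": "low",  "log_rr": -0.25, "se": 0.15},
]

# Per the guidance above: rather than averaging risk of bias across
# studies, restrict this outcome's analysis to the lower-risk set.
analysis_set = [s for s in studies if s["risk_of_bias"] == "low"]
print([s["id"] for s in analysis_set])  # -> ['A', 'C']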


Journal of Clinical Epidemiology | 2011

GRADE guidelines: 7. Rating the quality of evidence—inconsistency

Gordon H. Guyatt; Andrew D. Oxman; Regina Kunz; James Woodcock; Jan Brozek; Mark Helfand; Pablo Alonso-Coello; Paul Glasziou; Roman Jaeschke; Elie A. Akl; Susan L. Norris; Gunn Elisabeth Vist; Philipp Dahm; Vijay K. Shukla; Julian P. T. Higgins; Yngve Falck-Ytter; Holger J. Schünemann

This article deals with inconsistency of relative (rather than absolute) treatment effects in binary/dichotomous outcomes. A body of evidence is not rated up in quality if studies yield consistent results, but may be rated down in quality if inconsistent. Criteria for evaluating consistency include similarity of point estimates, extent of overlap of confidence intervals, and statistical criteria including tests of heterogeneity and I². To explore heterogeneity, systematic review authors should generate and test a small number of a priori hypotheses related to patients, interventions, outcomes, and methodology. When inconsistency is large and unexplained, rating down quality for inconsistency is appropriate, particularly if some studies suggest substantial benefit and others no effect or harm (rather than only large vs. small effects). Apparent subgroup effects may be spurious. Credibility is increased if subgroup effects are based on a small number of a priori hypotheses with a specified direction; if subgroup comparisons come from within rather than between studies; if tests of interaction generate low P-values; and if the effects have a biological rationale.
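
The I² statistic mentioned above has a standard closed form: I² = max(0, (Q - df)/Q) x 100%, where Q is Cochran's heterogeneity statistic and df is the number of studies minus one. A small Python sketch with made-up inputs:

import numpy as np

def i_squared(effects, variances):
    """Compute Cochran's Q and the I^2 statistic for a set of study
    effect estimates (e.g., log relative risks) and their variances.
    Standard fixed-effect formulas; an illustrative sketch."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(weights * effects) / np.sum(weights)
    q = np.sum(weights * (effects - pooled) ** 2)  # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Three hypothetical log relative risks with their variances:
q, i2 = i_squared([-0.22, -0.36, 0.05], [0.02, 0.04, 0.03])
print(f"Q = {q:.2f}, I^2 = {i2:.0f}%")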


Journal of Clinical Epidemiology | 2011

GRADE guidelines: 8. Rating the quality of evidence—indirectness

Gordon H. Guyatt; Andrew D. Oxman; Regina Kunz; James Woodcock; Jan Brozek; Mark Helfand; Pablo Alonso-Coello; Yngve Falck-Ytter; Roman Jaeschke; Gunn Elisabeth Vist; Elie A. Akl; Piet N. Post; Susan L. Norris; Joerg J. Meerpohl; Vijay K. Shukla; Mona Nasser; Holger J. Schünemann

Direct evidence comes from research that directly compares the interventions in which we are interested, applied to the populations in which we are interested, and measures outcomes important to patients. Evidence can be indirect in one of four ways. First, patients may differ from those of interest (the term applicability is often used for this form of indirectness). Second, the intervention tested may differ from the intervention of interest. Decisions regarding indirectness of patients and interventions depend on an understanding of whether biological or social factors are sufficiently different that one might expect substantial differences in the magnitude of effect. Third, outcomes may differ from those of primary interest: for instance, surrogate outcomes that are not themselves important but are measured on the presumption that changes in the surrogate reflect changes in an outcome important to patients. A fourth type of indirectness, conceptually different from the first three, occurs when clinicians must choose between interventions that have not been tested in head-to-head comparisons. Making comparisons between treatments under these circumstances requires specific statistical methods, and the evidence will be rated down in quality one or two levels depending on the extent of differences between the patient populations, co-interventions, measurements of the outcome, and the methods of the trials of the candidate interventions.
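
For the fourth type, the simplest of the "specific statistical methods" alluded to is the adjusted indirect comparison (the Bucher method): with a common comparator C, the log effect of A versus B is estimated as the difference of the two direct log effects, and their variances add. A hedged Python sketch with hypothetical inputs:

import math

def bucher_indirect(log_or_ac, se_ac, log_or_bc, se_bc):
    """Adjusted indirect comparison of A vs. B through a common
    comparator C (Bucher method). Inputs are log odds ratios (or log
    risk ratios) and their standard errors from the two sets of
    head-to-head trials. Illustrative sketch."""
    log_or_ab = log_or_ac - log_or_bc         # point estimate on the log scale
    se_ab = math.sqrt(se_ac**2 + se_bc**2)    # variances add
    lo, hi = log_or_ab - 1.96 * se_ab, log_or_ab + 1.96 * se_ab
    return math.exp(log_or_ab), (math.exp(lo), math.exp(hi))

# Hypothetical inputs: A vs. C gives OR 0.70 (log -0.357, SE 0.10);
# B vs. C gives OR 0.85 (log -0.163, SE 0.12).
or_ab, ci = bucher_indirect(-0.357, 0.10, -0.163, 0.12)
print(or_ab, ci)  # A vs. B: roughly OR 0.82, with a wide CI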


Journal of Clinical Epidemiology | 2011

GRADE guidelines: 5. Rating the quality of evidence—publication bias

Gordon H. Guyatt; Andrew D. Oxman; Victor M. Montori; Gunn Elisabeth Vist; Regina Kunz; Jan Brozek; Pablo Alonso-Coello; Ben Djulbegovic; David Atkins; Yngve Falck-Ytter; John W. Williams; Joerg J. Meerpohl; Susan L. Norris; Elie A. Akl; Holger J. Schünemann

In the GRADE approach, randomized trials start as high-quality evidence and observational studies as low-quality evidence, but both can be rated down if a body of evidence is associated with a high risk of publication bias. Even when individual studies included in best-evidence summaries have a low risk of bias, publication bias can result in substantial overestimates of effect. Authors should suspect publication bias when available evidence comes from a number of small studies, most of which have been commercially funded. A number of approaches based on examination of the pattern of data are available to help assess publication bias. The most popular of these is the funnel plot; all, however, have substantial limitations. Publication bias is likely frequent, and caution in the face of early results, particularly with small sample size and number of events, is warranted.
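
A funnel plot is each study's effect estimate plotted against a measure of its precision; an asymmetric plot (small, unfavorable studies missing from one corner) raises suspicion of publication bias. A minimal matplotlib sketch using invented data:

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical study results: log risk ratios and their standard errors.
log_rr = np.array([-0.45, -0.30, -0.38, -0.25, -0.55, -0.20])
se = np.array([0.30, 0.18, 0.25, 0.10, 0.35, 0.08])

# Fixed-effect pooled estimate serves as the symmetry reference line.
pooled = np.sum(log_rr / se**2) / np.sum(1.0 / se**2)

fig, ax = plt.subplots()
ax.scatter(log_rr, se)
ax.axvline(pooled, linestyle="--")
ax.invert_yaxis()  # most precise (largest) studies at the top
ax.set_xlabel("log risk ratio")
ax.set_ylabel("standard error")
ax.set_title("Funnel plot (invented data)")
plt.show()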


American Journal of Preventive Medicine | 2002

The effectiveness of disease and case management for people with diabetes: A systematic review

Susan L. Norris; Phyllis Nichols; Carl J. Caspersen; Russell E. Glasgow; Michael M. Engelgau; Leonard Jack; George Isham; Susan Snyder; Vilma G. Carande-Kulis; Sanford Garfield; Peter A. Briss; David K. McCulloch

This report presents the results of a systematic review of the effectiveness and economic efficiency of disease management and case management for people with diabetes and forms the basis for recommendations by the Task Force on Community Preventive Services on the use of these two interventions. Evidence supports the effectiveness of disease management on glycemic control; on screening for diabetic retinopathy, foot lesions and peripheral neuropathy, and proteinuria; and on the monitoring of lipid concentrations. This evidence is applicable to adults with diabetes in managed care organizations and community clinics in the United States and Europe. Case management is effective in improving both glycemic control and provider monitoring of glycemic control. This evidence is applicable primarily in the U.S. managed care setting for adults with type 2 diabetes. Case management is effective both when delivered in conjunction with disease management and when delivered with one or more additional educational, reminder, or support interventions.


Journal of Clinical Epidemiology | 2013

GRADE guidelines: 15. Going from evidence to recommendation—determinants of a recommendation's direction and strength.

Jeffrey C. Andrews; Holger J. Schünemann; Andrew D. Oxman; Kevin Pottie; Joerg J. Meerpohl; Pablo Alonso-Coello; David Rind; Victor M. Montori; Juan P. Brito; Susan L. Norris; Mahmoud Elbarbary; Piet N. Post; Mona Nasser; Vijay K. Shukla; Roman Jaeschke; Jan Brozek; Ben Djulbegovic; Gordon H. Guyatt

In the GRADE approach, the strength of a recommendation reflects the extent to which we can be confident that the composite desirable effects of a management strategy outweigh the composite undesirable effects. This article addresses GRADE's approach to determining the direction and strength of a recommendation. GRADE describes the balance of desirable and undesirable outcomes of interest among alternative management strategies as depending on four domains: estimates of effect for desirable and undesirable outcomes of interest, confidence in the estimates of effect, estimates of values and preferences, and resource use. Ultimately, guideline panels must use judgment in integrating these factors to make a strong or weak recommendation for or against an intervention.


BMJ | 2008

Use of GRADE grid to reach decisions on clinical practice guidelines when consensus is elusive

Roman Jaeschke; Gordon H. Guyatt; Phil Dellinger; Holger J. Schünemann; Mitchell M. Levy; Regina Kunz; Susan L. Norris; Julian Bion

The large and diverse nature of guideline committees can make consensus difficult. Roman Jaeschke and colleagues describe a simple technique for clarifying opinion.


Journal of Clinical Epidemiology | 2013

GRADE guidelines: 12. Preparing summary of findings tables—binary outcomes.

Gordon H. Guyatt; Andrew D. Oxman; Nancy Santesso; Mark Helfand; Gunn Elisabeth Vist; Regina Kunz; Jan Brozek; Susan L. Norris; Joerg J. Meerpohl; Ben Djulbegovic; Pablo Alonso-Coello; Piet N. Post; Jason W. Busse; Paul Glasziou; Robin Christensen; Holger J. Schünemann

Summary of Findings (SoF) tables present, for each of the seven (or fewer) most important outcomes, the following: the number of studies and number of participants; the confidence in effect estimates (quality of evidence); and the best estimates of relative and absolute effects. Potentially challenging choices in preparing SoF tables include using direct evidence (which may have very few events) or indirect evidence (from a surrogate) as the best evidence for a treatment effect. If a surrogate is chosen, it must be labeled as substituting for the corresponding patient-important outcome. Another such choice is presenting evidence from low-quality randomized trials or high-quality observational studies. When in doubt, a reasonable approach is to present both sets of evidence; if the two bodies of evidence have similar quality but discrepant results, one would rate down further for inconsistency. For binary outcomes, relative risks (RRs) are the preferred measure of relative effect and, in most instances, are applied to the baseline or control group risks to generate absolute risks. Ideally, the baseline risks come from observational studies including representative patients and identifying easily measured prognostic factors that define groups at differing risk. In the absence of such studies, relevant randomized trials provide estimates of baseline risk. When confidence intervals (CIs) around the relative effect include no difference, one may simply state in the absolute risk column that results fail to show a difference, omit the point estimate and report only the CIs, or add a comment emphasizing the uncertainty associated with the point estimate.
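
The arithmetic behind the absolute-effect column is simple: apply the RR (and each CI bound) to the assumed baseline risk and express the difference per 1000 patients. A short illustrative Python sketch:

def absolute_effect_per_1000(baseline_risk, rr, rr_low, rr_high):
    """Convert a relative risk and its 95% CI into an absolute risk
    difference per 1000 patients, given an assumed baseline (control
    group) risk. Illustrative sketch."""
    def diff(ratio):
        return round((baseline_risk * ratio - baseline_risk) * 1000)
    return diff(rr), (diff(rr_low), diff(rr_high))

# Baseline risk 10%, RR 0.80 (95% CI 0.68-0.94):
point, ci = absolute_effect_per_1000(0.10, 0.80, 0.68, 0.94)
print(point, ci)  # -20 per 1000 (20 fewer), CI from 32 fewer to 6 fewer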
