Yngve Falck-Ytter
Case Western Reserve University
Publications
Featured research published by Yngve Falck-Ytter.
BMJ | 2008
Gordon H. Guyatt; Andrew D Oxman; Gunn Elisabeth Vist; Regina Kunz; Yngve Falck-Ytter; Pablo Alonso-Coello; Holger J. Schünemann
Guidelines are inconsistent in how they rate the quality of evidence and the strength of recommendations. This article explores the advantages of the GRADE system, which is increasingly being adopted by organisations worldwide.
BMJ | 2004
David Atkins; Dana Best; Peter A. Briss; Martin Eccles; Yngve Falck-Ytter; Signe Flottorp; Gordon H. Guyatt; Robin Harbour; Margaret C Haugh; David Henry; Suzanne Hill; Roman Jaeschke; Gillian Leng; Alessandro Liberati; Nicola Magrini; James Mason; Philippa Middleton; Jacek Mrukowicz; Dianne O'Connell; Andrew D Oxman; Bob Phillips; Holger J. Schünemann; Tessa Tan-Torres Edejer; Helena Varonen; Gunn E Vist; John W Williams; Stephanie Zaza
Users of clinical practice guidelines and other recommendations need to know how much confidence they can place in the recommendations. Systematic and explicit methods of making judgments can reduce errors and improve communication. We have developed a system for grading the quality of evidence and the strength of recommendations that can be applied across a wide range of interventions and contexts. In this article we present a summary of our approach from the perspective of a guideline user. Judgments about the strength of a recommendation require consideration of the balance between benefits and harms, the quality of the evidence, translation of the evidence into specific circumstances, and the certainty of the baseline risk. It is also important to consider costs (resource utilisation) before making a recommendation. Inconsistencies among systems for grading the quality of evidence and the strength of recommendations reduce their potential to facilitate critical appraisal and improve communication of these judgments. Our system for guiding these complex judgments balances the need for simplicity with the need for full and transparent consideration of all important issues. Clinical guidelines are only as good as the evidence and judgments they are based on. The GRADE approach aims to make it easier for users to assess the judgments behind recommendations.
BMJ | 2008
Gordon H. Guyatt; Andrew D Oxman; Regina Kunz; Gunn E Vist; Yngve Falck-Ytter; Holger J. Schünemann
Guideline developers use a bewildering variety of systems to rate the quality of the evidence underlying their recommendations. Some are facile, some confused, and others sophisticated but complex.
BMJ | 2008
Gordon H. Guyatt; Andrew D Oxman; Regina Kunz; Yngve Falck-Ytter; Gunn E Vist; Alessandro Liberati; Holger J. Schünemann
The GRADE system classifies recommendations made in guidelines as either strong or weak. This article explores the meaning of these descriptions and their implications for patients, clinicians, and policy makers.
Journal of Clinical Epidemiology | 2011
Gordon H. Guyatt; Andrew D Oxman; Elie A. Akl; Regina Kunz; Gunn Elisabeth Vist; Jan Brozek; Susan L. Norris; Yngve Falck-Ytter; Paul Glasziou; Hans deBeer; Roman Jaeschke; David Rind; Joerg J. Meerpohl; Philipp Dahm; Holger J. Schünemann
This article is the first of a series providing guidance for use of the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system of rating quality of evidence and grading strength of recommendations in systematic reviews, health technology assessments (HTAs), and clinical practice guidelines addressing alternative management options. The GRADE process begins with asking an explicit question, including specification of all important outcomes. After the evidence is collected and summarized, GRADE provides explicit criteria for rating the quality of evidence that include study design, risk of bias, imprecision, inconsistency, indirectness, and magnitude of effect. Recommendations are characterized as strong or weak (alternative terms conditional or discretionary) according to the quality of the supporting evidence and the balance between desirable and undesirable consequences of the alternative management options. GRADE suggests summarizing evidence in succinct, transparent, and informative summary of findings tables that show the quality of evidence and the magnitude of relative and absolute effects for each important outcome and/or as evidence profiles that provide, in addition, detailed information about the reason for the quality of evidence rating. Subsequent articles in this series will address GRADE's approach to formulating questions, assessing quality of evidence, and developing recommendations.
Journal of Clinical Epidemiology | 2011
Gordon H. Guyatt; Andrew D Oxman; Shahnaz Sultan; Paul Glasziou; Elie A. Akl; Pablo Alonso-Coello; David Atkins; Regina Kunz; Jan Brozek; Victor M. Montori; Roman Jaeschke; David Rind; Philipp Dahm; Joerg J. Meerpohl; Gunn Elisabeth Vist; Elise Berliner; Susan L. Norris; Yngve Falck-Ytter; M. Hassan Murad; Holger J. Schünemann
The most common reason for rating up the quality of evidence is a large effect. GRADE suggests considering rating up quality of evidence one level when methodologically rigorous observational studies show at least a two-fold reduction or increase in risk, and rating up two levels for at least a five-fold reduction or increase in risk. Systematic review authors and guideline developers may also consider rating up quality of evidence when a dose-response gradient is present, and when all plausible confounders or biases would decrease an apparent treatment effect, or would create a spurious effect when results suggest no effect. Other considerations include the rapidity of the response, the underlying trajectory of the condition, and indirect evidence.
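The two-fold and five-fold thresholds described in this abstract amount to a simple rule, sketched below in Python. This is an illustration only; the function and parameter names are assumptions, not part of any GRADE software, and real rating-up judgments also weigh the other considerations the abstract lists (dose-response gradient, plausible confounding, and so on).

```python
def rating_up_levels(risk_ratio):
    """Suggest how many levels to rate up quality of evidence for
    magnitude of effect, per the thresholds described above:
    at least a two-fold change in risk -> up one level,
    at least a five-fold change in risk -> up two levels.
    (Hypothetical helper for illustration only.)"""
    # Express the effect as a fold change regardless of direction:
    # a risk ratio of 0.2 is a five-fold reduction in risk.
    fold_change = risk_ratio if risk_ratio >= 1 else 1 / risk_ratio
    if fold_change >= 5:
        return 2
    if fold_change >= 2:
        return 1
    return 0
```

For example, a methodologically rigorous observational study with a risk ratio of 0.2 (a five-fold reduction) would be a candidate for rating up two levels, while a risk ratio of 1.5 would not qualify for rating up at all.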
Chest | 2012
Yngve Falck-Ytter; Charles W. Francis; Norman A. Johanson; Catherine Curley; Ola E. Dahl; Sam Schulman; Thomas L. Ortel; Stephen G. Pauker; Clifford W. Colwell
BACKGROUND VTE is a serious, but decreasing complication following major orthopedic surgery. This guideline focuses on optimal prophylaxis to reduce postoperative pulmonary embolism and DVT. METHODS The methods of this guideline follow those described in Methodology for the Development of Antithrombotic Therapy and Prevention of Thrombosis Guidelines: Antithrombotic Therapy and Prevention of Thrombosis, 9th ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines in this supplement. RESULTS In patients undergoing major orthopedic surgery, we recommend the use of one of the following rather than no antithrombotic prophylaxis: low-molecular-weight heparin; fondaparinux; dabigatran, apixaban, rivaroxaban (total hip arthroplasty or total knee arthroplasty but not hip fracture surgery); low-dose unfractionated heparin; adjusted-dose vitamin K antagonist; aspirin (all Grade 1B); or an intermittent pneumatic compression device (IPCD) (Grade 1C) for a minimum of 10 to 14 days. We suggest the use of low-molecular-weight heparin in preference to the other agents we have recommended as alternatives (Grade 2C/2B), and in patients receiving pharmacologic prophylaxis, we suggest adding an IPCD during the hospital stay (Grade 2C). We suggest extending thromboprophylaxis for up to 35 days (Grade 2B). In patients at increased bleeding risk, we suggest an IPCD or no prophylaxis (Grade 2C). In patients who decline injections, we recommend using apixaban or dabigatran (all Grade 1B). We suggest against using inferior vena cava filter placement for primary prevention in patients with contraindications to both pharmacologic and mechanical thromboprophylaxis (Grade 2C). We recommend against Doppler (or duplex) ultrasonography screening before hospital discharge (Grade 1B). For patients with isolated lower-extremity injuries requiring leg immobilization, we suggest no thromboprophylaxis (Grade 2B). For patients undergoing knee arthroscopy without a history of VTE, we suggest no thromboprophylaxis (Grade 2B). CONCLUSIONS Optimal strategies for thromboprophylaxis after major orthopedic surgery include pharmacologic and mechanical approaches.
Journal of Clinical Epidemiology | 2011
Gordon H. Guyatt; Andrew D Oxman; Gunn Elisabeth Vist; Regina Kunz; Jan Brozek; Pablo Alonso-Coello; Victor M. Montori; Elie A. Akl; Ben Djulbegovic; Yngve Falck-Ytter; Susan L. Norris; John W Williams; David Atkins; Joerg J. Meerpohl; Holger J. Schünemann
In the GRADE approach, randomized trials start as high-quality evidence and observational studies as low-quality evidence, but both can be rated down if most of the relevant evidence comes from studies that suffer from a high risk of bias. Well-established limitations of randomized trials include failure to conceal allocation, failure to blind, loss to follow-up, and failure to appropriately consider the intention-to-treat principle. More recently recognized limitations include stopping early for apparent benefit and selective reporting of outcomes according to the results. Key limitations of observational studies include use of inappropriate controls and failure to adequately adjust for prognostic imbalance. Risk of bias may vary across outcomes (e.g., loss to follow-up may be far less for all-cause mortality than for quality of life), a consideration that many systematic reviews ignore. In deciding whether to rate down for risk of bias--whether for randomized trials or observational studies--authors should not take an approach that averages across studies. Rather, for any individual outcome, when there are some studies with a high risk, and some with a low risk of bias, they should consider including only the studies with a lower risk of bias.
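The starting points and rating logic described in this abstract can be sketched as a small Python helper. This is a minimal illustration under assumptions of my own: the mapping of GRADE's four quality levels to integers and all function and parameter names are hypothetical, not an official GRADE implementation.

```python
# GRADE's four quality-of-evidence levels, lowest to highest.
LEVELS = ["very low", "low", "moderate", "high"]

def quality_of_evidence(study_design, downgrades=0, upgrades=0):
    """Per GRADE, randomized trials start as high-quality evidence and
    observational studies as low-quality evidence. Each serious concern
    (e.g., risk of bias, inconsistency, indirectness) rates the body of
    evidence down one level; factors such as a large magnitude of effect
    can rate it up. (Hypothetical helper for illustration only.)"""
    start = 3 if study_design == "randomized" else 1
    # Clamp to the four-level scale.
    level = max(0, min(3, start - downgrades + upgrades))
    return LEVELS[level]
```

For example, a body of randomized trials rated down two levels (say, for risk of bias and inconsistency) would end up as low-quality evidence, the same starting point as observational studies.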
Journal of Clinical Epidemiology | 2011
Gordon H. Guyatt; Andrew D Oxman; Regina Kunz; James Woodcock; Jan Brozek; Mark Helfand; Pablo Alonso-Coello; Paul Glasziou; Roman Jaeschke; Elie A. Akl; Susan L. Norris; Gunn Elisabeth Vist; Philipp Dahm; Vijay K. Shukla; Julian P. T. Higgins; Yngve Falck-Ytter; Holger J. Schünemann
This article deals with inconsistency of relative (rather than absolute) treatment effects in binary/dichotomous outcomes. A body of evidence is not rated up in quality if studies yield consistent results, but may be rated down in quality if inconsistent. Criteria for evaluating consistency include similarity of point estimates, extent of overlap of confidence intervals, and statistical criteria including tests of heterogeneity and I². To explore heterogeneity, systematic review authors should generate and test a small number of a priori hypotheses related to patients, interventions, outcomes, and methodology. When inconsistency is large and unexplained, rating down quality for inconsistency is appropriate, particularly if some studies suggest substantial benefit, and others no effect or harm (rather than only large vs. small effects). Apparent subgroup effects may be spurious. Credibility is increased if subgroup effects are based on a small number of a priori hypotheses with a specified direction; subgroup comparisons come from within rather than between studies; tests of interaction generate low P-values; and the subgroup effects have a biological rationale.
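The I² statistic mentioned in this abstract is conventionally computed from Cochran's Q as I² = max(0, (Q − df) / Q) × 100%, where df = k − 1 for k studies. A minimal Python sketch (function and parameter names are my own, for illustration):

```python
def i_squared(q_statistic, num_studies):
    """Compute the I-squared heterogeneity statistic (as a percentage)
    from Cochran's Q: I^2 = max(0, (Q - df) / Q) * 100, with
    df = num_studies - 1. Values near 0% suggest little inconsistency
    across studies; larger values suggest more."""
    df = num_studies - 1
    if q_statistic <= 0:
        return 0.0
    return max(0.0, (q_statistic - df) / q_statistic) * 100.0
```

For instance, Q = 10 across 5 studies gives I² = (10 − 4)/10 × 100 = 60%, a degree of inconsistency that review authors would typically try to explain via their a priori subgroup hypotheses before rating down.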
Journal of Clinical Epidemiology | 2011
Gordon H. Guyatt; Andrew D Oxman; Regina Kunz; James Woodcock; Jan Brozek; Mark Helfand; Pablo Alonso-Coello; Yngve Falck-Ytter; Roman Jaeschke; Gunn Elisabeth Vist; Elie A. Akl; Piet N. Post; Susan L. Norris; Joerg J. Meerpohl; Vijay K. Shukla; Mona Nasser; Holger J. Schünemann
Direct evidence comes from research that directly compares the interventions in which we are interested when applied to the populations in which we are interested and measures outcomes important to patients. Evidence can be indirect in one of four ways. First, patients may differ from those of interest (the term applicability is often used for this form of indirectness). Secondly, the intervention tested may differ from the intervention of interest. Decisions regarding indirectness of patients and interventions depend on an understanding of whether biological or social factors are sufficiently different that one might expect substantial differences in the magnitude of effect. Thirdly, outcomes may differ from those of primary interest; for instance, surrogate outcomes that are not themselves important, but measured in the presumption that changes in the surrogate reflect changes in an outcome important to patients. A fourth type of indirectness, conceptually different from the first three, occurs when clinicians must choose between interventions that have not been tested in head-to-head comparisons. Making comparisons between treatments under these circumstances requires specific statistical methods and will be rated down in quality one or two levels depending on the extent of differences between the patient populations, co-interventions, measurements of the outcome, and the methods of the trials of the candidate interventions.