Heart | 2021

Cardiovascular professional societies fall short in providing impartial, clear and evidence-based guidelines


Abstract


Clinical guidelines play an increasingly important role in the care of patients with cardiovascular disease. Approaches to guideline development reflect the need to integrate a complex and ever-expanding evidence base with new treatment options and clinical expertise to formulate recommendations that can then be implemented both by individual healthcare providers and across healthcare systems.

All guidelines for a specific disease condition start with the same evidence base, yet guidelines are developed in many different ways, by many different organisations, often addressing the same or overlapping types of cardiovascular disease, typically leading to at least subtle (and sometimes major) divergences in the resultant recommendations. Professional society recommendations, such as those generated by the European Society of Cardiology (ESC) and by the American Heart Association/American College of Cardiology (AHA/ACC), predominate, but many geographic regions have their own guidelines, tailoring recommendations to specific regional requirements. Government agencies and insurance providers also generate guidelines, either directly in published documents or indirectly by restricting reimbursement. Online medical textbooks, such as UpToDate, attempt to integrate and reconcile recommendations from multiple guideline sources, filling any gaps in clinical management with recommendations based on clinical expertise alone. Another approach is to convene an independent group of experts to address new practice-changing evidence rapidly, focusing on a specific question, as done by the BMJ Rapid Recommendations and the MAGIC Evidence Ecosystem Foundation.3

Why are there so many guidelines? What are the limitations of our current approach? How can we optimise guideline development to improve care of patients with cardiovascular disease?

All guidelines share two common purposes: first, to review, assess the quality of, summarise and interpret the published evidence base; and second, to provide clear recommendations for patient management. Other goals may differ between guidelines, such as balancing the good of the individual patient against population health, considerations of cost-effectiveness, the scope of the document and whether clinical expertise is used to address issues for which the scientific evidence base is insufficient. Guidelines also differ in the processes used to develop them, including the composition of the writing committee, input from stakeholders, management of potential conflicts of interest and the process for reviewing evidence and developing recommendations. International checklists that summarise best practices for guideline development are available, but current guideline development and publication often fail to meet these standards (figure 1).4

In this issue of Heart, Garbi presents a detailed description of the National Institute for Health and Care Excellence (NICE) clinical guideline development process. NICE is an independent public body that provides evidence-based guidelines to inform care provided within the English National Health Service. The article offers a thorough and transparent narrative of the process for appointing an advisory committee, determining the scope of each guideline and reviewing the clinical evidence. Compared with professional society guidelines, NICE gets it right on several fronts, while also showing where opportunities lie for everyone to improve:

1. Keep the cardiovascular specialists out of the room! Ensuring there are no direct financial or industry conflicts of interest is not enough; content experts are subject to implicit bias in the interpretation of data, personal clinical practice preferences and academic conflicts of interest. Although experts help frame the questions and have an opportunity to comment later, the evidence is systematically evaluated and analysed by independent experts.

2. All types of healthcare professionals, as well as other stakeholders (patients, administrators, advocates and members of the public), are involved in the process, with support from project managers and information specialists, as well as experts in systematic reviews and health economics. The chair facilitates careful consideration of the data and ensures balanced participation of all committee members; the ultimate decisions are made on a level playing field.

3. There are strict criteria for how the evidence is systematically brought together, analysed and judged. Professional society guidelines are much less strict and often rely on a 'we know the evidence' argument, typically with no description of the specific processes or methodology used for systematic evidence evaluation.

4. NICE guidance does not try to keep everyone happy; instead, it makes recommendations based on clinical efficacy and cost-effectiveness. It aims to reduce health inequalities, promoting the health of the population and not just the individual, and it is intended for a wide audience, not just healthcare professionals, using language accessible to patients and families.

5. Unlike the somewhat secretive process used by professional societies, the NICE process is extremely transparent: everyone can see how the recommendations were derived and the conclusions reached.

The NICE guideline development process should give us pause to consider more thoughtfully the scope of our professional society guidelines, the composition of our writing committees, the rigour of our evidence review and analysis, our approaches to determining the quality of the evidence and the strength of a recommendation, and the intended audience for the recommendations.

A particularly challenging aspect of guideline development is determining the quality (or level) of the evidence underlying each recommendation. The most stringent approach to evaluating clinical evidence is the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system. The quality of evidence is rated as high, moderate, low or very low, with explicit criteria used to upgrade or downgrade the rating, and the evidence evaluation is framed in a patient, intervention, comparator and outcome (PICO) format.
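The upgrade/downgrade logic described above lends itself to a simple illustration. The sketch below is purely hypothetical and is not part of GRADE or NICE tooling: the criterion names and the high/moderate/low/very low scale follow published GRADE concepts, but the one-level-per-criterion arithmetic is a deliberate simplification (GRADE allows one- or two-level moves and relies on panel judgement, not a score).

```python
# Toy illustration of GRADE-style evidence rating (not official GRADE tooling).
# Evidence from randomised trials starts "high" and observational evidence starts
# "low"; concerns (risk of bias, inconsistency, indirectness, imprecision,
# publication bias) move the rating down, while special strengths (large effect,
# dose-response gradient, plausible confounding working against the effect) move it up.

LEVELS = ["very low", "low", "moderate", "high"]


def grade_rating(study_design, downgrades=(), upgrades=()):
    """Return a simplified GRADE-style quality level for a body of evidence.

    study_design: "randomised" or "observational"
    downgrades/upgrades: criteria judged to apply; each shifts the rating by
    one level in this sketch.
    """
    start = 3 if study_design == "randomised" else 1  # "high" vs "low"
    score = start - len(downgrades) + len(upgrades)
    score = max(0, min(score, len(LEVELS) - 1))        # clamp to the scale
    return LEVELS[score]


# Example: randomised trials downgraded for imprecision and inconsistency.
print(grade_rating("randomised", downgrades=["imprecision", "inconsistency"]))
# -> "low"
```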

Volume 107
Pages 940 - 942
DOI 10.1136/heartjnl-2021-319176
Language English
Journal Heart
