Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Brian Haynes is active.

Publication


Featured research published by Brian Haynes.


BMJ | 2008

Improving the reporting of pragmatic trials: an extension of the CONSORT statement

Merrick Zwarenstein; Shaun Treweek; Joel Gagnier; Douglas G. Altman; Sean Tunis; Brian Haynes; Andrew D Oxman; David Moher

Background The CONSORT statement is intended to improve reporting of randomised controlled trials and focuses on minimising the risk of bias (internal validity). The applicability of a trial’s results (generalisability or external validity) is also important, particularly for pragmatic trials. A pragmatic trial (a term first used in 1967 by Schwartz and Lellouch) can be broadly defined as a randomised controlled trial whose purpose is to inform decisions about practice. This extension of the CONSORT statement is intended to improve the reporting of such trials and focuses on applicability. Methods At two, two-day meetings held in Toronto in 2005 and 2008, we reviewed the CONSORT statement and its extensions, the literature on pragmatic trials and applicability, and our experiences in conducting pragmatic trials. Recommendations We recommend extending eight CONSORT checklist items for reporting of pragmatic trials: the background, participants, interventions, outcomes, sample size, blinding, participant flow, and generalisability of the findings. These extensions are presented, along with illustrative examples of reporting, and an explanation of each extension. Adherence to these reporting criteria will make it easier for decision makers to judge how applicable the results of randomised controlled trials are to their own conditions. Empirical studies are needed to ascertain the usefulness and comprehensiveness of these CONSORT checklist item extensions. In the meantime we recommend that those who support, conduct, and report pragmatic trials should use this extension of the CONSORT statement to facilitate the use of trial results in decisions about health care. Pragmatic trials are designed to inform decisions about practice, but poor reporting can reduce their usefulness. The CONSORT and Practihc groups describe modifications to the CONSORT guidelines to help readers assess the applicability of the results


BMJ | 2005

Need for expertise based randomised controlled trials

P.J. Devereaux; Mohit Bhandari; Mike Clarke; Victor M. Montori; Deborah J. Cook; Salim Yusuf; David L. Sackett; Claudio S. Cinà; S.D. Walter; Brian Haynes; Holger J. Schünemann; Geoffrey R. Norman; Gordon H. Guyatt

Surgical procedures are less likely to be rigorously evidence based than drug treatments because of difficulties with randomisation. Expertise based trials could be the way forward Although conventional randomised controlled trials are widely recognised as the most reliable method to evaluate pharmacological interventions,1 2 scepticism about their role in nonpharmacological interventions (such as surgery) remains.3–6 Conventional randomised controlled trials typically randomise participants to one of two interventions (A or B) and individual clinicians give intervention A to some participants and B to others. An alternative trial design, the expertise based randomised controlled trial, randomises participants to clinicians with expertise in intervention A or clinicians with expertise in intervention B, and the clinicians perform only the procedure they are expert in. We present evidence to support our argument that increased use of the expertise based design will enhance the validity, applicability, feasibility, and ethical integrity of randomised controlled trials in surgery, as well as in other areas. We focus on established surgical interventions rather than new surgical procedures in which clinicians have not established expertise. Investigators have used the expertise based design when conventional randomised controlled trials were impossible because different specialty groups provided the interventions under evaluation—for example, percutaneous transluminal coronary angioplasty versus coronary artery bypass graft surgery.7–9 In 1980, Van der Linden suggested randomising participants to clinicians committed to performing different interventions in an area in which a conventional randomised controlled trial was possible.10 Since that time, however, the expertise based design has been little used, even in areas where it has high potential (such as surgery, physiotherapy, and chiropractic).
Differential expertise between procedures

Because it takes training and experience to develop expertise in surgical interventions, individual surgeons tend to solely or primarily use a single surgical approach to treat a specific problem.10 11 …


BMJ | 1999

Can it work? Does it work? Is it worth it? The testing of healthcare interventions is evolving

Brian Haynes

General practice p 676 The British pioneer clinical epidemiologist Archie Cochrane defined three concepts related to testing healthcare interventions.1 Efficacy is the extent to which an intervention does more good than harm under ideal circumstances (“Can it work?”). Effectiveness assesses whether an intervention does more good than harm when provided under usual circumstances of healthcare practice (“Does it work in practice?”). Efficiency measures the effect of an intervention in relation to the resources it consumes (“Is it worth it?”). Trials of efficacy and effectiveness have also been described as explanatory and management trials, respectively,2 and efficiency trials are more often called cost effectiveness or cost benefit studies. Almost all clinical trials assess efficacy. Such trials typically select patients who are carefully diagnosed; are at highest risk of adverse outcomes from the disease in question; lack other serious illnesses; and are most likely to follow and respond to the treatment of interest. This treatment will be prescribed by doctors who are most likely to …


Evidence-Based Nursing | 2005

The paths from research to improved health outcomes

Paul Glasziou; Brian Haynes

Evidence-based practice aims to provide clinicians and patients with choices about the most effective care based on the best available research evidence. To patients, this is a natural expectation. To clinicians, this is a near impossible dream. The US report Crossing the quality chasm has documented and drawn attention to the gap between what we know and what we do.1 The report identified 3 types of quality problems—overuse, underuse, and misuse. It suggested “The burden of harm conveyed by the collective impact of all of our healthcare quality problems is staggering.” Although attention has focused on misuse (or error), a larger portion of the preventable burden is likely to be the evidence-practice gaps of underuse and overuse. Research that should change practice is often ignored for years—for example, crystalloid (rather than colloid) for shock,2 supine position after lumbar puncture,3 bed rest for any medical condition,3 and appropriate use of anticoagulants and aspirin in patients with atrial fibrillation.4 Antman et al documented the substantial delays between cardiovascular trial results and textbook recommendations.5 However, even when best practices are well known, they are often poorly implemented: national surveys show that most hypertensive patients are undetected, untreated, or inadequately controlled,6 which has led to the current interest in knowledge translation.7 What role does evidence-based practice 8 have in bridging the research-practice gap? Surveys of clinicians suggest that a major barrier to using current research evidence is the time, effort, and skills needed to access the right information among the massive volumes of research.9 Even for a (mythical) up to date clinician, the problem of maintaining currency is immense. Each year Medline indexes >560 000 new articles, and Cochrane Central adds about 20 000 new randomised trials. This is about 1500 new articles and 55 …


BMJ | 2004

Evidence based medicine has come a long way

Gordon H. Guyatt; Deborah J. Cook; Brian Haynes

The second decade will be as exciting as the first Evidence based medicine seeks to empower clinicians so that they can develop independent views regarding medical claims and controversies. Although many helped to lay the foundations of evidence based medicine,1 Archie Cochrane's insistence that clinical disciplines summarise evidence concerning their practices, Alvan Feinstein's role in defining the principles of quantitative clinical reasoning, and David Sackett's innovation in teaching critical appraisal all proved seminal. The term evidence based medicine,2 and the first comprehensive description of its tenets, appeared little more than a decade ago. In its original formulation, this discipline reduced the emphasis on unsystematic clinical experience and pathophysiological rationale, and promoted the examination of evidence from clinical research. Evidence based medicine therefore required new skills including efficient literature searching and the application of formal rules of evidence in evaluating the clinical literature. Important developments in evidence based medicine over the subsequent decade included the increasing popularity of structured abstracts3 and secondary journals summarising …


Implementation Science | 2012

The role of organizational context and individual nurse characteristics in explaining variation in use of information technologies in evidence based practice

Diane Doran; Brian Haynes; Carole A. Estabrooks; Andre W. Kushniruk; Adam Dubrowski; Irmajean Bajnok; Linda McGillis Hall; Mingyang Li; Jennifer Carryer; Dawn Jedras; Yu Qing Bai

Background: There is growing awareness of the role of information technology in evidence-based practice. The purpose of this study was to investigate the role of organizational context and nurse characteristics in explaining variation in nurses’ use of personal digital assistants (PDAs) and mobile Tablet PCs for accessing evidence-based information. The Promoting Action on Research Implementation in Health Services (PARIHS) model provided the framework for studying the impact of providing nurses with PDA-supported, evidence-based practice resources, and for studying the organizational, technological, and human resource variables that impact nurses’ use patterns.

Methods: A survey design was used, involving baseline and follow-up questionnaires. The setting included 24 organizations representing three sectors: hospitals, long-term care (LTC) facilities, and community organizations (home care and public health). The sample consisted of 710 participants (response rate 58%) at Time 1, and 469 for whom both Time 1 and Time 2 follow-up data were obtained (response rate 66%). A hierarchical regression model (HLM) was used to evaluate the effect of predictors from all levels simultaneously.

Results: The Chi square result indicated PDA users reported using their device more frequently than Tablet PC users (p = 0.001). Frequency of device use was explained by ‘breadth of device functions’ and PDA versus Tablet PC. Frequency of Best Practice Guideline use was explained by ‘willingness to implement research,’ ‘structural and electronic resources,’ ‘organizational slack time,’ ‘breadth of device functions’ (positive effects), and ‘slack staff’ (negative effect). Frequency of Nursing Plus database use was explained by ‘culture,’ ‘structural and electronic resources,’ and ‘breadth of device functions’ (positive effects), and ‘slack staff’ (negative). ‘Organizational culture’ (positive), ‘breadth of device functions’ (positive), and ‘slack staff’ (negative) were associated with frequency of Lexi/PEPID drug dictionary use.

Conclusion: Access to PDAs and Tablet PCs supported nurses’ self-reported use of information resources. Several of the organizational context variables and one individual nurse variable explained variation in the frequency of information resource use.


Evidence-based Medicine | 2008

Less is more: where do the abstracts in the EBM journal come from?

Angela Eady; Paul Glasziou; Brian Haynes

Every 2 months, a brand new issue of Evidence-Based Medicine containing 20 abstracts is published. The compact appearance belies the mounds of sifting required to arrive at the finished journal. Readers should know, though, that the process for winnowing these select articles from the published literature is systematic and thorough. This article describes the 2-stage process of validity checks followed by a rating by clinicians for relevance that is used to select articles. Journal staff hand search 140 journals and examine each original study or review. This hand searching amounts to 60 000 articles assessed annually. The 140 journals include the highly rated general medical journals ( Lancet , N Engl J Med , JAMA , BMJ , CMAJ , Ann Int Med , and Cochrane reviews), plus various speciality journals that are selected on the basis of article yield—we …


Evidence-based Medicine | 2016

Letter in reply to ‘Evidence pyramids’ from Dr Kaufmann

Brian S. Alper; Brian Haynes

Dr Kaufmann1 suggests that readers should use the ‘New Evidence Pyramid’ which conveys a hierarchy of quality across primary studies2 rather than the ‘EBHC Pyramid 5.0’ which conveys a hierarchy of comprehensiveness across types of information resources (primary studies, systematic reviews, guidelines and synthesised summaries for clinical reference).3 However, these pyramids are complementary and address very different issues. The hierarchy of validity of primary studies (and the recognition that quality varies so the hierarchy …


The Open Cardiovascular Medicine Journal | 2014

Gaps in Medical and Device Therapy for Patients with Left Ventricular Systolic Dysfunction: The EchoGap Study.

Hisham Dokainish; Lauren Jewett; Robby Nieuwlaat; Joshua Coulson; Catherine Demers; Eva Lonn; Jeff S. Healey; Brian Haynes; Stuart J. Connolly

Objectives: To assess gaps between guidelines and medicine prescription/dosing and referral for defibrillator therapy in patients with left ventricular systolic dysfunction (LVSD). Methods: Outpatient echocardiography reports at an academic hospital centre were screened and outpatients with LVEF <40% were included. A questionnaire was mailed to the patients’ physician, querying prescription/dosing of ACE-inhibitors (ACEi), angiotensin receptor blockers (ARB), and beta-blockers (BB). Patients with LVEF <30% had additional questions on implantable cardiac defibrillator (ICD) referral. Results: Mean age was 69.6 ± 12.2 years and mean LVEF was 29.7 ± 6.5%. ACEi and/or ARB prescription rate was 260/309 (84.1%) versus 256/308 (83.1%) for BB (p = NS for comparison). Of patients on ACEi, 77/183 (42.1%) were on target dose, compared to 7/45 (15.5%) for ARB and 9/254 (3.5%) for BB (p < 0.01). Of 171/309 patients (55.3%) with LVEF <30%, 72/171 (42.1%) had an ICD and 16/171 (9.4%) were referred for one. Conclusion: Prescription rates of evidence-based HF medicines are relatively high in outpatients with LVSD referred for echocardiography at this Canadian academic medical centre; however, the proportion of patients at target doses was modest for ACEi and low for ARB and BB. Approximately half of patients who qualify for ICD by EF alone have one or were referred. Important reasons for patients with LVSD not on evidence-based therapy were identified.


Evidence-Based Nursing | 2007

Star ratings—a new feature coming soon to EB Nursing!

Donna Ciliska; Brian Haynes

Evidence-Based Nursing presents a new feature to guide your reading decisions. In upcoming issues of the journal, each abstract featured will include star ratings representing clinical relevance and newsworthiness as judged by nurses. These ratings are the averaged scores for clinical relevance and newsworthiness from MORE-EBN (McMaster Online Rating of Evidence–Evidence-Based Nursing). This internet-based system gathers ratings of new articles (that meet specific methodologic criteria) from our sentinel readers, a panel of >1800 nurses from around the world. Sentinel …

Collaboration


Dive into Brian Haynes's collaborations.

Top Co-Authors

Hertzel C. Gerstein

Population Health Research Institute
