
Publication


Featured research published by Craig Ramsay.


BMJ | 2016

ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions

Jonathan A C Sterne; Miguel A. Hernán; Barnaby C Reeves; Jelena Savovic; Nancy D Berkman; Meera Viswanathan; David Henry; Douglas G. Altman; Mohammed T Ansari; Isabelle Boutron; James Carpenter; An-Wen Chan; Rachel Churchill; Jonathan J Deeks; Asbjørn Hróbjartsson; Jamie Kirkham; Peter Jüni; Yoon K. Loke; Theresa D Pigott; Craig Ramsay; Deborah Regidor; Hannah R. Rothstein; Lakhbir Sandhu; Pasqualina Santaguida; Holger J. Schunemann; B. Shea; Ian Shrier; Peter Tugwell; Lucy Turner; Jeffrey C. Valentine

Non-randomised studies of the effects of interventions are critical to many areas of healthcare evaluation, but their results may be biased. It is therefore important to understand and appraise their strengths and weaknesses. We developed ROBINS-I (“Risk Of Bias In Non-randomised Studies - of Interventions”), a new tool for evaluating risk of bias in estimates of the comparative effectiveness (harm or benefit) of interventions from studies that did not use randomisation to allocate units (individuals or clusters of individuals) to comparison groups. The tool will be particularly useful to those undertaking systematic reviews that include non-randomised studies.


Quality & Safety in Health Care | 2003

Research designs for studies evaluating the effectiveness of change and improvement strategies

Martin Eccles; Jeremy Grimshaw; Marion K Campbell; Craig Ramsay

The methods of evaluating change and improvement strategies are not well described. The design and conduct of a range of experimental and non-experimental quantitative designs are considered. Such study designs should usually be used in a context where they build on appropriate theoretical, qualitative and modelling work, particularly in the development of appropriate interventions. A range of experimental designs are discussed including single and multiple arm randomised controlled trials and the use of more complex factorial and block designs. The impact of randomisation at both group and individual levels and three non-experimental designs (uncontrolled before and after, controlled before and after, and time series analysis) are also considered. The design chosen will reflect both the needs (and resources) in any particular circumstances and also the purpose of the evaluation. The general principle underlying the choice of evaluative design is, however, simple—those conducting such evaluations should use the most robust design possible to minimise bias and maximise generalisability.


Quality & Safety in Health Care | 2003

Systematic reviews of the effectiveness of quality improvement strategies and programmes

Jeremy Grimshaw; L M McAuley; Lisa Bero; Roberto Grilli; Andrew D Oxman; Craig Ramsay; L Vale; Merrick Zwarenstein

Systematic reviews provide the best evidence on the effectiveness of healthcare interventions including quality improvement strategies. The methods of systematic review of individual patient randomised trials of healthcare interventions are well developed. We discuss methodological and practice issues that need to be considered when undertaking systematic reviews of quality improvement strategies including developing a review protocol, identifying and screening evidence sources, quality assessment and data abstraction, analytical methods, reporting systematic reviews, and appraising systematic reviews. This paper builds on our experiences within the Cochrane Effective Practice and Organisation of Care (EPOC) review group.


Journal of General Internal Medicine | 2006

Toward Evidence-Based Quality Improvement

Jeremy Grimshaw; Martin Eccles; Re Thomas; Graeme MacLennan; Craig Ramsay; Cynthia Fraser; Luke Vale

OBJECTIVES: To determine effectiveness and costs of different guideline dissemination and implementation strategies. DATA SOURCES: MEDLINE (1966 to 1998), HEALTH-STAR (1975 to 1998), Cochrane Controlled Trial Register (4th edn 1998), EMBASE (1980 to 1998), SIGLE (1980 to 1988), and the specialized register of the Cochrane Effective Practice and Organisation of Care group. REVIEW METHODS: INCLUSION CRITERIA: Randomized controlled trials, controlled clinical trials, controlled before and after studies, and interrupted time series evaluating guideline dissemination and implementation strategies targeting medically qualified health care professionals that reported objective measures of provider behavior and/or patient outcome. Two reviewers independently abstracted data on the methodologic quality of the studies, characteristics of study setting, participants, targeted behaviors, and interventions. We derived single estimates of dichotomous process variables (e.g., proportion of patients receiving appropriate treatment) for each study comparison and reported the median and range of effect sizes observed by study group and other quality criteria. RESULTS: We included 309 comparisons derived from 235 studies. The overall quality of the studies was poor. Seventy-three percent of comparisons evaluated multifaceted interventions. Overall, the majority of comparisons (86.6%) observed improvements in care; for example, the median absolute improvement in performance was 14.1% in 14 cluster-randomized comparisons of reminders, 8.1% in 4 cluster-randomized comparisons of dissemination of educational materials, 7.0% in 5 cluster-randomized comparisons of audit and feedback, and 6.0% in 13 cluster-randomized comparisons of multifaceted interventions involving educational outreach. We found no relationship between the number of components and the effects of multifaceted interventions. Only 29.4% of comparisons reported any economic data.
CONCLUSIONS: Current guideline dissemination and implementation strategies can lead to improvements in care within the context of rigorous evaluative studies. However, there is an imperfect evidence base to support decisions about which guideline dissemination and implementation strategies are likely to be efficient under different circumstances. Decision makers need to use considerable judgment about how best to use the limited resources they have for quality improvement activities.


Canadian Medical Association Journal | 2010

Effect of point-of-care computer reminders on physician behaviour: a systematic review

Kaveh G. Shojania; Alison Jennings; Alain Mayhew; Craig Ramsay; Martin Eccles; Jeremy Grimshaw

Background: The opportunity to improve care using computer reminders is one of the main incentives for implementing sophisticated clinical information systems. We conducted a systematic review to quantify the expected magnitude of improvements in processes of care from computer reminders delivered to clinicians during their routine activities. Methods: We searched the MEDLINE, Embase and CINAHL databases (to July 2008) and scanned the bibliographies of retrieved articles. We included studies in our review if they used a randomized or quasi-randomized design to evaluate improvements in processes or outcomes of care from computer reminders delivered to physicians during routine electronic ordering or charting activities. Results: Among the 28 trials (reporting 32 comparisons) included in our study, we found that computer reminders improved adherence to processes of care by a median of 4.2% (interquartile range [IQR] 0.8%–18.8%). Using the best outcome from each study, we found that the median improvement was 5.6% (IQR 2.0%–19.2%). A minority of studies reported larger effects; however, no study characteristic or reminder feature significantly predicted the magnitude of effect except in one institution, where a well-developed, “homegrown” clinical information system achieved larger improvements than in all other studies (median 16.8% [IQR 8.7%–26.0%] v. 3.0% [IQR 0.5%–11.5%]; p = 0.04). A trend toward larger improvements was seen for reminders that required users to enter a response (median 12.9% [IQR 2.7%–22.8%] v. 2.7% [IQR 0.6%–5.6%]; p = 0.09). Interpretation: Computer reminders produced much smaller improvements than those generally expected from the implementation of computerized order entry and electronic medical record systems. Further research is required to identify features of reminder systems consistently associated with clinically worthwhile improvements.


International Journal of Technology Assessment in Health Care | 2005

Effectiveness and efficiency of guideline dissemination and implementation strategies

Jeremy Grimshaw; Re Thomas; Graeme MacLennan; Cynthia Fraser; Craig Ramsay; Luke Vale; Paula Whitty; Martin Eccles; L. Matowe; L. Shirran; M.J.P. Wensing; R.F. Dijkstra; Cam Donaldson

Objectives: A systematic review of the effectiveness and costs of different guideline development, dissemination, and implementation strategies was undertaken. The resource implications of these strategies were estimated, and a framework for deciding when it is efficient to develop and introduce clinical guidelines was developed.


Computers in Biology and Medicine | 2004

Sample size calculator for cluster randomized trials

Marion K Campbell; Sean Thomson; Craig Ramsay; Graeme MacLennan; Jeremy Grimshaw

Cluster randomized trials, where individuals are randomized in groups, are increasingly being used in healthcare evaluation. The adoption of a clustered design has implications for design, conduct and analysis of studies. In particular, standard sample sizes have to be inflated for cluster designs, as outcomes for individuals within clusters may be correlated; inflation can be achieved either by increasing the cluster size or by increasing the number of clusters in the study. A sample size calculator is presented for calculating appropriate sample sizes for cluster trials, whilst allowing the implications of both methods of inflation to be considered.


Clinical Trials | 2004

Statistical evaluation of learning curve effects in surgical trials.

Jonathan Cook; Craig Ramsay; Peter Fayers

Randomized controlled trials (RCTs) in surgery have been impeded by concerns that improvements in the technical performance of a new technique over time (a “learning curve”) may distort comparisons. The statistical assessment of learning curves in trials has received little attention. In this paper, we discuss what a learning curve effect is, the factors that affect it, how to display it, and how to incorporate the learning effect into the trial analysis. Bayesian hierarchical models are proposed to adjust the trial results for the existence of a learning curve effect. The implications for trial evaluation and data collection are considered.
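To illustrate the phenomenon the paper addresses, the sketch below simulates an exponential learning curve for a single surgeon: operative time falls toward a plateau as cases accumulate, so early trial cases look worse than late ones. This is a toy simulation with assumed parameter values, not the Bayesian hierarchical model the paper proposes:

```python
import math
import random

random.seed(42)

# Assumed toy parameters: operative time (minutes) starts 40 min above a
# 90 min plateau and decays exponentially as the surgeon gains experience.
PLATEAU, INITIAL_EXCESS, LEARNING_RATE, NOISE_SD = 90.0, 40.0, 0.1, 5.0

def operative_time(case_number: int) -> float:
    """Expected time for a given case, plus random surgical variation."""
    expected = PLATEAU + INITIAL_EXCESS * math.exp(-LEARNING_RATE * case_number)
    return expected + random.gauss(0.0, NOISE_SD)

times = [operative_time(i) for i in range(100)]
early_mean = sum(times[:10]) / 10   # first 10 cases: surgeon still learning
late_mean = sum(times[-10:]) / 10   # last 10 cases: near the plateau
```

A naive comparison that pools early and late cases would attribute the `early_mean − late_mean` gap to the technique rather than to learning, which is exactly the distortion the trial analysis needs to adjust for.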


Emerging Infectious Diseases | 2006

Systematic Review of Antimicrobial Drug Prescribing in Hospitals

Peter Davey; Erwin Brown; Lynda Fenelon; Roger Finch; Ian M. Gould; Alison Holmes; Craig Ramsay; Eric Taylor; Phil J. Wiffen; Mark H. Wilcox

Standardizing methods and reporting could improve interventions that reduce Clostridium difficile–associated diarrhea and antimicrobial drug resistance.


Health Technology Assessment | 2012

Systematic review and economic modelling of the relative clinical benefit and cost-effectiveness of laparoscopic surgery and robotic surgery for removal of the prostate in men with localised prostate cancer

Craig Ramsay; Robert Pickard; Clare Robertson; Andrew Close; Luke Vale; Natalie Armstrong; D. A. Barocas; C. G. Eden; Cynthia Fraser; Tara Gurung; David Jenkinson; Xueli Jia; Thomas Lam; G Mowatt; David E. Neal; M. C. Robinson; J. Royle; Steve Rushton; Pawana Sharma; Mark Shirley; Naeem Soomro

BACKGROUND Complete surgical removal of the prostate, radical prostatectomy, is the most frequently used treatment option for men with localised prostate cancer. The use of laparoscopic (keyhole) and robot-assisted surgery has improved operative safety but the comparative effectiveness and cost-effectiveness of these options remains uncertain. OBJECTIVE This study aimed to determine the relative clinical effectiveness and cost-effectiveness of robotic radical prostatectomy compared with laparoscopic radical prostatectomy in the treatment of localised prostate cancer within the UK NHS. DATA SOURCES MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations, EMBASE, BIOSIS, Science Citation Index and Cochrane Central Register of Controlled Trials were searched from January 1995 until October 2010 for primary studies. Conference abstracts from meetings of the European, American and British Urological Associations were also searched. Costs were obtained from NHS sources and the manufacturer of the robotic system. Economic model parameters and distributions not obtained in the systematic review were derived from other literature sources and an advisory expert panel. REVIEW METHODS Evidence was considered from randomised controlled trials (RCTs) and non-randomised comparative studies of men with clinically localised prostate cancer (cT1 or cT2); outcome measures included adverse events, cancer-related, functional and patient-driven outcomes, and descriptors of care. Two reviewers abstracted data and assessed the risk of bias of the included studies. For meta-analyses, a Bayesian indirect mixed-treatment comparison was used. Cost-effectiveness was assessed using a discrete-event simulation model. RESULTS The searches identified 2722 potentially relevant titles and abstracts, from which 914 reports were selected for full-text eligibility screening.
Of these, data were included from 19,064 patients across one RCT and 57 non-randomised comparative studies, with very few studies considered at low risk of bias. The results of this study, although associated with some uncertainty, demonstrated that the outcomes were generally better for robotic than for laparoscopic surgery for major adverse events such as blood transfusion and organ injury rates and for rate of failure to remove the cancer (positive margin) (odds ratio 0.69; 95% credible interval 0.51 to 0.96; probability outcome favours robotic prostatectomy = 0.987). The predicted probability of a positive margin was 17.6% following robotic prostatectomy compared with 23.6% for laparoscopic prostatectomy. Restriction of the meta-analysis to studies at low risk of bias did not change the direction of effect but did decrease the precision of the effect size. There was no evidence of differences in cancer-related, patient-driven or dysfunction outcomes. The results of the economic evaluation suggested that when the difference in positive margins is equivalent to the estimates in the meta-analysis of all included studies, robotic radical prostatectomy was on average associated with an incremental cost per quality-adjusted life-year that is less than threshold values typically adopted by the NHS (£30,000) and becomes further reduced when the surgical capacity is high. LIMITATIONS The main limitations were the quantity and quality of the data available on cancer-related outcomes and dysfunction. CONCLUSIONS This study demonstrated that robotic prostatectomy had lower perioperative morbidity and a reduced risk of a positive surgical margin compared with laparoscopic prostatectomy although there was considerable uncertainty. Robotic prostatectomy will always be more costly to the NHS because of the fixed capital and maintenance charges for the robotic system. 
Our modelling showed that this excess cost can be reduced if capital costs of equipment are minimised and by maintaining a high case volume for each robotic system of at least 100-150 procedures per year. This finding was primarily driven by a difference in positive margin rate. There is a need for further research to establish how positive margin rates impact on long-term outcomes. FUNDING The National Institute for Health Research Health Technology Assessment programme.
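The headline economic comparison in this abstract reduces to an incremental cost-effectiveness ratio (ICER) judged against the NHS threshold of £30,000 per quality-adjusted life-year. A minimal sketch of that calculation; the cost and QALY deltas below are hypothetical illustrations, not outputs of the study's discrete-event simulation:

```python
NHS_THRESHOLD_GBP_PER_QALY = 30_000  # threshold cited in the abstract

def icer(delta_cost: float, delta_qaly: float) -> float:
    """Incremental cost per QALY gained of one treatment over another."""
    return delta_cost / delta_qaly

# Hypothetical inputs: robotic surgery costs 1,500 GBP more per patient
# but yields 0.08 extra QALYs (e.g. via fewer positive margins when the
# robotic system runs at high case volume, spreading its fixed costs).
ratio = icer(1_500, 0.08)   # 18,750 GBP per QALY
cost_effective = ratio < NHS_THRESHOLD_GBP_PER_QALY
```

This also shows why case volume matters in the study's conclusion: the fixed capital and maintenance charges are spread over more procedures at high volume, shrinking `delta_cost` and hence the ICER.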

Collaboration


Dive into Craig Ramsay's collaborations.

Top Co-Authors

Jennifer Burr
University of St Andrews

Andrew Elders
Glasgow Caledonian University