Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Mike Clarke is active.

Publication


Featured research published by Mike Clarke.


BMJ | 2009

The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration

Alessandro Liberati; Douglas G. Altman; Jennifer Tetzlaff; Cynthia D. Mulrow; Peter C Gøtzsche; John P. A. Ioannidis; Mike Clarke; P. J. Devereaux; Jos Kleijnen; David Moher

Systematic reviews and meta-analyses are essential to summarise evidence relating to efficacy and safety of healthcare interventions accurately and reliably. The clarity and transparency of these reports, however, are not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users. Since the development of the QUOROM (quality of reporting of meta-analysis) statement—a reporting guideline published in 1999—there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realising these issues, an international group that included experienced authors and methodologists developed PRISMA (preferred reporting items for systematic reviews and meta-analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions. The PRISMA statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this explanation and elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA statement, this document, and the associated website (www.prisma-statement.org/) should be helpful resources to improve reporting of systematic reviews and meta-analyses.


PLOS Medicine | 2009

The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration.

Alessandro Liberati; Douglas G. Altman; Jennifer Tetzlaff; Cynthia D. Mulrow; Peter C Gøtzsche; John P. A. Ioannidis; Mike Clarke; P. J. Devereaux; Jos Kleijnen; David Moher

Alessandro Liberati and colleagues present an Explanation and Elaboration of the PRISMA Statement, updated guidelines for the reporting of systematic reviews and meta-analyses.


Systematic Reviews | 2015

Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement

David Moher; Larissa Shamseer; Mike Clarke; Davina Ghersi; Alessandro Liberati; Mark Petticrew; Paul G. Shekelle; Lesley Stewart

Systematic reviews should build on a protocol that describes the rationale, hypothesis, and planned methods of the review; few reviews report whether a protocol exists. Detailed, well-described protocols can facilitate the understanding and appraisal of the review methods, as well as the detection of modifications to methods and selective reporting in completed reviews. We describe the development of a reporting guideline, the Preferred Reporting Items for Systematic reviews and Meta-Analyses for Protocols 2015 (PRISMA-P 2015). PRISMA-P consists of a 17-item checklist intended to facilitate the preparation and reporting of a robust protocol for the systematic review. Funders and those commissioning reviews might consider mandating the use of the checklist to facilitate the submission of relevant protocol information in funding applications. Similarly, peer reviewers and editors can use the guidance to gauge the completeness and transparency of a systematic review protocol submitted for publication in a journal or other medium.


The Lancet | 2005

International subarachnoid aneurysm trial (ISAT) of neurosurgical clipping versus endovascular coiling in 2143 patients with ruptured intracranial aneurysms: a randomised comparison of effects on survival, dependency, seizures, rebleeding, subgroups, and aneurysm occlusion

Andrew Molyneux; Richard Kerr; Ly-Mee Yu; Mike Clarke; Mary Sneade; Julia Yarnold; Peter Sandercock

BACKGROUND Two types of treatment are being used for patients with ruptured intracranial aneurysms: endovascular detachable-coil treatment or craniotomy and clipping. We undertook a randomised, multicentre trial to compare these treatments in patients who were suitable for either treatment because the relative safety and efficacy of these approaches had not been established. Here we present clinical outcomes 1 year after treatment. METHODS 2143 patients with ruptured intracranial aneurysms, who were admitted to 42 neurosurgical centres, mainly in the UK and Europe, took part in the trial. They were randomly assigned to neurosurgical clipping (n=1070) or endovascular coiling (n=1073). The primary outcome was death or dependence at 1 year (defined by a modified Rankin scale of 3-6). Secondary outcomes included rebleeding from the treated aneurysm and risk of seizures. Long-term follow up continues. Analysis was in accordance with the randomised treatment. FINDINGS We report the 1-year outcomes for 1063 of 1073 patients allocated to endovascular treatment, and 1055 of 1070 patients allocated to neurosurgical treatment. 250 (23.5%) of 1063 patients allocated to endovascular treatment were dead or dependent at 1 year, compared with 326 (30.9%) of 1055 patients allocated to neurosurgery, an absolute risk reduction of 7.4% (95% CI 3.6-11.2, p=0.0001). The early survival advantage was maintained for up to 7 years and was significant (log rank p=0.03). The risk of epilepsy was substantially lower in patients allocated to endovascular treatment, but the risk of late rebleeding was higher. INTERPRETATION In patients with ruptured intracranial aneurysms suitable for both treatments, endovascular coiling is more likely to result in independent survival at 1 year than neurosurgical clipping; the survival benefit continues for at least 7 years. The risk of late rebleeding is low, but is more common after endovascular coiling than after neurosurgical clipping.
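
As a quick numerical check of the headline result, the sketch below recomputes the absolute risk reduction and its 95% confidence interval from the counts quoted in the abstract (250/1063 coiling vs 326/1055 clipping). The use of a simple Wald interval for the difference in proportions is an assumption for illustration; it may not be the exact interval used by the trial statisticians.

```python
from math import sqrt

# Counts quoted in the abstract: dead or dependent at 1 year.
coil_events, coil_n = 250, 1063      # endovascular coiling arm
clip_events, clip_n = 326, 1055      # neurosurgical clipping arm

p_coil = coil_events / coil_n
p_clip = clip_events / clip_n

# Absolute risk reduction (clipping risk minus coiling risk).
arr = p_clip - p_coil

# Wald standard error and 95% CI for a difference in proportions
# (an assumption; the trial may have used a different interval).
se = sqrt(p_coil * (1 - p_coil) / coil_n + p_clip * (1 - p_clip) / clip_n)
lo, hi = arr - 1.96 * se, arr + 1.96 * se

print(f"ARR = {arr:.1%} (95% CI {lo:.1%} to {hi:.1%})")
# ARR = 7.4% (95% CI 3.6% to 11.2%), matching the figures in the abstract.
```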


BMJ | 2002

Increasing response rates to postal questionnaires: systematic review

Phil Edwards; Ian Roberts; Mike Clarke; Carolyn DiGuiseppi; Sarah Pratap; Reinhard Wentz; Irene Kwan

Abstract Objective: To identify methods to increase response to postal questionnaires. Design: Systematic review of randomised controlled trials of any method to influence response to postal questionnaires. Studies reviewed: 292 randomised controlled trials including 258 315 participants. Interventions reviewed: 75 strategies for influencing response to postal questionnaires. Main outcome measure: The proportion of completed or partially completed questionnaires returned. Results: The odds of response were more than doubled when a monetary incentive was used (odds ratio 2.02; 95% confidence interval 1.79 to 2.27) and almost doubled when incentives were not conditional on response (1.71; 1.29 to 2.26). Response was more likely when short questionnaires were used (1.86; 1.55 to 2.24). Personalised questionnaires and letters increased response (1.16; 1.06 to 1.28), as did the use of coloured ink (1.39; 1.16 to 1.67). The odds of response were more than doubled when the questionnaires were sent by recorded delivery (2.21; 1.51 to 3.25) and increased when stamped return envelopes were used (1.26; 1.13 to 1.41) and questionnaires were sent by first class post (1.12; 1.02 to 1.23). Contacting participants before sending questionnaires increased response (1.54; 1.24 to 1.92), as did follow up contact (1.44; 1.22 to 1.70) and providing non-respondents with a second copy of the questionnaire (1.41; 1.02 to 1.94). Questionnaires designed to be of more interest to participants were more likely to be returned (2.44; 1.99 to 3.01), but questionnaires containing questions of a sensitive nature were less likely to be returned (0.92; 0.87 to 0.98). Questionnaires originating from universities were more likely to be returned than were questionnaires from other sources, such as commercial organisations (1.31; 1.11 to 1.54). Conclusions: Health researchers using postal questionnaires can improve the quality of their research by using the strategies shown to be effective in this systematic review. What is already known on this topic: Postal questionnaires are widely used in the collection of data in epidemiological studies and health research. Non-response to postal questionnaires reduces the effective sample size and can introduce bias. What this study adds: This systematic review includes more randomised controlled trials than any previously published review or meta-analysis on questionnaire response. The review has identified effective ways to increase response to postal questionnaires. The review will be updated regularly in the Cochrane Library.
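
To make the headline statistics concrete, here is a minimal sketch of how an odds ratio and its 95% confidence interval are derived from a single trial's 2x2 table. The counts are invented for illustration and are not taken from the review.

```python
from math import log, exp, sqrt

# Hypothetical 2x2 table for one trial (illustrative counts only):
# rows = incentive vs no incentive, columns = returned vs not returned.
a, b = 120, 80    # incentive arm: returned, not returned
c, d = 90, 110    # control arm:   returned, not returned

# Odds ratio and log-scale Wald 95% confidence interval.
odds_ratio = (a * d) / (b * c)
se_log_or = sqrt(1/a + 1/b + 1/c + 1/d)
lo = exp(log(odds_ratio) - 1.96 * se_log_or)
hi = exp(log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# The review pools such per-trial estimates across trials to give summary
# odds ratios like 2.02 (95% CI 1.79 to 2.27); the pooling method actually
# used is described in the full paper.
```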


The Lancet | 2009

Recombinant human erythropoiesis-stimulating agents and mortality in patients with cancer: a meta-analysis of randomised trials.

Julia Bohlius; Kurt Schmidlin; Corinne Brillant; Guido Schwarzer; Sven Trelle; Jerome Seidenfeld; Marcel Zwahlen; Mike Clarke; Olaf Weingart; Sabine Kluge; Margaret Piper; Dirk Rades; David P. Steensma; Benjamin Djulbegovic; Martin F Fey; Isabelle Ray‐Coquard; Mitchell Machtay; Volker Moebus; Gillian Thomas; Michael Untch; Martin Schumacher; Matthias Egger; Andreas Engert

BACKGROUND Erythropoiesis-stimulating agents reduce anaemia in patients with cancer and could improve their quality of life, but these drugs might increase mortality. We therefore did a meta-analysis of randomised controlled trials in which these drugs plus red blood cell transfusions were compared with transfusion alone for prophylaxis or treatment of anaemia in patients with cancer. METHODS Data for patients treated with epoetin alfa, epoetin beta, or darbepoetin alfa were obtained and analysed by independent statisticians using fixed-effects and random-effects meta-analysis. Analyses were by intention to treat. Primary endpoints were mortality during the active study period and overall survival during the longest available follow-up, irrespective of anticancer treatment, and in patients given chemotherapy. Tests for interactions were used to identify differences in effects of erythropoiesis-stimulating agents on mortality across prespecified subgroups. FINDINGS Data from a total of 13 933 patients with cancer in 53 trials were analysed. 1530 patients died during the active study period and 4993 overall. Erythropoiesis-stimulating agents increased mortality during the active study period (combined hazard ratio [cHR] 1.17, 95% CI 1.06-1.30) and worsened overall survival (1.06, 1.00-1.12), with little heterogeneity between trials (I² 0%, p=0.87 for mortality during the active study period, and I² 7.1%, p=0.33 for overall survival). 10 441 patients on chemotherapy were enrolled in 38 trials. The cHR for mortality during the active study period was 1.10 (0.98-1.24), and 1.04 (0.97-1.11) for overall survival. There was little evidence for a difference between trials of patients given different anticancer treatments (p for interaction=0.42). INTERPRETATION Treatment with erythropoiesis-stimulating agents in patients with cancer increased mortality during active study periods and worsened overall survival. The increased risk of death associated with treatment with these drugs should be balanced against their benefits. FUNDING German Federal Ministry of Education and Research, Medical Faculty of University of Cologne, and Oncosuisse (Switzerland).
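
The abstract reports fixed-effects and random-effects pooling together with I² heterogeneity statistics. As a rough illustration of the fixed-effect (inverse-variance) calculation, the sketch below pools made-up per-trial log hazard ratios and computes Cochran's Q and I²; the numbers are invented and are not the trial data.

```python
import numpy as np

# Illustrative per-trial log hazard ratios and standard errors (not real data).
log_hr = np.array([0.20, 0.10, 0.25, 0.05])
se = np.array([0.15, 0.20, 0.30, 0.25])

# Fixed-effect (inverse-variance) pooled estimate.
w = 1 / se**2
pooled = np.sum(w * log_hr) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
chr_est = np.exp(pooled)
lo, hi = np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se)

# Cochran's Q and I² quantify between-trial heterogeneity.
q = np.sum(w * (log_hr - pooled)**2)
df = len(log_hr) - 1
i2 = max(0.0, (q - df) / q) * 100

print(f"combined HR = {chr_est:.2f} (95% CI {lo:.2f} to {hi:.2f}), I² = {i2:.1f}%")
```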


The Lancet | 2014

Reducing waste from incomplete or unusable reports of biomedical research

Paul Glasziou; Douglas G. Altman; Patrick M. Bossuyt; Isabelle Boutron; Mike Clarke; Steven A. Julious; Susan Michie; David Moher; Elizabeth Wager

Research publication can both communicate and miscommunicate. Unless research is adequately reported, the time and resources invested in the conduct of research is wasted. Reporting guidelines such as CONSORT, STARD, PRISMA, and ARRIVE aim to improve the quality of research reports, but all are much less adopted and adhered to than they should be. Adequate reports of research should clearly describe which questions were addressed and why, what was done, what was shown, and what the findings mean. However, substantial failures occur in each of these elements. For example, studies of published trial reports showed that the poor description of interventions meant that 40-89% were non-replicable; comparisons of protocols with publications showed that most studies had at least one primary outcome changed, introduced, or omitted; and investigators of new trials rarely set their findings in the context of a systematic review, and cited a very small and biased selection of previous relevant trials. Although best documented in reports of controlled trials, inadequate reporting occurs in all types of studies-animal and other preclinical studies, diagnostic studies, epidemiological studies, clinical prediction research, surveys, and qualitative studies. In this report, and in the Series more generally, we point to a waste at all stages in medical research. Although a more nuanced understanding of the complex systems involved in the conduct, writing, and publication of research is desirable, some immediate action can be taken to improve the reporting of research. Evidence for some recommendations is clear: change the current system of research rewards and regulations to encourage better and more complete reporting, and fund the development and maintenance of infrastructure to support better reporting, linkage, and archiving of all elements of research. However, the high amount of waste also warrants future investment in the monitoring of and research into reporting of research, and active implementation of the findings to ensure that research reports better address the needs of the range of research users.


BMJ | 2001

Forest plots: trying to see the wood and the trees.

Steff Lewis; Mike Clarke

Few systematic reviews containing meta-analyses are complete without a forest plot. But what are forest plots, and where did they come from? Summary points: Forest plots show the information from the individual studies that went into the meta-analysis at a glance. They show the amount of variation between the studies and an estimate of the overall result. Forest plots, in various forms, have been published for about 20 years. During this time, they have been improved, but it is still not easy to draw them in most standard computer packages. In a typical forest plot, the results of component studies are shown as squares centred on the point estimate of the result of each study. A horizontal line runs through the square to show its confidence interval—usually, but not always, a 95% confidence interval. The overall estimate from the meta-analysis and its confidence interval are put at the bottom, represented as a diamond. The centre of the diamond …
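
As an illustration of the layout described above, the following sketch draws a rudimentary forest plot with matplotlib: squares at each study's point estimate, horizontal lines for the confidence intervals, and a diamond for the pooled estimate at the bottom. The study values are invented for the example.

```python
import matplotlib.pyplot as plt

# Invented study estimates: (label, odds ratio, lower CI, upper CI).
studies = [("Study A", 1.8, 1.2, 2.7), ("Study B", 1.3, 0.9, 1.9),
           ("Study C", 2.1, 1.1, 4.0), ("Study D", 1.5, 1.1, 2.1)]
pooled = (1.6, 1.3, 1.9)  # pooled estimate and its CI (also invented)

fig, ax = plt.subplots()
for i, (label, est, lo, hi) in enumerate(studies):
    y = len(studies) - i
    ax.plot([lo, hi], [y, y], color="black")            # horizontal CI line
    ax.plot(est, y, "s", color="black", markersize=8)   # square at the point estimate

# Pooled estimate drawn as a diamond below the studies.
est, lo, hi = pooled
ax.fill([lo, est, hi, est], [0, 0.25, 0, -0.25], color="black")

ax.axvline(1.0, linestyle="--", color="grey")           # line of no effect
ax.set_xscale("log")
ax.set_yticks(range(len(studies), 0, -1))
ax.set_yticklabels([s[0] for s in studies])
ax.set_xlabel("Odds ratio (log scale)")
plt.show()
```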


Clinical Trials | 2005

Meta-analysis of individual patient data from randomized trials: a review of methods used in practice

M.C. Simmonds; Julian P. T. Higgins; Lesley Stewart; J.F. Tierney; Mike Clarke; Simon G. Thompson

Background Meta-analyses based on individual patient data (IPD) are regarded as the gold standard for systematic reviews. However, the methods used for analysing and presenting results from IPD meta-analyses have received little discussion. Methods We review 44 IPD meta-analyses published during the years 1999–2001. We summarize whether they obtained all the data they sought, what types of approaches were used in the analysis, including assumptions of common or random effects, and how they examined the effects of covariates. Results Twenty-four out of 44 analyses focused on time-to-event outcomes, and most analyses (28) estimated treatment effects within each trial and then combined the results assuming a common treatment effect across trials. Three analyses failed to stratify by trial, analysing the data as if they came from a single mega-trial. Only nine analyses used random effects methods. Covariate-treatment interactions were generally investigated by subgrouping patients. Seven of the meta-analyses included data from less than 80% of the randomized patients sought, but did not address the resulting potential biases. Conclusions Although IPD meta-analyses have many advantages in assessing the effects of health care, there are several aspects that could be further developed to make fuller use of the potential of these time-consuming projects. In particular, IPD could be used to more fully investigate the influence of covariates on heterogeneity of treatment effects, both within and between trials. The impact of heterogeneity, or the use of random effects, is seldom discussed. There is thus considerable scope for enhancing the methods of analysis and presentation of IPD meta-analysis.
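
The "estimate within each trial, then combine assuming a common effect" approach described above can be illustrated with a minimal two-stage sketch on invented per-trial counts; the data and the choice of a log odds ratio as the effect measure are assumptions for illustration, not taken from any of the reviewed meta-analyses.

```python
from math import log, exp, sqrt

# Invented IPD summarised per trial as 2x2 counts:
# (events_treat, n_treat, events_ctrl, n_ctrl) for each randomized trial.
trials = [(30, 100, 45, 100), (22, 80, 28, 75), (50, 150, 70, 160)]

# Stage 1: estimate a log odds ratio and its variance within each trial,
# so the analysis stays stratified by trial (no "single mega-trial" pooling).
effects, variances = [], []
for et, nt, ec, nc in trials:
    a, b = et, nt - et          # treatment arm: events, non-events
    c, d = ec, nc - ec          # control arm:   events, non-events
    effects.append(log((a * d) / (b * c)))
    variances.append(1/a + 1/b + 1/c + 1/d)

# Stage 2: inverse-variance pooling assuming a common (fixed) treatment effect.
weights = [1 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))

print(f"pooled OR = {exp(pooled):.2f} "
      f"(95% CI {exp(pooled - 1.96 * pooled_se):.2f} "
      f"to {exp(pooled + 1.96 * pooled_se):.2f})")
```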


BMJ | 2005

Need for expertise based randomised controlled trials

P.J. Devereaux; Mohit Bhandari; Mike Clarke; Victor M. Montori; Deborah J. Cook; Salim Yusuf; David L. Sackett; Claudio S. Cinà; S.D. Walter; Brian Haynes; Holger J. Schünemann; Geoffrey R. Norman; Gordon H. Guyatt

Surgical procedures are less likely to be rigorously evidence based than drug treatments because of difficulties with randomisation. Expertise based trials could be the way forward. Although conventional randomised controlled trials are widely recognised as the most reliable method to evaluate pharmacological interventions,1 2 scepticism about their role in nonpharmacological interventions (such as surgery) remains.3–6 Conventional randomised controlled trials typically randomise participants to one of two interventions (A or B) and individual clinicians give intervention A to some participants and B to others. An alternative trial design, the expertise based randomised controlled trial, randomises participants to clinicians with expertise in intervention A or clinicians with expertise in intervention B, and the clinicians perform only the procedure they are expert in. We present evidence to support our argument that increased use of the expertise based design will enhance the validity, applicability, feasibility, and ethical integrity of randomised controlled trials in surgery, as well as in other areas. We focus on established surgical interventions rather than new surgical procedures in which clinicians have not established expertise. Investigators have used the expertise based design when conventional randomised controlled trials were impossible because different specialty groups provided the interventions under evaluation—for example, percutaneous transluminal coronary angioplasty versus coronary artery bypass graft surgery.7–9 In 1980, Van der Linden suggested randomising participants to clinicians committed to performing different interventions in an area in which a conventional randomised controlled trial was possible.10 Since that time, however, the expertise based design has been little used, even in areas where it has high potential (such as surgery, physiotherapy, and chiropractic). Differential expertise between procedures: Because it takes training and experience to develop expertise in surgical interventions, individual surgeons tend to solely or primarily use a single surgical approach to treat a specific problem.10 11 …

Collaboration


Dive into Mike Clarke's collaborations.

Top Co-Authors

David Moher
Ottawa Hospital Research Institute

Declan Devane
National University of Ireland

Andrew S Duncombe
University Hospital Southampton NHS Foundation Trust