Malcolm R. Macleod
University of Edinburgh
Publication
Featured research published by Malcolm R. Macleod.
Annals of Neurology | 2006
Malcolm R. Macleod; Geoffrey A. Donnan; Laura L. Horky; H. Bart van der Worp; David W. Howells
Preclinical evaluation of neuroprotectants fostered high expectations of clinical efficacy. When these expectations were not met, the question arose whether the experiments were poor indicators of clinical outcome or whether the best drugs had simply not been taken forward to clinical trial. We therefore set out to contrast the experimental efficacy and scope of testing of drugs used clinically with those of drugs tested only experimentally.
Nature | 2012
Story C. Landis; Susan G. Amara; Khusru Asadullah; Christopher P. Austin; Robi Blumenstein; Eileen W. Bradley; Ronald G. Crystal; Robert B. Darnell; Robert J. Ferrante; Howard Fillit; Robert Finkelstein; Marc Fisher; Howard E. Gendelman; Robert M. Golub; John L. Goudreau; Robert A. Gross; Amelie K. Gubitz; Sharon E. Hesterlee; David W. Howells; John R. Huguenard; Katrina Kelner; Walter J. Koroshetz; Dimitri Krainc; Stanley E. Lazic; Michael S. Levine; Malcolm R. Macleod; John M. McCall; Richard T. Moxley; Kalyani Narasimhan; L.J. Noble
The US National Institute of Neurological Disorders and Stroke convened major stakeholders in June 2012 to discuss how to improve the methodological reporting of animal studies in grant applications and publications. The main workshop recommendation is that at a minimum studies should report on sample-size estimation, whether and how animals were randomized, whether investigators were blind to the treatment, and the handling of data. We recognize that achieving a meaningful improvement in the quality of reporting will require a concerted effort by investigators, reviewers, funding agencies and journal editors. Requiring better reporting of animal studies will raise awareness of the importance of rigorous study design to accelerate scientific progress.
PLOS Medicine | 2010
H. Bart van der Worp; David W. Howells; Emily S. Sena; Michelle J Porritt; Sarah S J Rewell; Malcolm R. Macleod
H. Bart van der Worp and colleagues discuss the controversies and possibilities of translating the results of animal experiments into human clinical trials.
The Lancet | 2014
John P. A. Ioannidis; Sander Greenland; Mark A. Hlatky; Muin J. Khoury; Malcolm R. Macleod; David Moher; Kenneth F. Schulz; Robert Tibshirani
Correctable weaknesses in the design, conduct, and analysis of biomedical and public health research studies can produce misleading results and waste valuable resources. Small effects can be difficult to distinguish from bias introduced by study design and analyses. An absence of detailed written protocols and poor documentation of research is common. Information obtained might not be useful or important, and statistical precision or power is often too low or used in a misleading way. Insufficient consideration might be given to both previous and continuing studies. Arbitrary choice of analyses and an overemphasis on random extremes might affect the reported findings. Several problems relate to the research workforce, including failure to involve experienced statisticians and methodologists, failure to train clinical researchers and laboratory scientists in research methods and design, and the involvement of stakeholders with conflicts of interest. Inadequate emphasis is placed on recording of research decisions and on reproducibility of research. Finally, reward systems incentivise quantity more than quality, and novelty more than reliability. We propose potential solutions for these problems, including improvements in protocols and documentation, consideration of evidence from studies in progress, standardisation of research efforts, optimisation and training of an experienced and non-conflicted scientific workforce, and reconsideration of scientific reward systems.
BMJ | 2007
Pablo Perel; Ian Roberts; Emily S. Sena; Philipa Wheble; Catherine Briscoe; Peter Sandercock; Malcolm R. Macleod; Luciano Mignini; Pradeep Jayaram; Khalid S. Khan
Objective To examine concordance between treatment effects in animal experiments and clinical trials. Study design Systematic review. Data sources Medline, Embase, SIGLE, NTIS, Science Citation Index, CAB, BIOSIS. Study selection Animal studies for interventions with unambiguous evidence of a treatment effect (benefit or harm) in clinical trials: head injury, antifibrinolytics in haemorrhage, thrombolysis in acute ischaemic stroke, tirilazad in acute ischaemic stroke, antenatal corticosteroids to prevent neonatal respiratory distress syndrome, and bisphosphonates to treat osteoporosis. Review methods Data were extracted on study design, allocation concealment, number of randomised animals, type of model, intervention, and outcome. Results Corticosteroids did not show any benefit in clinical trials of treatment for head injury but did show a benefit in animal models (pooled odds ratio for adverse functional outcome 0.58, 95% confidence interval 0.41 to 0.83). Antifibrinolytics reduced bleeding in clinical trials but the data were inconclusive in animal models. Thrombolysis improved outcome in patients with ischaemic stroke. In animal models, tissue plasminogen activator reduced infarct volume by 24% (95% confidence interval 20% to 28%) and improved neurobehavioural scores by 23% (17% to 29%). Tirilazad was associated with a worse outcome in patients with ischaemic stroke. In animal models, tirilazad reduced infarct volume by 29% (21% to 37%) and improved neurobehavioural scores by 48% (29% to 67%). Antenatal corticosteroids reduced respiratory distress and mortality in neonates whereas in animal models respiratory distress was reduced but the effect on mortality was inconclusive (odds ratio 4.2, 95% confidence interval 0.85 to 20.9). Bisphosphonates increased bone mineral density in patients with osteoporosis. 
In animal models the bisphosphonate alendronate increased bone mineral density compared with placebo by 11.0% (95% confidence interval 9.2% to 12.9%) in the combined results for the hip region. The corresponding treatment effect in the lumbar spine was 8.5% (5.8% to 11.2%) and in the combined results for the forearms (baboons only) was 1.7% (−1.4% to 4.7%). Conclusions Discordance between animal and human studies may be due to bias or to the failure of animal models to mimic clinical disease adequately.
PLOS Biology | 2010
Emily S. Sena; H. Bart van der Worp; Philip M.W. Bath; David W. Howells; Malcolm R. Macleod
Publication bias confounds attempts to use systematic reviews to assess the efficacy of various interventions tested in experiments modelling acute ischaemic stroke, leading to a 30% overstatement of efficacy of interventions tested in animals.
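A common way to probe for publication bias of the kind described above is a funnel-plot asymmetry test. The abstract does not specify the method used, so the sketch below shows Egger's regression test, a standard choice: each study's standardised effect (effect divided by its standard error) is regressed on its precision, and an intercept far from zero signals small-study effects consistent with publication bias. The input values in the usage line are made-up illustrative numbers, not data from the review.

```python
def egger_test(effects, std_errors):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses each study's standardised effect (effect / SE) on its
    precision (1 / SE) by ordinary least squares. An intercept far
    from zero suggests small-study effects consistent with
    publication bias. Returns (intercept, slope).
    """
    x = [1.0 / se for se in std_errors]                  # precision
    z = [y / se for y, se in zip(effects, std_errors)]   # standardised effect
    n = len(x)
    mx = sum(x) / n
    mz = sum(z) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxz = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
    slope = sxz / sxx            # estimate of the underlying pooled effect
    intercept = mz - slope * mx  # asymmetry term; ~0 in an unbiased funnel
    return intercept, slope

# Hypothetical example: four studies with identical effects and varying
# precision fall exactly on the funnel axis, so the intercept is zero.
b0, b1 = egger_test([0.5, 0.5, 0.5, 0.5], [0.1, 0.2, 0.3, 0.4])
```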
Stroke | 2004
Malcolm R. Macleod; Tori O’Collins; David W. Howells; Geoffrey A. Donnan
Background and Purpose— The extensive neuroprotective literature describing the efficacy of candidate drugs in focal ischemia has yet to lead to the development of effective stroke treatments. Ideally, the choice of drugs taken forward to clinical trial should be based on an unbiased assessment of all available data. Such an assessment might include not only the efficacy of a drug but also the in vivo characteristics and limits—in terms of time window, dose, species, and model of ischemia used—to that efficacy. To our knowledge, such assessments have not been made. Nicotinamide is a candidate neuroprotective drug with efficacy in experimental stroke, but the limits to and characteristics of that efficacy have not been fully described. Methods— Systematic review and modified meta-analysis of studies of experimental stroke describing the efficacy of nicotinamide. The search strategy ensured ascertainment of studies published in full and those published in abstract only. DerSimonian and Laird random effects meta-analysis was used to account for heterogeneity between studies. Results— Nicotinamide improved outcome by 0.287 (95% confidence interval 0.227 to 0.347); it was more effective in temporary ischemia models, after intravenous administration, in animals without comorbidities, and in studies published in full rather than in abstract. Studies scoring highly on a quality measure gave more precise estimates of the global effect. Conclusions— Meta-analysis provides an effective technique for the aggregation of data from experimental stroke studies. We propose new standards for reporting such studies and a systematic approach to aggregating data from the neuroprotective literature.
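The abstract names the DerSimonian and Laird random-effects estimator. A minimal sketch of that calculation, assuming per-study effect sizes and within-study variances as inputs (the numbers in the usage line are hypothetical, not the nicotinamide data):

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with DerSimonian-Laird random-effects weights.

    effects:   list of per-study effect estimates
    variances: list of their within-study variances
    Returns (pooled_effect, ci_low, ci_high, tau2).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]   # fixed-effect (inverse-variance) weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q: weighted squared deviation from the fixed-effect estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)   # between-study variance, floored at 0
    # Random-effects weights add tau2 to each study's within-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, tau2

# Hypothetical three-study example
pooled, lo, hi, tau2 = dersimonian_laird([0.3, 0.25, 0.4], [0.01, 0.02, 0.015])
```

When between-study heterogeneity is absent (tau2 estimated as zero), the estimator reduces to ordinary inverse-variance fixed-effect pooling; heterogeneity widens the confidence interval by inflating every study's variance by tau2.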
Stroke | 2009
Malcolm R. Macleod; Marc Fisher; Emily S. Sena; Ulrich Dirnagl; Philip M.W. Bath; Alistair Buchan; H. Bart van der Worp; Richard J. Traystman; Kazuo Minematsu; Geoffrey A. Donnan; David W. Howells
Background and Purpose— As a research community, we have failed to demonstrate that drugs which show substantial efficacy in animal models of cerebral ischemia can also improve outcome in human stroke. Summary of Review— Accumulating evidence suggests this may be due, at least in part, to problems in the design, conduct and reporting of animal experiments which create a systematic bias resulting in the overstatement of neuroprotective efficacy. Conclusions— Here, we set out a series of measures to reduce bias in the design, conduct and reporting of animal experiments modeling human stroke.
Trends in Neurosciences | 2007
Emily S. Sena; H. Bart van der Worp; David W. Howells; Malcolm R. Macleod
The development of stroke drugs has been characterized by success in animal studies and subsequent failure in clinical trials. Animal studies might have overstated efficacy, or clinical trials might have understated efficacy; in either case we need to better understand the reasons for failure. Techniques borrowed from clinical trials have recently allowed the impact of publication and study-quality biases on published estimates of efficacy in animal experiments to be described. On the basis of these data, we propose minimum standards for the range and quality of pre-clinical animal data. We believe the adoption of these standards will lead to improved effectiveness and efficiency in the selection of drugs for clinical trials in stroke and in the design of those trials.