Publication


Featured research published by Jamie Kirkham.


BMJ | 2010

The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews

Jamie Kirkham; Kerry Dwan; Douglas G. Altman; Carrol Gamble; Susanna Dodd; Rebecca Smyth; Paula Williamson

Objective To examine the prevalence of outcome reporting bias—the selection for publication of a subset of the original recorded outcome variables on the basis of the results—and its impact on Cochrane reviews.

Design A nine point classification system for missing outcome data in randomised trials was developed and applied to the trials assessed in a large, unselected cohort of Cochrane systematic reviews. Researchers who conducted the trials were contacted and the reason sought for the non-reporting of data. A sensitivity analysis was undertaken to assess the impact of outcome reporting bias on reviews that included a single meta-analysis of the review primary outcome.

Results More than half (157/283 (55%)) of the reviews did not include full data for the review primary outcome of interest from all eligible trials. The median amount of review outcome data missing for any reason was 10%, whereas 50% or more of the potential data were missing in 70 (25%) reviews. It was clear from the publications for 155 (6%) of the 2486 assessable trials that the researchers had measured and analysed the review primary outcome but did not report or only partially reported the results. For reports that did not mention the review primary outcome, our classification regarding the presence of outcome reporting bias was shown to have a sensitivity of 88% (95% CI 65% to 100%) and a specificity of 80% (95% CI 69% to 90%) on the basis of responses from 62 trialists. A third of Cochrane reviews (96/283 (34%)) contained at least one trial with a high suspicion of outcome reporting bias for the review primary outcome. In a sensitivity analysis undertaken for 81 reviews with a single meta-analysis of the primary outcome of interest, the treatment effect estimate was reduced by 20% or more in 19 (23%). Of the 42 meta-analyses with a statistically significant result, eight (19%) became non-significant after adjustment for outcome reporting bias and 11 (26%) would have overestimated the treatment effect by 20% or more.

Conclusions Outcome reporting bias is an under-recognised problem that affects the conclusions in a substantial proportion of Cochrane reviews. Individuals conducting systematic reviews need to address explicitly the issue of missing outcome data for their review to be considered a reliable source of evidence. Extra care is required during data extraction, reviewers should identify when a trial reports that an outcome was measured but no results were reported or events observed, and contact with trialists should be encouraged.
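For context, the sensitivity and specificity figures above describe how well the classification agreed with trialists' own accounts; both are simple ratios from a 2x2 confusion table. A minimal sketch with illustrative counts (the paper's actual table is not reproduced here):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts: bias-classification verdicts checked against
# trialists' responses (true/false positives and negatives).
sens, spec = sensitivity_specificity(14, 2, 40, 10)
print(sens, spec)  # 0.875 0.8
```

With the study's reported values (88% sensitivity, 80% specificity), roughly this proportion of truly biased and truly unbiased trials would be correctly flagged.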


BMJ | 2016

ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions

Jonathan A C Sterne; Miguel A. Hernán; Barnaby C Reeves; Jelena Savovic; Nancy D Berkman; Meera Viswanathan; David Henry; Douglas G. Altman; Mohammed T Ansari; Isabelle Boutron; James Carpenter; An-Wen Chan; Rachel Churchill; Jonathan J Deeks; Asbjørn Hróbjartsson; Jamie Kirkham; Peter Jüni; Yoon K. Loke; Theresa D Pigott; Craig Ramsay; Deborah Regidor; Hannah R. Rothstein; Lakhbir Sandhu; Pasqualina Santaguida; Holger J. Schunemann; B. Shea; Ian Shrier; Peter Tugwell; Lucy Turner; Jeffrey C. Valentine

Non-randomised studies of the effects of interventions are critical to many areas of healthcare evaluation, but their results may be biased. It is therefore important to understand and appraise their strengths and weaknesses. We developed ROBINS-I (“Risk Of Bias In Non-randomised Studies - of Interventions”), a new tool for evaluating risk of bias in estimates of the comparative effectiveness (harm or benefit) of interventions from studies that did not use randomisation to allocate units (individuals or clusters of individuals) to comparison groups. The tool will be particularly useful to those undertaking systematic reviews that include non-randomised studies.


BMJ | 2011

Frequency and reasons for outcome reporting bias in clinical trials: interviews with trialists

Rebecca Smyth; Jamie Kirkham; Ann Jacoby; Douglas G. Altman; Carrol Gamble; Paula Williamson

Objectives To provide information on the frequency and reasons for outcome reporting bias in clinical trials.

Design Trial protocols were compared with subsequent publication(s) to identify any discrepancies in the outcomes reported, and telephone interviews were conducted with the respective trialists to investigate more extensively the reporting of the research and the issue of unreported outcomes.

Participants Chief investigators, or lead or coauthors of trials, were identified from two sources: trials published since 2002 covered in Cochrane systematic reviews where at least one trial analysed was suspected of being at risk of outcome reporting bias (issue 4, 2006; issue 1, 2007; and issue 2, 2007 of the Cochrane Library); and a random sample of trial reports indexed on PubMed between August 2007 and July 2008.

Setting Australia, Canada, Germany, the Netherlands, New Zealand, the United Kingdom, and the United States.

Main outcome measures Frequency of incomplete outcome reporting—signified by outcomes that were specified in a trial’s protocol but not fully reported in subsequent publications—and trialists’ reasons for incomplete reporting of outcomes.

Results 268 trials were identified for inclusion (183 from the cohort of Cochrane systematic reviews and 85 from PubMed). Initially, 161 respective investigators responded to our requests for interview, 130 (81%) of whom agreed to be interviewed. However, failure to achieve subsequent contact, obtain a copy of the study protocol, or both meant that final interviews were conducted with 59 (37%) of the 161 trialists. Sixteen trial investigators failed to report analysed outcomes at the time of the primary publication, 17 trialists collected outcome data that were subsequently not analysed, and five trialists did not measure a prespecified outcome over the course of the trial. In almost all trials in which prespecified outcomes had been analysed but not reported (15/16, 94%), this under-reporting resulted in bias. In nearly a quarter of trials in which prespecified outcomes had been measured but not analysed (4/17, 24%), the “direction” of the main findings influenced the investigators’ decision not to analyse the remaining data collected. In 14 (67%) of the 21 randomly selected PubMed trials, there was at least one unreported efficacy or harm outcome. More than a quarter (6/21, 29%) of these trials were found to have displayed outcome reporting bias.

Conclusion The prevalence of incomplete outcome reporting is high. Trialists seemed generally unaware of the implications for the evidence base of not reporting all outcomes and protocol changes. A general lack of consensus regarding the choice of outcomes in particular clinical settings was evident and affects trial design, conduct, analysis, and reporting.


PLOS ONE | 2012

Adverse drug reactions in children--a systematic review.

Rebecca Smyth; Elizabeth Gargon; Jamie Kirkham; Lynne Cresswell; Su Golder; Rosalind L. Smyth; Paula Williamson

Background Adverse drug reactions (ADRs) in children are an important public health problem. We have undertaken a systematic review of observational studies in children in three settings: causing admission to hospital, occurring during hospital stay, and occurring in the community. We were particularly interested in understanding how ADRs might be better detected, assessed, and avoided.

Methods and Findings We searched nineteen electronic databases using a comprehensive search strategy. In total, 102 studies were included. The primary outcome was any clinical event described as an adverse drug reaction to one or more drugs. Additional information relating to the ADR was collected: associated drug classification; clinical presentation; associated risk factors; and methods used for assessing causality, severity, and avoidability. Seventy one percent (72/102) of studies assessed causality, and thirty four percent (34/102) performed a severity assessment. Only nineteen studies (19%) assessed avoidability. Incidence rates for ADRs causing hospital admission ranged from 0.4% to 10.3% of all children (pooled estimate of 2.9% (2.6%, 3.1%)) and from 0.6% to 16.8% of all children exposed to a drug during hospital stay. Anti-infectives and anti-epileptics were the most frequently reported therapeutic classes associated with ADRs in children admitted to hospital (17 and 12 studies, respectively) and children in hospital (24 and 14 studies, respectively), while anti-infectives and non-steroidal anti-inflammatory drugs (NSAIDs) were frequently reported as associated with ADRs in outpatient children (13 and 6 studies, respectively). Fourteen studies reported rates ranging from 7% to 98% of ADRs being definitely or possibly avoidable.

Conclusions There is an extensive literature investigating ADRs in children. Although these studies provide estimates of incidence in different settings and some indication of the therapeutic classes most frequently associated with ADRs, further work is needed to address how such ADRs may be prevented.
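A pooled incidence estimate like the 2.9% (2.6%, 3.1%) admission rate above is commonly obtained by inverse-variance weighting of the individual study proportions. A simplified fixed-effect sketch (the study counts below are hypothetical, and the review's actual pooling method may differ, for example a random-effects model):

```python
import math

def pooled_proportion(studies, z=1.96):
    """Fixed-effect (inverse-variance) pooled proportion with a Wald CI.

    studies: list of (events, sample_size) pairs. Studies with zero
    events would need a continuity correction before pooling.
    """
    weights, props = [], []
    for events, n in studies:
        p = events / n
        var = p * (1 - p) / n  # binomial variance of the estimated proportion
        weights.append(1 / var)
        props.append(p)
    pooled = sum(w * p for w, p in zip(weights, props)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, pooled - z * se, pooled + z * se

# Hypothetical admission-rate data from three studies.
print(pooled_proportion([(12, 400), (30, 900), (8, 350)]))
```

Larger, lower-variance studies dominate the weighted average, which is why the pooled interval is much narrower than the 0.4% to 10.3% range across individual studies.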


BMJ | 2014

Selective reporting bias of harm outcomes within studies: findings from a cohort of systematic reviews

Pooja Saini; Yoon K. Loke; Carrol Gamble; Douglas G. Altman; Paula Williamson; Jamie Kirkham

Objective To determine the extent and nature of selective non-reporting of harm outcomes in clinical studies that were eligible for inclusion in a cohort of systematic reviews.

Design Cohort study of systematic reviews from two databases.

Setting Outcome reporting bias in trials for harm outcomes (ORBIT II) in systematic reviews from the Cochrane Library and a separate cohort of systematic reviews of adverse events.

Participants 92 systematic reviews of randomised controlled trials and non-randomised studies published in the Cochrane Library between issue 9, 2012 and issue 2, 2013 (Cochrane cohort) and 230 systematic reviews published between 1 January 2007 and 31 December 2011 in other publications, synthesising data on harm outcomes (adverse event cohort).

Methods A 13 point classification system for missing outcome data on harm was developed and applied to the studies.

Results 86% (79/92) of reviews in the Cochrane cohort did not include full data from the main harm outcome of interest of each review for all of the eligible studies included within that review, as did 76% (173/230) in the adverse event cohort. Overall, the single primary harm outcome was inadequately reported in 76% (705/931) of the studies included in the 92 reviews from the Cochrane cohort and not reported in 47% (4159/8837) of the studies included in the 230 reviews from the adverse event cohort. In a sample of primary studies not reporting on the single primary harm outcome in the review, scrutiny of the study publication revealed that outcome reporting bias was suspected in nearly two thirds (63%, 248/393).

Conclusions The number of reviews suspected of outcome reporting bias as a result of missing or partially reported harm related outcomes from at least one eligible study is high. The declaration of important harms and the quality of the reporting of harm outcomes must be improved in both primary studies and systematic reviews.


Trials | 2013

Can a core outcome set improve the quality of systematic reviews? – a survey of the Co-ordinating Editors of Cochrane review groups

Jamie Kirkham; Elizabeth Gargon; Mike Clarke; Paula Williamson

Background Missing outcome data or the inconsistent reporting of outcome data in clinical research can affect the quality of evidence within a systematic review. A potential solution is an agreed standardised set of outcomes, known as a core outcome set (COS), to be measured in all studies for a specific condition. We investigated the amount of missing patient data for primary outcomes in Cochrane systematic reviews, and surveyed the Co-ordinating Editors of Cochrane Review Groups (CRGs) on issues related to the standardisation of outcomes in their CRG’s reviews. These groups are responsible for the more than 7,000 protocols and full versions of Cochrane Reviews that are currently available, and the several hundred new reviews published each year, representing the world’s largest collection of standardised systematic reviews in health care.

Methods Using an unselected cohort of Cochrane Reviews, we calculated and presented the percentage of missing patient data for the primary outcome measure chosen for each review published by each CRG. We also surveyed the CRG Co-ordinating Editors to see what their policies are with regard to outcome selection and the outcomes to include in the Summary of Findings (SoF) tables in their Cochrane Reviews. They were also asked to list the main advantages and challenges of standardising outcomes across all reviews within their CRG.

Results In one fifth of the 283 reviews in the sample, more than 50% of the patient data for the primary outcome was missing. Responses to the survey were received from 90% of Co-ordinating Editors. Thirty-six percent of CRGs have a centralised policy regarding which outcomes to include in the SoF table, and 73% of Co-ordinating Editors thought that a COS for effectiveness trials should be used routinely for a SoF table.

Conclusions The reliability of systematic reviews, in particular the meta-analyses they contain, can be improved if more attention is paid to missing outcome data. The availability of COSs for specific health conditions might help with this, and the concept has support from the majority of Co-ordinating Editors in CRGs.


Trials | 2017

The COMET Handbook: Version 1.0

Paula Williamson; Douglas G. Altman; Heather Bagley; Karen L. Barnes; Jane M Blazeby; Sara Brookes; Mike Clarke; Elizabeth Gargon; Sarah Gorst; Nicola Harman; Jamie Kirkham; Angus McNair; Cecilia A.C. Prinsen; Jochen Schmitt; Caroline B. Terwee; Bridget Young

The selection of appropriate outcomes is crucial when designing clinical trials in order to compare the effects of different interventions directly. For the findings to influence policy and practice, the outcomes need to be relevant and important to key stakeholders, including patients and the public, health care professionals, and others making decisions about health care. It is now widely acknowledged that insufficient attention has been paid to the choice of outcomes measured in clinical trials. Researchers are increasingly addressing this issue through the development and use of a core outcome set, an agreed standardised collection of outcomes which should be measured and reported, as a minimum, in all trials for a specific clinical area.

Accumulating work in this area has identified the need for guidance on the development, implementation, evaluation and updating of core outcome sets. This Handbook, developed by the COMET Initiative, brings together current thinking and methodological research regarding those issues. We recommend a four-step process to develop a core outcome set. The aim is to update the contents of the Handbook as further research is identified.


Trials | 2013

Outcome measures in rheumatoid arthritis randomised trials over the last 50 years

Jamie Kirkham; Maarten Boers; Peter Tugwell; Mike Clarke; Paula Williamson

Background The development and application of standardised sets of outcomes to be measured and reported in clinical trials have the potential to increase the efficiency and value of research. One of the most notable of the current outcome sets began nearly 20 years ago: the World Health Organization and International League of Associations for Rheumatology core set of outcomes for rheumatoid arthritis clinical trials, originating from the OMERACT (Outcome Measures in Rheumatology) Initiative. This study assesses the use of this core outcome set by randomised trials in rheumatology.

Methods An observational review was carried out of 350 randomised trials for the treatment of rheumatoid arthritis identified through The Cochrane Library (up to and including the September 2012 issue). Reports of these trials were evaluated to determine whether or not there were trends in the proportion of trials reporting on the full set of core outcomes over time. Researchers who conducted trials after the publication of the core set were contacted to assess their awareness of it and to collect reasons for non-inclusion of the full core set of outcomes in the study.

Results Since the introduction of the core set of outcomes for rheumatoid arthritis, the consistency of measurement of the core set of outcomes has improved, although variation in the choice of measurement instrument remains. The majority of trialists who responded said that they would consider using the core outcome set in the design of a new trial.

Conclusions This observational review suggests that a higher percentage of trialists conducting trials in rheumatoid arthritis are now measuring the rheumatoid arthritis core outcome set. Core outcome sets have the potential to improve the evidence base for health care, but consideration must be given to the methods for disseminating their availability amongst the relevant communities.


PLOS ONE | 2010

Bias Due to Changes in Specified Outcomes during the Systematic Review Process

Jamie Kirkham; Douglas G. Altman; Paula Williamson

Background Adding, omitting or changing outcomes after a systematic review protocol is published can result in bias because it increases the potential for unacknowledged or post hoc revisions of the planned analyses. The main objective of this study was to look for discrepancies between primary outcomes listed in protocols and in the subsequent completed reviews published in the Cochrane Library. A secondary objective was to quantify the risk of bias in a set of meta-analyses where discrepancies between outcome specifications in protocols and reviews were found.

Methods and Findings New reviews from three consecutive issues of the Cochrane Library were assessed. For each review, the primary outcome(s) listed in the review protocol and the review itself were identified, and review authors were contacted to provide reasons for any discrepancies. Over a fifth (64/288, 22%) of protocol/review pairings were found to contain a discrepancy in at least one outcome measure, of which 48 (75%) were attributable to changes in the primary outcome measure. Where lead authors could recall a reason for the discrepancy in the primary outcome, there was found to be potential bias in nearly a third (8/28, 29%) of these reviews, with changes being made after knowledge of the results from individual trials. Only four (6%) of the 64 reviews with an outcome discrepancy described the reason for the change in the review, with no acknowledgment of the change in any of the eight reviews containing potentially biased discrepancies. Outcomes that were promoted in the review were more likely to be significant than if there was no discrepancy (relative risk 1.66, 95% CI 1.10 to 2.49, p = 0.02).

Conclusion In a review, making changes after seeing the results for included studies can lead to biased and misleading interpretation if the importance of the outcome (primary or secondary) is changed on the basis of those results. Our assessment showed that reasons for discrepancies with the protocol are not reported in the review, demonstrating an under-recognition of the problem. Complete transparency in the reporting of changes in outcome specification is vital; systematic reviewers should ensure that any legitimate changes to outcome specification are reported with a reason in the review.
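A relative risk and its 95% confidence interval, such as the 1.66 (1.10 to 2.49) reported above, are conventionally computed on the log scale. A minimal sketch with made-up 2x2 counts (the review's underlying counts are not given here, so the numbers below are purely illustrative):

```python
import math

def relative_risk_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk with a normal-approximation CI on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) from the 2x2 cell counts.
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Hypothetical counts: 30/60 significant among promoted outcomes
# versus 90/300 significant where there was no discrepancy.
rr, lo, hi = relative_risk_ci(30, 60, 90, 300)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # 1.67 1.23 2.26
```

Exponentiating the symmetric interval around log(RR) is why such confidence intervals are asymmetric around the point estimate.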


PLOS ONE | 2011

Development and inter-rater reliability of the Liverpool adverse drug reaction causality assessment tool.

Ruairi M Gallagher; Jamie Kirkham; Jennifer R. Mason; Kim A Bird; Paula Williamson; Anthony J Nunn; Mark A. Turner; Rosalind L. Smyth; Munir Pirmohamed

Aim To develop and test a new adverse drug reaction (ADR) causality assessment tool (CAT).

Methods A comparison between seven assessors of a new CAT, formulated by an expert focus group, compared with the Naranjo CAT in 80 cases from a prospective observational study and 37 published ADR case reports (819 causality assessments in total).

Main Outcome Measures Utilisation of causality categories, measure of disagreements, and inter-rater reliability (IRR).

Results The Liverpool ADR CAT, using 40 cases from an observational study, showed causality categories of 1 unlikely, 62 possible, 92 probable and 125 definite (1, 62, 92, 125) and ‘moderate’ IRR (kappa 0.48), compared to Naranjo (0, 100, 172, 8) with ‘moderate’ IRR (kappa 0.45). In a further 40 cases, the Liverpool tool (0, 66, 81, 133) showed ‘good’ IRR (kappa 0.6) while Naranjo (1, 90, 185, 4) remained ‘moderate’.

Conclusion The Liverpool tool assigns the full range of causality categories and shows good IRR. Further assessment by different investigators in different settings is needed to fully assess the utility of this tool.
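The ‘moderate’ and ‘good’ kappa values above refer to chance-corrected agreement between raters. A minimal sketch of Cohen's kappa for two raters (the study itself used seven assessors, which calls for a multi-rater generalisation such as Fleiss' kappa; the cases and labels below are illustrative):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    n = len(rater_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters labelled independently.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two raters assigning ADR causality categories to six cases.
a = ["probable", "definite", "possible", "definite", "probable", "possible"]
b = ["probable", "definite", "definite", "definite", "possible", "possible"]
print(round(cohens_kappa(a, b), 2))  # 0.5
```

On the commonly used Landis and Koch scale, values around 0.41 to 0.60 are read as ‘moderate’ agreement and 0.61 to 0.80 as ‘substantial’, which is how kappas of 0.45 to 0.6 are interpreted in context.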

Collaboration


Dive into Jamie Kirkham's collaborations.

Top Co-Authors

Kerry Dwan (University of Liverpool)
Matthew Peak (University of Central Lancashire)
Iain Bruce (University of Manchester)
Peter Callery (University of Manchester)