
Publication


Featured research published by Daniel R Shanahan.


Trials | 2015

Making randomised trials more efficient: report of the first meeting to discuss the Trial Forge platform

Shaun Treweek; Douglas G. Altman; Peter Bower; Marion K Campbell; Iain Chalmers; Seonaidh Cotton; Peter Craig; David Crosby; Peter Davidson; Declan Devane; Lelia Duley; Janet A. Dunn; Diana Elbourne; Barbara Farrell; Carrol Gamble; Katie Gillies; Kerry Hood; Trudie Lang; Roberta Littleford; Kirsty Loudon; Alison McDonald; Gladys McPherson; Annmarie Nelson; John Norrie; Craig Ramsay; Peter Sandercock; Daniel R Shanahan; William Summerskill; Matthew R. Sydes; Paula Williamson

Randomised trials are at the heart of evidence-based healthcare, but the methods and infrastructure for conducting these sometimes complex studies are largely evidence free. Trial Forge (www.trialforge.org) is an initiative that aims to increase the evidence base for trial decision making and, in doing so, to improve trial efficiency.

This paper summarises a one-day workshop held in Edinburgh on 10 July 2014 to discuss Trial Forge and how to advance the initiative. We first outline the problem of inefficiency in randomised trials and go on to describe Trial Forge. We present participants’ views on the processes in the life of a randomised trial that should be covered by Trial Forge.

There was general support at the workshop for the Trial Forge approach to increasing the evidence base for randomised trial decisions and improving trial efficiency. Key processes agreed upon included choosing the right research question; logistical planning for delivery, training of staff, recruitment, and retention; data management and dissemination; and close down. Linking to existing initiatives where possible was considered crucial. Trial Forge will not be a guideline or a checklist but a ‘go to’ website for research on randomised trial methods, with a linked programme of applied methodology research coupled to an effective evidence-dissemination process. Moreover, it will support an informal network of interested trialists who meet virtually (online) and occasionally in person to build capacity and knowledge in the design and conduct of efficient randomised trials.

Some of the resources invested in randomised trials are wasted because of the limited evidence on which to base many aspects of the design, conduct, analysis, and reporting of clinical trials. Trial Forge will help to address this lack of evidence.


Trials | 2014

Linked publications from a single trial: a thread of evidence

Douglas G. Altman; Curt D. Furberg; Jeremy Grimshaw; Daniel R Shanahan

The medical literature is vast and it is impossible to keep up with the deluge of new research articles. With ongoing concerns regarding manipulation of the outcomes and analyses reported in medical research, it is increasingly important that researchers are able to easily identify and access all publications relating to a specific clinical trial, in order to get the complete picture and to reliably evaluate bias or selective reporting. Recent developments and innovations within the threaded publications initiative and the Linked Reports of Clinical Trials project demonstrate progress towards the ideal of making all trial information readily available.


BMJ Open | 2017

Sharing and reuse of individual participant data from clinical trials: principles and recommendations

Christian Ohmann; Rita Banzi; Steve Canham; Serena Battaglia; Mihaela Matei; Christopher Ariyo; Lauren B. Becnel; Barbara E. Bierer; Sarion Bowers; Luca Clivio; Monica Dias; Christiane Druml; Hélène Faure; Martin Fenner; Jose Galvez; Davina Ghersi; Christian Gluud; Trish Groves; Paul Houston; Ghassan Karam; Dipak Kalra; Rachel L Knowles; Karmela Krleža-Jerić; Christine Kubiak; Wolfgang Kuchinke; Rebecca Kush; Ari Lukkarinen; Pedro Silverio Marques; Andrew Newbigging; Jennifer O’Callaghan

Objectives We examined major issues associated with sharing of individual clinical trial data and developed a consensus document on providing access to individual participant data from clinical trials, using a broad interdisciplinary approach.

Design and methods This was a consensus-building process among the members of a multistakeholder task force, involving a wide range of experts (researchers, patient representatives, methodologists, information technology experts, and representatives from funders, infrastructures and standards development organisations). An independent facilitator supported the process using the nominal group technique. The consensus was reached in a series of three workshops held over 1 year, supported by exchange of documents and teleconferences within focused subgroups when needed. This work was set within the Horizon 2020-funded project CORBEL (Coordinated Research Infrastructures Building Enduring Life-science Services) and coordinated by the European Clinical Research Infrastructure Network. Thus, the focus was on non-commercial trials and the perspective mainly European.

Outcome We developed principles and practical recommendations on how to share data from clinical trials.

Results The task force reached consensus on 10 principles and 50 recommendations, representing the fundamental requirements of any framework used for the sharing of clinical trials data. The document covers the following main areas: making data sharing a reality (eg, cultural change, academic incentives, funding), consent for data sharing, protection of trial participants (eg, de-identification), data standards, rights, types and management of access (eg, data request and access models), data management and repositories, discoverability, and metadata.

Conclusions The adoption of the recommendations in this document would help to promote and support data sharing and reuse among researchers, adequately inform trial participants and protect their rights, and provide effective and efficient systems for preparing, storing and accessing data. The recommendations now need to be implemented and tested in practice. Further work needs to be done to integrate these proposals with those from other geographical areas and other academic domains.


PeerJ | 2016

Auto-correlation of journal impact factor for consensus research reporting statements: a cohort study

Daniel R Shanahan

Background. The Journal Citation Reports journal impact factors (JIFs) are widely used to rank and evaluate journals, standing as a proxy for the relative importance of a journal within its field. However, numerous criticisms have been made of the use of the JIF to evaluate importance, a problem that is exacerbated when JIFs are used to evaluate not only journals but also the papers within them. The purpose of this study was therefore to investigate the relationship between the number of citations and the JIF for identical articles published simultaneously in multiple journals.

Methods. Eligible articles were consensus research reporting statements listed on the EQUATOR Network website that were published simultaneously in three or more journals. For each reporting statement, the correlation between the citation count for each article and the median JIF over the published period, and between the citation count and the number of article accesses, was calculated.

Results. Nine research reporting statements were included in this analysis, representing 85 articles published across 58 journals in biomedicine. The number of citations was strongly correlated with the JIF for six of the nine reporting guidelines, with moderate correlation shown for the remaining three guidelines (median r = 0.66, 95% CI [0.45–0.90]). There was also a strong positive correlation between the number of citations and the number of article accesses (median r = 0.71, 95% CI [0.5–0.8]), although the number of data points for this analysis was limited. When adjusted for the individual reporting guidelines, each logarithm unit of JIF predicted a median increase of 0.8 logarithm units of citation counts (95% CI [−0.4–5.2]), and each logarithm unit of article accesses predicted a median increase of 0.1 logarithm units of citation counts (95% CI [−0.9–1.4]). This model explained 26% of the variance in citations (median adjusted r2 = 0.26, range 0.18–1.0).

Conclusion. The impact factor of the journal in which a reporting statement was published was shown to influence the number of citations that statement gathers over time. Similarly, the number of article accesses also influenced the number of citations, although to a lesser extent than the impact factor. This demonstrates that citation counts are not purely a reflection of scientific merit and that the impact factor is, in fact, auto-correlated.
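The analysis above hinges on correlating citation counts with journal impact factors and fitting a log-log regression. The sketch below illustrates that kind of calculation in Python; the numbers, variable names and use of scipy are illustrative assumptions, not the study's actual code or data.

```python
# Illustrative sketch (hypothetical data, not the study's): correlating citation
# counts with journal impact factor and fitting a log-log regression.
import numpy as np
from scipy import stats

# Hypothetical articles from one reporting statement: median JIF and citation count
jif = np.array([2.1, 3.5, 5.0, 7.2, 14.9, 30.4])
citations = np.array([12, 40, 55, 90, 310, 850])

# Rank correlation between JIF and citations
rho, p = stats.spearmanr(jif, citations)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

# Log-log regression: the slope is the change in log10(citations)
# per logarithm unit of JIF
slope, intercept, r, p_lin, se = stats.linregress(np.log10(jif), np.log10(citations))
print(f"slope = {slope:.2f}, r^2 = {r**2:.2f}")
```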


Trials | 2015

A living document: reincarnating the research article

Daniel R Shanahan

The limitations of the traditional research paper are well known and widely discussed; however, rather than seeking solutions to the problems created by this model of publication, it is time to do away with a print-era anachronism and design a new model of publication with modern technology embedded at its heart. Instead of the current system of multiple publications across multiple journals, publication could move towards a single, evolving document that begins with trial registration and then extends to include the full protocol and results as they become available, underpinned by the raw clinical data and all code used to obtain the results. This model would lead to research being evaluated prospectively, based on its hypothesis and methodology as stated in the study protocol, and would move away from treating serendipitous results as synonymous with quality, while also giving readers the opportunity to reliably evaluate bias or selective reporting in the published literature.


Journal of Negative Results in Biomedicine | 2014

Opening peer-review: the democracy of science

Daniel R Shanahan; Björn Olsen

Scientific journals have been called the ‘minutes of science’ [1]. Born out of the exchange of letters on scientific topics and results, publication is a way of documenting what was done and, particularly in the case of open-access journals, sharing the outcome. Journal publications are considered authoritative and are generally used to inform the work of others, be it further research or, in the case of biomedicine, treatment decisions for patients. This makes some kind of quality control all the more important.

The ‘gold standard’ for this quality control is peer review. Described as a form of self-regulation by qualified members of a profession, it was first introduced by Henry Oldenburg, the founding editor of Philosophical Transactions of the Royal Society, in 1665 as a means of vetting contributions to the Royal Society of London, and it has persisted in various forms ever since [2]. Peer review is the evaluation of a piece of work by two or more people of similar competence to the authors; but, assuming the reviewing process excludes all those involved in the research itself, the people most qualified to judge the validity of a submitted research paper are precisely the scientist’s closest competitors. This means that the review process can become adversarial, with referees seeming to see it as their responsibility to insist on time-consuming additions and revisions [3,4]. Moreover, under traditional, closed peer-review policies, the identity of the reviewer is withheld from the author, giving reviewers greater opportunity to act arbitrarily. It was in an effort to combat this bias that some journals introduced double-blind peer review, whereby the author’s name was also concealed from the referee. However, research is a small world and maintaining that blinding often proved impossible.

So how did peer review, with these intrinsic issues and biases, become the judicial system of the intellectual world? Simply put, peer review is to science what democracy was to Churchill – ‘the worst form of government, except all those other forms that have been tried from time to time.’ It has served science well, with a widely held view that, while it may not be perfect, it is nonetheless far better than anything else we have been able to devise. Indeed, the fundamental idea of peer review seems sound; the issues lie more with the execution.

Under closed systems, such as that currently enforced by the Journal of Negative Results in BioMedicine, there is a lack of transparency in the peer-review process and a lack of availability of evaluative information about published articles to the public. Therefore, as of February 2014, the Journal of Negative Results in BioMedicine will adopt an open peer-review policy. Articles already published, or manuscripts currently submitted, will not be affected by this change. However, for all manuscripts submitted during or after February 2014, authors will see the reviewers’ names and, if the article is published, the reading public will also see who reviewed the article and how the authors responded. This will be available as part of the pre-publication history of the published article. The peer-review process will therefore be completely open and transparent, with the peer reviews forming part of the record.

Research into the effect of open peer review suggests numerous benefits, in particular accountability, fairness and crediting reviewers for their efforts [5-7]. Furthermore, in a recent study, Kowalczuk et al. found that reviewer reports produced under an open peer-review system were of higher overall quality than those produced under a closed system, with higher scores on questions relating to feedback on the methods (11% higher), constructiveness of the comments (5% higher), and the amount of evidence provided to substantiate the comments (9% higher) [8]. Despite this, we recognise that there are also negatives. Some (junior) reviewers may feel uncomfortable signing a critical report, especially when recommending rejection [9]. This reluctance also means that more potential referees may need to be invited to review a manuscript openly than under a closed peer-review system (Parkin EC et al. unpublished observations) [9-11].

Reviewing an article is no easy task, and many of us will have faced the situation where it feels we have put more thought into our review of an article than the authors did into designing the study and writing the manuscript. The move towards an open peer-review policy will give credit where it is due, but, more importantly, it will provide valuable information to those reading the article, sharing the referees’ critique of the manuscript and presenting all the information necessary for readers to make an objective evaluation for themselves.


BMJ Open | 2017

A protocol of a cross-sectional study evaluating an online tool for early career peer reviewers assessing reports of randomised controlled trials

Anthony Chauvin; David Moher; Doug Altman; David L. Schriger; Sabina Alam; Sally Hopewell; Daniel R Shanahan; Alessandro Recchioni; Philippe Ravaud; Isabelle Boutron

Introduction Systematic reviews evaluating the impact of interventions to improve the quality of peer review for biomedical publications have highlighted that such interventions are limited and have little impact. This study aims to compare the accuracy of early career peer reviewers who use an innovative online tool with that of the usual peer review process in evaluating the completeness of reporting and switched primary outcomes in completed reports.

Methods and analysis This is a cross-sectional study of individual two-arm parallel-group randomised controlled trials (RCTs) published in the BioMed Central series medical journals, BMJ, BMJ Open and Annals of Emergency Medicine and indexed with the publication type ‘Randomised Controlled Trial’. First, we will develop an online tool and training module, dedicated to early career peer reviewers, based on (a) the Consolidated Standards of Reporting Trials (CONSORT) 2010 checklist and its Explanation and Elaboration document, for assessing the completeness of reporting of key items, and (b) the Centre for Evidence-Based Medicine Outcome Monitoring Project process, used to identify switched outcomes in completed reports of the primary results of RCTs as initially submitted. Then, we will compare the performance of early career peer reviewers who use the online tool with the usual peer review process in identifying inadequate reporting and switched outcomes in completed reports of RCTs at initial journal submission. The primary outcome will be the mean number of items accurately classified per manuscript. The secondary outcomes will be the mean number of CONSORT items accurately classified per manuscript and the sensitivity, specificity and likelihood ratios for detecting an item as adequately reported and for identifying a switch in outcomes. We aim to include 120 RCTs and 120 early career peer reviewers.

Ethics and dissemination The research protocol was approved by the ethics committee of the INSERM Institutional Review Board (21 January 2016). The study is based on voluntary participation and informed written consent.

Trial registration number NCT03119376.
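For context on the secondary outcomes, sensitivity, specificity and likelihood ratios can be derived from a simple 2x2 classification table. The sketch below is a minimal worked example with hypothetical counts; the function name and numbers are assumptions for illustration, not figures from the protocol.

```python
# Illustrative sketch with hypothetical counts: computing sensitivity, specificity
# and likelihood ratios for classifying an item as adequately reported.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    sensitivity = tp / (tp + fn)               # truly adequate items flagged as adequate
    specificity = tn / (tn + fp)               # truly inadequate items flagged as inadequate
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
    return {"sensitivity": sensitivity, "specificity": specificity,
            "LR+": lr_pos, "LR-": lr_neg}

# Hypothetical example: 80 items correctly judged adequate, 10 false positives,
# 20 adequate items missed, 90 correctly judged inadequate.
print(diagnostic_metrics(tp=80, fp=10, fn=20, tn=90))
```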


BMJ | 2017

Should research ethics committees police reporting bias?

Simon Kolstoe; Daniel R Shanahan; Janet Wisely

Ethics review bodies are well placed to check trial reporting, say Simon E Kolstoe and Daniel R Shanahan, but Janet Wisely worries about resourcing and the lack of sanctions available once approval has been granted.


BMC Neuroscience | 2015

Better reporting for better research: a checklist for reproducibility

Amye Kenall; Scott C Edmunds; Laurie Goodman; Liz Bal; Louisa Flintoft; Daniel R Shanahan; Tim Shipley


Research Integrity and Peer Review | 2017

Simple decision-tree tool to facilitate author identification of reporting guidelines during submission: a before–after study

Daniel R Shanahan; Ines Lopes de Sousa; Diana M Marshall
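No abstract is shown for this article, but the title describes a simple decision tree that points submitting authors to a suitable reporting guideline. The sketch below is only an assumed illustration of that idea, using well-known EQUATOR Network guideline-to-design pairings; it is not the published tool.

```python
# Assumed illustration (not the published tool): map a study design to the
# reporting guideline commonly recommended for it on the EQUATOR Network.
GUIDELINE_BY_DESIGN = {
    "randomised controlled trial": "CONSORT",
    "systematic review or meta-analysis": "PRISMA",
    "observational study": "STROBE",
    "diagnostic accuracy study": "STARD",
    "case report": "CARE",
    "qualitative study": "COREQ",
    "trial protocol": "SPIRIT",
}

def suggest_guideline(study_design: str) -> str:
    """Return the reporting guideline usually suggested for a study design."""
    return GUIDELINE_BY_DESIGN.get(
        study_design.strip().lower(),
        "See the EQUATOR Network library for other designs",
    )

print(suggest_guideline("Randomised controlled trial"))  # -> CONSORT
```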

Collaboration


Dive into Daniel R Shanahan's collaborations.

Top Co-Authors


David Crosby

Medical Research Council
