
Publication


Featured research published by Simon Day.


Biometrical Journal | 2015

Sharing clinical trial data on patient level: Opportunities and challenges

Franz Koenig; Jim Slattery; Trish Groves; Thomas Lang; Yoav Benjamini; Simon Day; Peter Bauer; Martin Posch

In recent months one of the most controversially discussed topics among regulatory agencies, the pharmaceutical industry, journal editors, and academia has been the sharing of patient-level clinical trial data. Several projects have been started such as the European Medicines Agency's (EMA) "proactive publication of clinical trial data", the BMJ open data campaign, or the AllTrials initiative. The executive director of the EMA, Dr. Guido Rasi, has recently announced that clinical trial data on patient level will be published from 2014 onwards (although it has since been delayed). The EMA draft policy on proactive access to clinical trial data was published at the end of June 2013 and open for public consultation until the end of September 2013. These initiatives will change the landscape of drug development and publication of medical research. They provide unprecedented opportunities for research and research synthesis, but pose new challenges for regulatory authorities, sponsors, scientific journals, and the public. Besides these general aspects, data sharing also entails intricate biostatistical questions such as problems of multiplicity. An important issue in this respect is the interpretation of multiple statistical analyses, both prospective and retrospective. Expertise in biostatistics is needed to assess the interpretation of such multiple analyses, for example, in the context of regulatory decision-making by optimizing procedural guidance and sophisticated analysis methods.


Pharmaceutical Statistics | 2011

Proposed best practice for statisticians in the reporting and publication of pharmaceutical industry‐sponsored clinical trials

James Matcham; Steven A. Julious; Stephen Pyke; Michael O'Kelly; Susan Todd; Jorgen Seldrup; Simon Day

In this paper we set out what we consider to be a set of best practices for statisticians in the reporting of pharmaceutical industry-sponsored clinical trials. We make eight recommendations covering: author responsibilities and recognition; publication timing; conflicts of interest; freedom to act; full author access to data; trial registration and independent review. These recommendations are made in the context of the prominent role played by statisticians in the design, conduct, analysis and reporting of pharmaceutical sponsored trials and the perception of the reporting of these trials in the wider community.


Pharmaceutical Statistics | 2011

The potential for bias in reporting of industry‐sponsored clinical trials

Stephen Pyke; Steven A. Julious; Simon Day; Michael O'Kelly; Susan Todd; James Matcham; Jorgen Seldrup

Concerns about potentially misleading reporting of pharmaceutical industry research have surfaced many times. The potential for duality (and thereby conflict) of interest is only too clear when you consider the sums of money required for the discovery, development and commercialization of new medicines. As the ability of major, mid-size and small pharmaceutical companies to innovate has waned, as evidenced by the seemingly relentless decline in the numbers of new medicines approved by Food and Drug Administration and European Medicines Agency year-on-year, not only has the cost per new approved medicine risen: so too has the public and media concern about the extent to which the pharmaceutical industry is open and honest about the efficacy, safety and quality of the drugs we manufacture and sell. In 2005 an Editorial in Journal of the American Medical Association made clear that, so great was their concern about misleading reporting of industry-sponsored studies, henceforth no article would be published that was not also guaranteed by independent statistical analysis. We examine the precursors to this Editorial, as well as its immediate and lasting effects for statisticians, for the manner in which statistical analysis is carried out, and for the industry more generally.


JAMA | 2017

Guidelines for the Content of Statistical Analysis Plans in Clinical Trials

Carrol Gamble; Ashma Krishan; Deborah D. Stocken; Steff Lewis; Edmund Juszczak; Caroline J Doré; Paula Williamson; Douglas G. Altman; Alan A Montgomery; Pilar Lim; Jesse A. Berlin; Stephen Senn; Simon Day; Yolanda Barbachano; Elizabeth Loder

Importance: While guidance on statistical principles for clinical trials exists, there is an absence of guidance covering the required content of statistical analysis plans (SAPs) to support transparency and reproducibility.

Objective: To develop recommendations for a minimum set of items that should be addressed in SAPs for clinical trials, developed with input from statisticians, previous guideline authors, journal editors, regulators, and funders.

Design: Funders and regulators (n = 39) of randomized trials were contacted and the literature was searched to identify existing guidance; a survey of current practice was conducted across the network of UK Clinical Research Collaboration–registered trial units (n = 46; 1 unit had 2 responders) and a Delphi survey (n = 73 invited participants) was conducted to establish consensus on SAPs. The Delphi survey was sent to statisticians in trial units who completed the survey of current practice (n = 46), CONSORT (Consolidated Standards of Reporting Trials) and SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) guideline authors (n = 16), pharmaceutical industry statisticians (n = 3), journal editors (n = 9), and regulators (n = 2) (3 participants were included in 2 groups each), culminating in a consensus meeting attended by experts (N = 12) with representatives from each group. The guidance subsequently underwent critical review by statisticians from the surveyed trial units and members of the expert panel of the consensus meeting (N = 51), followed by piloting of the guidance document in the SAPs of 5 trials.

Findings: No existing guidance was identified. The registered trials unit survey (46 responses) highlighted diversity in current practice and confirmed support for developing guidance. The Delphi survey (54 of 73 [74%] participants completing both rounds) reached consensus on 42% (n = 46) of 110 items. The expert panel (N = 12) agreed that 63 items should be included in the guidance, with an additional 17 items identified as important but that may be referenced elsewhere. Following critical review and piloting, some overlapping items were combined, leaving 55 items.

Conclusions and Relevance: Recommendations are provided for a minimum set of items that should be addressed and included in SAPs for clinical trials. Trial registration, protocols, and statistical analysis plans are critically important in ensuring appropriate reporting of clinical trials.


Biometrical Journal | 2017

Determination of the optimal sample size for a clinical trial accounting for the population size

Nigel Stallard; Frank Miller; Simon Day; Siew Wan Hee; Jason Madan; Sarah Zohar; Martin Posch

The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes.
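
The O(N^(1/2)) scaling can be illustrated with a toy trial-then-treat model (a simplification, not the paper's actual utility function): a Bernoulli trial of size n on a new treatment against a control with known success rate p0, after which the remaining N - n patients all receive whichever arm has the higher posterior mean. The uniform Beta(1, 1) prior and p0 = 0.5 below are illustrative assumptions.

```python
import math

def beta_binom_pmf(x: int, n: int, a: float, b: float) -> float:
    # P(X = x) when p ~ Beta(a, b) and X | p ~ Binomial(n, p)
    return math.exp(
        math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
        + math.lgamma(x + a) + math.lgamma(n - x + b) - math.lgamma(n + a + b)
        + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    )

def expected_successes(n: int, N: int, p0: float, a: float, b: float) -> float:
    """Run a trial of size n on the new arm, then give the remaining
    N - n patients whichever arm has the higher posterior mean."""
    total = 0.0
    for x in range(n + 1):
        post_mean = (a + x) / (a + b + n)      # posterior mean after x successes
        future = (N - n) * max(post_mean, p0)  # expected successes for the rest
        total += beta_binom_pmf(x, n, a, b) * (x + future)
    return total

def optimal_trial_size(N: int, p0: float = 0.5,
                       a: float = 1.0, b: float = 1.0) -> int:
    # Exhaustive search over candidate trial sizes
    return max(range(0, min(N, 400) + 1),
               key=lambda n: expected_successes(n, N, p0, a, b))
```

Searching over n in this way shows the optimal trial size growing roughly like the square root of N as the population grows, mirroring the asymptotic rate quoted in the abstract.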


Statistical Methods in Medical Research | 2016

Decision-theoretic designs for small trials and pilot studies: A review

Siew Wan Hee; Thomas Hamborg; Simon Day; Jason Madan; Frank Miller; Martin Posch; Sarah Zohar; Nigel Stallard

Pilot studies and other small clinical trials are often conducted but serve a variety of purposes and there is little consensus on their design. One paradigm that has been suggested for the design of such studies is Bayesian decision theory. In this article, we review the literature with the aim of summarizing current methodological developments in this area. We find that decision-theoretic methods have been applied to the design of small clinical trials in a number of areas. We divide our discussion of published methods into those for trials conducted in a single stage, those for multi-stage trials in which decisions are made through the course of the trial at a number of interim analyses, and those that attempt to design a series of clinical trials or a drug development programme. In all three cases, a number of methods have been proposed, depending on the decision maker’s perspective being considered and the details of utility functions that are used to construct the optimal design.


Pharmaceutical Statistics | 2011

Making available information from studies sponsored by the pharmaceutical industry: some current practices.

Michael O'Kelly; Steven A. Julious; Stephen Pyke; Simon Day; Susan Todd; Jorgen Seldrup; James Matcham

Since the web-based registry ClinicalTrials.gov was launched on 29 February 2000, the pharmaceutical industry has made available an increasing amount of information about the clinical trials that it sponsors. The process has been spurred on by a number of factors, including a wish by the industry to provide greater transparency regarding clinical trial data, and has been both aided and complicated by the number of institutions that have a legitimate interest in guiding and defining what should be made available. This article reviews the history of this process of making information about clinical trials publicly available. It provides a reader's guide to the study registries and the databases of results, and looks at some indicators of consistency in the posting of study information.


Orphanet Journal of Rare Diseases | 2017

Does the low prevalence affect the sample size of interventional clinical trials of rare diseases? An analysis of data from the aggregate analysis of clinicaltrials.gov

Siew Wan Hee; Adrian Willis; Catrin Tudur Smith; Simon Day; Frank Miller; Jason Madan; Martin Posch; Sarah Zohar; Nigel Stallard

Background: Clinical trials are typically designed using the classical frequentist framework to constrain type I and II error rates. Sample sizes required in such designs typically range from hundreds to thousands of patients, which can be challenging for rare diseases. It has been shown that rare disease trials have smaller sample sizes than non-rare disease trials. Indeed, some orphan drugs were approved by the European Medicines Agency based on studies with as few as 12 patients. However, some studies supporting marketing authorisation included several hundred patients. In this work, we explore the relationship between disease prevalence and other factors and the size of interventional phase 2 and 3 rare disease trials conducted in the US and/or EU. We downloaded all clinical trials from the Aggregate Analysis of ClinicalTrials.gov (AACT) and identified rare disease trials by cross-referencing MeSH terms in AACT with the list from Orphadata. We examined the effects of prevalence and phase of study in a multiple linear regression model adjusting for other statistically significant trial characteristics.

Results: Of 186,941 ClinicalTrials.gov trials, only 1567 (0.8%) studied a single rare condition with prevalence information from Orphadata. There were 19 (1.2%) trials studying diseases with prevalence <1/1,000,000, 126 (8.0%) trials with 1–9/1,000,000, 791 (50.5%) trials with 1–9/100,000 and 631 (40.3%) trials with 1–5/10,000. Of the 1567 trials, 1160 (74%) were phase 2 trials. The fitted mean sample size for the rarest diseases (prevalence <1/1,000,000) in phase 2 trials was the lowest (mean, 15.7; 95% CI, 8.7–28.1); means were similar across all the other prevalence classes: 26.2 (16.1–42.6), 33.8 (22.1–51.7) and 35.6 (23.3–54.3) for prevalence 1–9/1,000,000, 1–9/100,000 and 1–5/10,000, respectively. Fitted mean sizes of phase 3 trials of rarer diseases, <1/1,000,000 (19.2, 6.9–53.2) and 1–9/1,000,000 (33.1, 18.6–58.9), were similar to those in phase 2 but were statistically significantly lower than those for the slightly less rare diseases, 1–9/100,000 (75.3, 48.2–117.6) and 1–5/10,000 (77.7, 49.6–121.8).

Conclusions: We found that prevalence was associated with the size of phase 3 trials: trials of rarer diseases were noticeably smaller than trials of less rare diseases. Phase 3 trials of rarer diseases (prevalence <1/100,000) were similar in size to phase 2 trials, whereas for less rare diseases (prevalence ≥1/100,000) phase 3 trials were larger than those in phase 2.


British Journal of Clinical Pharmacology | 2012

Symmetrical analysis of risk–benefit

John B. Warren; Simon Day; Peter Feldschreiber

To quantify the value of a medical therapy, the benefits are weighed against the risks. Effectiveness is defined by objective evidence from predefined endpoints. This benefit is offset against the disadvantage of adverse events. The safety assessment is usually a subjective summary of concerns that can often be neither confirmed nor dismissed. But sometimes a clinical database is so large that a parameter common to both efficacy and safety can be quantified with reasonable certainty: myocardial infarction (MI) is used here as an example. Recently the Food and Drug Administration (FDA) proposed setting limits for the incidence of MI as a safety threshold for diabetes treatment. Setting a threshold before something is considered as a safety concern opens the possibility of setting a threshold for clinically important efficacy. When a parameter is common to both safety and efficacy, then logically a unit change in either direction should be of equal weight in the risk and benefit analysis. For example, a doubling in the incidence of myocardial infarction as a safety signal should be given equal weight to the halving of the incidence of myocardial infarction as an efficacy signal. Similarly, if FDA guidance suggests that less than a 30% increase in the incidence of MI is considered acceptable as a safety parameter, for example for diabetes treatment, when there is no other major toxicity, this opens a debate about a possible inverse threshold for clinical benefit for drugs that reduce a risk factor, such as antihypertensives.
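
The symmetry argument (a doubling of MI incidence as harm weighted equally with a halving as benefit) amounts to symmetry on the log relative-risk scale. A minimal sketch of that mapping; the function name is ours, and the 1.3 margin is the FDA-style figure cited in the abstract:

```python
def symmetric_benefit_threshold(harm_rr: float) -> float:
    """Map a harm threshold on the relative-risk scale to the equally
    weighted benefit threshold: symmetric on the log scale, so
    log(harm_rr) and log(benefit_rr) have equal magnitude, i.e. the
    benefit threshold is simply the reciprocal."""
    return 1.0 / harm_rr

# A doubling of MI risk (RR = 2.0) pairs with a halving (RR = 0.5)
assert symmetric_benefit_threshold(2.0) == 0.5
# An acceptable 30% increase (RR = 1.3) pairs with RR of about 0.77
print(round(symmetric_benefit_threshold(1.3), 2))  # prints 0.77
```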


BMC Medical Research Methodology | 2018

Value of information methods to design a clinical trial in a small population to optimise a health economic utility function

Michael Pearce; Siew Wan Hee; Jason Madan; Martin Posch; Simon Day; Frank Miller; Sarah Zohar; Nigel Stallard

Background: Most confirmatory randomised controlled clinical trials (RCTs) are designed with specified power, usually 80% or 90%, for a hypothesis test conducted at a given significance level, usually 2.5% for a one-sided test. Approval of the experimental treatment by regulatory agencies is then based on the result of such a significance test with other information to balance the risk of adverse events against the benefit of the treatment to future patients. In the setting of a rare disease, recruiting sufficient patients to achieve conventional error rates for clinically reasonable effect sizes may be infeasible, suggesting that the decision-making process should reflect the size of the target population.

Methods: We considered the use of a decision-theoretic value of information (VOI) method to obtain the optimal sample size and significance level for confirmatory RCTs in a range of settings. We assume the decision maker represents society. For simplicity we assume the primary endpoint to be normally distributed with unknown mean following some normal prior distribution representing information on the anticipated effectiveness of the therapy available before the trial. The method is illustrated by an application in an RCT in haemophilia A. We explicitly specify the utility in terms of improvement in primary outcome and compare this with the costs of treating patients, both financial and in terms of potential harm, during the trial and in the future.

Results: The optimal sample size for the clinical trial decreases as the size of the population decreases. For non-zero cost of treating future patients, either monetary or in terms of potential harmful effects, stronger evidence is required for approval as the population size increases, though this is not the case if the costs of treating future patients are ignored.

Conclusions: Decision-theoretic VOI methods offer a flexible approach with both type I error rate and power (or equivalently trial sample size) depending on the size of the future population for whom the treatment under investigation is intended. This might be particularly suitable for small populations when there is considerable information about the patient population.
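
By contrast with the VOI approach, the conventional design described in the background fixes the sample size from the power and significance level alone, independent of population size. For a normally distributed endpoint the standard per-arm formula is n = 2*sigma^2*(z_{1-alpha} + z_{1-beta})^2 / delta^2; a sketch, where the effect size delta = 0.5 and sigma = 1 are illustrative choices:

```python
import math
from statistics import NormalDist

def per_arm_sample_size(delta: float, sigma: float,
                        alpha: float = 0.025, power: float = 0.9) -> int:
    """Per-arm sample size for a two-arm RCT with a normally
    distributed endpoint and a one-sided test at level alpha."""
    z = NormalDist().inv_cdf
    n = 2 * (sigma / delta) ** 2 * (z(1 - alpha) + z(power)) ** 2
    return math.ceil(n)

# Standardised effect of 0.5, one-sided alpha = 2.5%, 90% power
print(per_arm_sample_size(delta=0.5, sigma=1.0))  # prints 85
```

Note that nothing in this calculation depends on the size of the target population, which is precisely the limitation the VOI method addresses.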

Collaboration


Dive into Simon Day's collaborations.

Top Co-Authors

Martin Posch

Medical University of Vienna
