Emmanuel O. Ogundimu
University of Oxford
Publication
Featured research published by Emmanuel O. Ogundimu.
Statistics in Medicine | 2016
Gary S. Collins; Emmanuel O. Ogundimu; Douglas G. Altman
After developing a prognostic model, it is essential to evaluate the performance of the model in samples independent from those used to develop the model, which is often referred to as external validation. However, despite its importance, very little is known about the sample size requirements for conducting an external validation. Using a large real data set and resampling methods, we investigate the impact of sample size on the performance of six published prognostic models. Focussing on unbiased and precise estimation of performance measures (e.g. the c‐index, D statistic and calibration), we provide guidance on sample size for investigators designing an external validation study. Our study suggests that externally validating a prognostic model requires a minimum of 100 events and ideally 200 (or more) events.
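As a rough illustration of the main performance measure at stake (illustrative only, not code or data from the study), Harrell's c-index can be computed by hand on a toy validation sample:

```python
def c_index(time, event, risk):
    """Harrell's c-index: fraction of usable pairs in which the
    higher-risk subject experiences the event first (ties in risk
    count as half; ties in time are ignored in this sketch)."""
    concordant, usable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # a pair is usable if subject i has an observed event
            # strictly before subject j's time
            if event[i] == 1 and time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / usable

# toy validation data: survival time, event indicator, model risk score
times  = [2, 4, 5, 7, 9]
events = [1, 1, 0, 1, 0]
risks  = [0.9, 0.4, 0.5, 0.6, 0.2]
print(round(c_index(times, events, risks), 3))  # 0.75
```

The sample-size guidance in the abstract concerns how precisely such measures can be estimated: with too few events, the c-index (and calibration measures) computed on a validation sample are noisy and potentially biased.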
Journal of Clinical Epidemiology | 2016
Emmanuel O. Ogundimu; Douglas G. Altman; Gary S. Collins
Objectives: The choice of an adequate sample size for a Cox regression analysis is generally based on the rule of thumb derived from simulation studies of a minimum of 10 events per variable (EPV). One simulation study suggested scenarios in which the 10 EPV rule can be relaxed. The effect of a range of binary predictors with varying prevalence, reflecting clinical practice, has not yet been fully investigated.
Study Design and Setting: We conducted an extended resampling study using a large general-practice data set, comprising over 2 million anonymized patient records, to examine the EPV requirements for prediction models with low-prevalence binary predictors developed using Cox regression. The performance of the models was then evaluated using an independent external validation data set. We investigated both fully specified models and models derived using variable selection.
Results: Our results indicated that an EPV rule of thumb should be data driven and that EPV ≥ 20 generally eliminates bias in regression coefficients when many low-prevalence predictors are included in a Cox model.
Conclusion: Higher EPV is needed when low-prevalence predictors are present in a model to eliminate bias in regression coefficients and improve predictive accuracy.
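The EPV quantity the abstract discusses is a simple ratio of observed events to candidate model parameters; a minimal sketch with toy numbers (not from the study):

```python
def events_per_variable(n_events, n_parameters):
    """EPV = observed events / candidate predictor parameters in the model."""
    return n_events / n_parameters

# hypothetical example: 240 observed events, 12 candidate Cox parameters
epv = events_per_variable(240, 12)
print(epv)        # 20.0
# the resampling study suggests EPV >= 20 when many low-prevalence
# binary predictors are included, rather than the traditional 10
print(epv >= 20)  # True
```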
Statistics in Medicine | 2016
Gary S. Collins; Emmanuel O. Ogundimu; Jonathan Cook; Yannick Le Manach; Douglas G. Altman
Continuous predictors are routinely encountered when developing a prognostic model. Investigators, who are often non‐statisticians, must decide how to handle continuous predictors in their models. Categorising continuous measurements into two or more categories has been widely discredited, yet is still frequently done because of its simplicity, investigator ignorance of the potential impact and of suitable alternatives, or to facilitate model uptake. We examine the effect of three broad approaches to handling continuous predictors on the performance of a prognostic model: various methods of categorising predictors; modelling a linear relationship between the predictor and outcome; and modelling a nonlinear relationship using fractional polynomials or restricted cubic splines. We compare the performance (measured by the c‐index, calibration and net benefit) of prognostic models built using each approach, evaluating them using separate data from that used to build them. We show that categorising continuous predictors produces models with poor predictive performance and poor clinical usefulness. Categorising continuous predictors is unnecessary, biologically implausible and inefficient and should not be used in prognostic model development.
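A hedged sketch of one alternative the abstract recommends: a restricted cubic spline basis in its truncated-power form (knot placement here is illustrative), which keeps the predictor continuous and is constrained to be linear beyond the outer knots:

```python
def rcs_basis(x, knots):
    """Restricted cubic spline basis at a point x, truncated-power form.
    Returns the linear term plus k-2 nonlinear terms for k knots; the
    construction forces linearity beyond the last knot."""
    k = len(knots)
    tk, tk1 = knots[-1], knots[-2]

    def pos3(u):
        # truncated cube: (u)_+^3
        return max(u, 0.0) ** 3

    cols = []
    for j in range(k - 2):
        tj = knots[j]
        col = (pos3(x - tj)
               - pos3(x - tk1) * (tk - tj) / (tk - tk1)
               + pos3(x - tk) * (tk1 - tj) / (tk - tk1))
        cols.append(col)
    return [x] + cols

# 4 knots give 1 linear + 2 nonlinear terms = 3 model columns
print(len(rcs_basis(2.5, [1, 2, 3, 4])))  # 3
```

Unlike categorisation, this basis uses all the information in the measurement and, fitted by any regression routine, yields a smooth predictor-outcome relationship with no artificial jumps at cut-points.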
Brazilian Journal of Probability and Statistics | 2016
F. J. Rubio; Emmanuel O. Ogundimu; Jane L. Hutton
We introduce the univariate two-piece sinh-arcsinh distribution, which contains two shape parameters that separately control skewness and kurtosis. We show that this new model can capture higher levels of asymmetry than the original sinh-arcsinh distribution (Jones and Pewsey, 2009), in terms of some asymmetry measures, while keeping the flexibility on the tails and tractability. We present an example using real data to illustrate the performance of the proposed model and compare it against appropriate competitors. Although we focus on the study of the univariate versions of the proposed distributions, we point out some multivariate extensions.
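For orientation, a sketch of the original Jones-Pewsey sinh-arcsinh transform applied to standard normal draws (the paper's two-piece extension uses separate shape parameters on each side of the mode; the parameterisation below is one common convention, used here for illustration only):

```python
import math
import random

def sinh_arcsinh(z, epsilon, delta):
    """Jones-Pewsey sinh-arcsinh transform of a standard normal draw:
    epsilon controls skewness (0 = symmetric), delta controls tail
    weight (1 recovers the normal when epsilon = 0)."""
    return math.sinh((math.asinh(z) + epsilon) / delta)

random.seed(1)
sample = [sinh_arcsinh(random.gauss(0, 1), epsilon=0.8, delta=1.0)
          for _ in range(5000)]
# a positive epsilon skews the distribution to the right,
# so the sample mean sits above zero
print(sum(sample) / len(sample) > 0)  # True
```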
Pharmaceutical Statistics | 2016
Mouna Akacha; Emmanuel O. Ogundimu
Recurrent events involve the occurrences of the same type of event repeatedly over time and are commonly encountered in longitudinal studies. Examples include seizures in epileptic studies or occurrence of cancer tumors. In such studies, interest lies in the number of events that occur over a fixed period of time. One considerable challenge in analyzing such data arises when a large proportion of patients discontinue before the end of the study, for example, because of adverse events, leading to partially observed data. In this situation, data are often modeled using a negative binomial distribution with time-in-study as offset. Such an analysis assumes that data are missing at random (MAR). As we cannot test the adequacy of MAR, sensitivity analyses that assess the robustness of conclusions across a range of different assumptions need to be performed. Sophisticated sensitivity analyses for continuous data are frequently performed. However, this is less the case for recurrent event or count data. We present a flexible approach to perform clinically interpretable sensitivity analyses for recurrent event data. Our approach fits into the framework of reference-based imputations, where information from reference arms can be borrowed to impute post-discontinuation data. Different assumptions can be made about the post-discontinuation behavior of dropouts, depending on the reason for dropout and the treatment received. The imputation model allows for time-varying baseline intensities. We assess the performance in a simulation study and provide an illustration with a clinical trial in patients who suffer from bladder cancer.
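A minimal sketch of why time-in-study enters as an offset (illustrative only, not the paper's imputation model): with a log link, adding log(t) as a fixed offset turns a model for counts into a model for event rates, so patients with unequal follow-up are comparable:

```python
import math

def expected_count(beta0, time_in_study):
    """With a log link and offset log(t):
    log E[Y] = beta0 + log(t)  =>  E[Y] = t * exp(beta0),
    i.e. exp(beta0) is the event rate per unit time."""
    return math.exp(beta0 + math.log(time_in_study))

# a patient followed twice as long has twice the expected count
ratio = expected_count(0.5, 2.0) / expected_count(0.5, 1.0)
print(round(ratio, 6))  # 2.0
```

This is exactly the structure that makes the MAR analysis convenient, and also why it is fragile: the offset implicitly assumes the unobserved post-discontinuation period would have behaved like the observed one, which is the assumption the sensitivity analyses probe.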
Communications in Statistics-theory and Methods | 2016
Emmanuel O. Ogundimu; Jane L. Hutton
Abstract We propose a unified approach for multilevel sample selection models using a generalized result on skew distributions arising from selection. If the underlying distributional assumption is normal, then the resulting density for the outcome is the continuous component of the sample selection density and has links with the closed skew-normal distribution (CSN). The CSN distribution provides a framework that simplifies the derivation of the conditional expectation of the observed data. This generalizes Heckman's two-step method to a multilevel sample selection model. Finite-sample performance of the maximum likelihood estimator of this model is studied through a Monte Carlo simulation.
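For context, a sketch of the selection-correction term that underlies the classical Heckman two-step method which the paper generalizes (illustrative only, not the paper's multilevel derivation): the second-step regression adds the inverse Mills ratio, evaluated at the first-step probit index, as an extra regressor:

```python
import math

def inverse_mills(z):
    """phi(z) / Phi(z) for a standard normal: the correction term
    Heckman's second-step regression includes to account for the
    non-random selection of observed outcomes."""
    phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return phi / Phi

# at a probit index of 0 (50% selection probability)
print(round(inverse_mills(0.0), 4))  # 0.7979
```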
The Spine Journal | 2015
Gary S. Collins; Emmanuel O. Ogundimu; Y Le Manach
Evaluating a prediction model using a separate dataset from that on which the model was developed is a crucial step in assessing its predictive performance, often referred to as external validation. The recent study by Tetrault and colleagues modified their previous prediction model by omitting one of the predictors and then re-fitting the model on the original development data from 12 sites in North America. The modified prediction model was subsequently evaluated on a larger international cohort from the AOSpine CSM-I trial. Whilst it is encouraging to see authors carrying out such external validation studies, there are concerns with the analysis that need highlighting.
Scandinavian Journal of Statistics | 2016
Emmanuel O. Ogundimu; Jane L. Hutton
Programme Grants for Applied Research | 2017
N K Arden; Doug Altman; D J Beard; Andrew Carr; Nicholas Clarke; Gary S. Collins; C Cooper; David Culliford; Antonella Delmestri; Stefanie Garden; Tinatin Griffin; Kassim Javaid; Andrew Judge; Jeremy Latham; Mark Mullee; David W. Murray; Emmanuel O. Ogundimu; Rafael Pinedo-Villanueva; A Price; Daniel Prieto-Alhambra; James Raftery
Statistics & Probability Letters | 2015
Emmanuel O. Ogundimu; Jane L. Hutton