Dave Worthington
Lancaster University
Publications
Featured research published by Dave Worthington.
Interfaces | 2010
Stephan Onggo; Michael Pidd; Didier Soopramanien; Dave Worthington
The European Commission (the Commission) employs more than 22,000 officials who provide administrative services to the European Union. In 2003, the Commission introduced a performance appraisal and promotion system based on points that the officials earn each year. In 2006, the Commission realized that the system needed to be revised. To support the review process, the Commission invited tenders for a project to develop simulation models that it could use to project the future performance of the existing system. A team from Lancaster University won the bid and subsequently worked closely with Commission officials to develop a new system. In 2009, the stakeholders in the Commission’s performance appraisal and promotion system agreed to implement the improved system. The simulation model is unusual in the field of manpower planning because it models the consequences of appraisal-system rules. It uses novel, accurate, and efficient sampling techniques that are based on regression models of the underlying relationships in the data. The model was a crucial part of renegotiating the appraisal and promotion system and implementing a new system.
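The abstract gives only a high-level view of what it means to model the consequences of appraisal-system rules. A toy sketch in Python, where every rule, threshold, and number is invented for illustration (this is not the Commission’s actual model): officials accumulate appraisal points each year and are promoted once their total crosses a threshold.

```python
# Toy sketch of a points-based promotion simulation. All rules and numbers
# here are hypothetical illustrations, not the Commission's actual system.
import random

random.seed(1)

N_OFFICIALS = 1000
YEARS = 10
PROMOTION_THRESHOLD = 40  # hypothetical points needed for promotion

points = [0] * N_OFFICIALS
grade = [1] * N_OFFICIALS

for year in range(YEARS):
    for i in range(N_OFFICIALS):
        # Hypothetical appraisal: each official earns 1-10 points per year.
        points[i] += random.randint(1, 10)
        if points[i] >= PROMOTION_THRESHOLD:
            grade[i] += 1
            points[i] = 0  # assumed rule: points reset on promotion

promoted = sum(1 for g in grade if g > 1)
print(f"{promoted} of {N_OFFICIALS} officials promoted within {YEARS} years")
```

Projecting a system’s future performance then amounts to running such a model forward under candidate rule changes and comparing the resulting promotion rates.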
Journal of the Operational Research Society | 2004
Dave Worthington
In Koh’s recent paper, MRP planning and batch-manufacturing system control architectures were modelled using simulation. An experimental design was set up in which there were eight main effects, 28 two-way interactions and 56 three-way interactions. Results were analysed using SPSS, and effects that were significant at the 5% level were reported and elaborated upon. Four of the main effects, two of the two-way interactions and four of the three-way interactions were significant at the 5% level.

One important aspect of this methodology not commented upon in the paper is that results significant at the 5% level are expected to occur purely due to chance on one in 20 occasions. Hence, of the 92 significance tests carried out in the paper, four or five should be expected to be significant at the 5% level for no other reason than chance. This has some implications for the conclusions drawn in the paper. Inspecting the results more carefully, the four significant main effects are all significant at the 0.5% significance level, and hence provide strong evidence of real effects. Of the 28 two-way effects, one is significant at the 0.1% level, but the second is only significant at the 5% level. Hence, while the evidence for the former is again strong, evidence for the latter is rather weak. Finally, all four of the significant three-way effects are only significant at the 5% level. Given that 56 × 5% = 2.8 ‘significant’ results ‘should’ occur simply due to chance, this is very weak evidence.

In general, researchers need to be aware of this potential weakness in situations where many significance tests are performed, and should interpret their results accordingly. However, in cases such as this particular paper, where results have been produced using simulated experiments, there is another option: repetition of the simulated experiments using another set of random numbers will either support the tentative initial findings or demonstrate that they were just a chance occurrence.
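The arithmetic behind this viewpoint is easy to check by simulation. Below is a minimal sketch in Python; the experimental setup (two-sample t-tests on identically distributed data) is assumed purely for illustration and is not Koh’s experiment.

```python
# Minimal sketch of the multiple-testing point above: when every null
# hypothesis is true, roughly 92 * 5% = 4.6 of the 92 tests still come
# out "significant" at the 5% level purely by chance (type I errors).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2004)
n_tests, n_reps, alpha = 92, 200, 0.05

false_positive_counts = []
for _ in range(n_reps):
    count = 0
    for _ in range(n_tests):
        # Two samples from the SAME distribution, so any rejection
        # is a false positive.
        a = rng.normal(size=30)
        b = rng.normal(size=30)
        if stats.ttest_ind(a, b).pvalue < alpha:
            count += 1
    false_positive_counts.append(count)

print(np.mean(false_positive_counts))  # close to 92 * 0.05 = 4.6
```

On average the count comes out near 4.6, i.e. four or five ‘significant’ results per batch of 92 tests even though no real effects exist.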
Journal of the Operational Research Society | 1996
Dave Worthington
Contents: generating functions; simple queues; birth-death models; the M/G/1 system; the embedded Markov process; other models for single-server queues; random arrivals, block service, and constant intervals between times at which service is available; transient solutions; networks of queues; simulation models; appendix.
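For one of the topics listed, a minimal illustration (not taken from the book under review) of the Pollaczek–Khinchine mean-value formula for the M/G/1 queue:

```python
# Pollaczek-Khinchine mean waiting time for the M/G/1 queue:
# Wq = lambda * E[S^2] / (2 * (1 - rho)), where rho = lambda * E[S].
def mg1_mean_wait(lam: float, mean_service: float, service_variance: float) -> float:
    """Mean time spent waiting in queue for an M/G/1 system.

    lam: Poisson arrival rate; mean_service and service_variance describe
    the general service-time distribution. Requires utilisation rho < 1.
    """
    rho = lam * mean_service
    if rho >= 1:
        raise ValueError("queue is unstable: utilisation rho must be < 1")
    second_moment = service_variance + mean_service ** 2  # E[S^2]
    return lam * second_moment / (2 * (1 - rho))

# Example: arrivals at rate 0.8/min, exponential service with mean 1 min
# (the M/M/1 special case, where variance = mean^2).
print(mg1_mean_wait(0.8, 1.0, 1.0))  # -> 4.0
```

The M/M/1 case is a convenient sanity check, since exponential service makes the formula reduce to the familiar Wq = ρ/(μ − λ) = 0.8/0.2 = 4.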
Journal of the Operational Research Society | 2006
Dave Worthington
I have been prompted to mount my OR educator’s soapbox, if only briefly, by Dr Koh’s reply (Koh, 2004a) to my note (Worthington, 2004) on her paper (Koh, 2004b). First of all, I would like to reassure readers that, contrary to the implication of Dr Koh’s reply (Koh, 2004a), the laws of statistics (in particular, the law implied by the title of this viewpoint) do not cease to hold because the SIMAN random number generator (good as it may be) has been used to generate a set of experimental results. I know that I am not the only OR educator who warns their students of the ease and dangers of misusing advanced software packages, be they for simulation, statistical analysis, linear programming, etc. Koh’s original paper (2004b), its progress into JORS, and her response to my previous letter are all evidence that even academics, with their reflective inclination and access to expert advice, can be lulled into a false sense of security when using user-friendly packages. As my co-workers have been quick to point out to me, the statistical issue identified in my previous note was ‘not rocket science’. It goes under the very unexciting name of ‘type I errors’ and will be taught to my second-year management students with renewed vigour when we look at hypothesis testing later this term. When testing a hypothesis at a (say) 5% significance level, there is, by definition, a one-in-20 chance of declaring a result significant purely by chance.
Journal of the Operational Research Society | 1987
Dave Worthington
Journal of the Operational Research Society | 1991
M. Brahimi; Dave Worthington
Health Care Management Science | 2009
Adrian Fletcher; Dave Worthington
Journal of the Operational Research Society | 1999
Dave Worthington; Alan Wall
Journal of the Operational Research Society | 1994
A. D. Wall; Dave Worthington
Journal of Health Care Finance | 2010
James R. Langabeer; Dave Worthington