Michael K. Bane
University of Manchester
Publications
Featured research published by Michael K. Bane.
Parallel Computing | 1991
T. L. Freeman; Michael K. Bane
Frequent synchronisations have a significant effect on the efficiency of parallel numerical algorithms. In this paper we consider simultaneous polynomial zero-finding algorithms and analyse, both theoretically and numerically, the effect of removing the synchronisation restriction from these algorithms.
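The simultaneous zero-finding algorithms the abstract refers to update all root approximations together, with a synchronisation point at the end of each sweep. As a hedged illustration (not the paper's code; all names are mine), here is a minimal synchronous Durand-Kerner iteration, a standard member of that algorithm family:

```python
# Illustrative sketch of a synchronous simultaneous zero-finder
# (Durand-Kerner). Not taken from the paper; the asynchronous variants
# analysed there would let each root update proceed without waiting.

def durand_kerner(coeffs, iters=100, tol=1e-12):
    """Approximate all roots of a monic polynomial.

    `coeffs` lists coefficients from highest degree down,
    with leading coefficient 1.
    """
    n = len(coeffs) - 1
    # Standard starting guesses: powers of a complex point off the axes.
    roots = [(0.4 + 0.9j) ** k for k in range(n)]

    def p(x):  # Horner evaluation of the polynomial
        v = 0
        for c in coeffs:
            v = v * x + c
        return v

    for _ in range(iters):
        new = []
        for i, r in enumerate(roots):
            denom = 1
            for j, s in enumerate(roots):
                if i != j:
                    denom *= (r - s)
            new.append(r - p(r) / denom)
        # Synchronisation point: every update waits for the full sweep.
        converged = max(abs(a - b) for a, b in zip(new, roots)) < tol
        roots = new
        if converged:
            break
    return roots
```

Removing the synchronisation restriction means each root may be updated using whatever values of the other roots are currently available, rather than waiting for the sweep to complete.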
European Conference on Parallel Processing | 2002
Michael K. Bane; Graham D. Riley
In this paper we extend current models of overhead analysis to include complex OpenMP structures, leading to clearer and more appropriate definitions.
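The overhead models the paper extends build on the classical accounting in which the total overhead on p threads is T_o = p·T_p − T_1, where T_1 is the sequential time and T_p the parallel time. A generic sketch of that baseline definition (not the paper's extended OpenMP model) is:

```python
# Classical parallel overhead accounting: any thread time not spent on
# useful (sequential-equivalent) work counts as overhead. This is the
# textbook baseline, not the extended OpenMP-specific model of the paper.

def overhead(t_seq, t_par, p):
    """Total overhead time accumulated across p threads: p*T_p - T_1."""
    return p * t_par - t_seq

def efficiency(t_seq, t_par, p):
    """Parallel efficiency: fraction of total thread time doing useful work."""
    return t_seq / (p * t_par)
```

For example, a 10 s sequential job that runs in 3 s on 4 threads has accumulated 2 s of overhead across the threads, and an efficiency of about 83%.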
Journal of Computational Chemistry | 2016
Nicodemo Di Pasquale; Michael K. Bane; Stuart J. Davie; Paul L. A. Popelier
FFLUX is a novel force field based on quantum topological atoms, combining multipolar electrostatics with IQA intraatomic and interatomic energy terms. The program FEREBUS calculates the hyperparameters of models produced by the machine learning method kriging. Calculation of the kriging hyperparameters (θ and p) requires the optimization of the concentrated log‐likelihood L̂(θ,p). FEREBUS uses Particle Swarm Optimization (PSO) and Differential Evolution (DE) algorithms to find the maximum of L̂(θ,p). PSO and DE are two heuristic algorithms that each use a set of particles or vectors to explore the space in which L̂(θ,p) is defined, searching for the maximum. The log‐likelihood is a computationally expensive function, which needs to be calculated several times during each optimization iteration. The cost scales quickly with the problem dimension, and speed becomes critical in model generation. We present the strategy used to parallelize FEREBUS and the optimization of L̂(θ,p) through PSO and DE. The code is parallelized in two ways. MPI parallelization distributes the particles or vectors among the different processes, whereas the OpenMP implementation takes care of the calculation of L̂(θ,p), which involves the calculation and inversion of a particular matrix, whose size increases quickly with the dimension of the problem. The run time shows a speed‐up of 61 times going from a single core to 90 cores, with a saving, in one case, of ∼98% of the single-core time. In fact, the parallelization scheme presented reduces computational time from 2871 s for a single-core calculation to 41 s for a 90-core calculation.
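The two-level scheme the abstract describes can be sketched with a generic PSO maximiser. This is a hedged illustration, not the FEREBUS code: the objective here is a cheap stand-in for the concentrated log-likelihood L̂(θ,p), and the comments mark where the MPI and OpenMP levels would sit.

```python
# Generic particle swarm maximiser, illustrating the structure the
# abstract describes. All names and parameter values are illustrative.
import random

def pso_maximise(f, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        # In FEREBUS, this loop is the MPI level: particles are
        # distributed among processes. Each evaluation of f (the
        # log-likelihood) is itself OpenMP-parallel, covering the
        # build and inversion of a matrix that grows with dimension.
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Because each particle's objective evaluation is independent within a sweep, the particle loop parallelizes naturally across processes, which is what makes the MPI distribution effective when each evaluation is expensive.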
Environmental Modelling and Software | 2008
Rachel Warren; S. de la Nava Santos; Nigel W. Arnell; Michael K. Bane; Terry Barker; C. Barton; Rupert W. Ford; Hans-Martin Füssel; Robin K. S. Hankin; Jochen Hinkel; Rupert Klein; C. Linstead; Jonathan Köhler; T. D. Mitchell; Timothy J. Osborn; H. Pan; S. C. B. Raper; Graham D. Riley; Hans Joachim Schellnhuber; Sarah Winne; D. Anderson
Concurrency and Computation: Practice and Experience | 2006
Rupert W. Ford; Graham D. Riley; Michael K. Bane; Christopher W. Armstrong; T. L. Freeman
Geoscientific Model Development | 2016
David Topping; Mark H. Barley; Michael K. Bane; Nicholas J. Higham; B. Aumont; Nicholas J. Dingle; Gordon McFiggans
International Workshop on OpenMP | 2000
Michael K. Bane; Graham D. Riley
Archive | 2000
Michael K. Bane; Rainer Keller; Michael Pettipher; Manchester Computing; Ian Smith