Ian J. Bush
Daresbury Laboratory
Publication
Featured research published by Ian J. Bush.
Journal of Materials Chemistry | 2006
Stanko Tomić; Andrew G. Sunderland; Ian J. Bush
We present a parallel implementation of the multi-band k·p code for calculation of the electronic structure and optical properties of zinc-blende semiconductor quantum dots. The electronic wave functions are expanded in a plane-wave basis set in a similar way to ab initio calculations. This approach allows the strain tensor components, the piezoelectric field and the arbitrary shape of the embedded quantum dot to be expressed as Fourier coefficients, significantly simplifying the implementation. Most of the strain elements can be given in analytical form, while very complicated quantum dot shapes can be modelled as a linear combination of the Fourier transforms of several characteristic shapes: box, cylinder, cone, etc. We show that the parallel implementation of the code scales very well up to 512 processors, giving us the memory and processor power either to include more bands, as in dilute-nitrogen quantum dot structures, or to perform calculations on bigger quantum dot/supercell structures while keeping the same “cut-off” energy. The program performance is demonstrated on pyramid-shaped InAs/GaAs, dilute-nitrogen InGaAsN, and recently emerged volcano-like InAs/GaAs quantum dot systems.
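The shape-function idea described in the abstract maps onto a few lines of code: a box-shaped dot has an analytic plane-wave expansion (a product of sinc functions), and more complicated shapes can be built as linear combinations of such terms. The sketch below is illustrative only and is not taken from the k·p code itself; the supercell size, cut-off and the composite "box minus box" shape are assumptions.

```python
# A minimal sketch (not the authors' code): analytic Fourier coefficients of the
# characteristic function of a box-shaped quantum dot on a plane-wave grid, and a
# composite shape built as a linear combination of such form factors.
import numpy as np

def box_form_factor(kx, ky, kz, Lx, Ly, Lz):
    """Fourier transform of a box of dimensions Lx*Ly*Lz centred at the origin.

    Uses the analytic result  prod_i L_i * sinc(k_i L_i / 2 pi)  (numpy's
    normalised sinc), so no numerical integration is needed.
    """
    return (Lx * np.sinc(kx * Lx / (2 * np.pi))
            * Ly * np.sinc(ky * Ly / (2 * np.pi))
            * Lz * np.sinc(kz * Lz / (2 * np.pi)))

# Plane-wave (reciprocal-lattice) grid for a cubic supercell of side A (nm),
# truncated at |n| <= n_max (a stand-in for the "cut-off" energy).
A, n_max = 30.0, 8
n = np.arange(-n_max, n_max + 1)
kx, ky, kz = np.meshgrid(*(2 * np.pi * n / A,) * 3, indexing="ij")

# Example composite shape: a larger box with a smaller coaxial box removed;
# because the Fourier transform is linear, the form factor is just the difference.
chi_k = (box_form_factor(kx, ky, kz, 16.0, 16.0, 4.0)
         - box_form_factor(kx, ky, kz, 6.0, 6.0, 4.0))
chi_k /= A ** 3  # Fourier-series coefficients of the shape function in the supercell

# The k = 0 coefficient is the volume fraction occupied by the dot.
print(chi_k.shape, chi_k[n_max, n_max, n_max])
```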
Philosophical Transactions of the Royal Society A | 2005
Joachim Hein; Fiona Reid; Lorna Smith; Ian J. Bush; Martyn F. Guest; Paul Sherwood
The effective exploitation of current high-performance computing (HPC) platforms in molecular simulation relies on the ability of the present generation of parallel molecular dynamics codes to make effective use of these platforms and their components, including CPUs and memory. In this paper, we investigate the efficiency and scaling of a series of popular molecular dynamics codes on the UK's national HPC resources, an IBM p690+ cluster and an SGI Altix 3700. Focusing primarily on the Amber, DL_POLY and NAMD simulation codes, we demonstrate the major performance and scalability advantages that arise from a distributed-data, rather than a replicated-data, approach.
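As a rough illustration of why the distributed-data approach scales better, the sketch below (not taken from Amber, DL_POLY or NAMD) compares the per-process memory footprint of a fully replicated coordinate array with that of a simple slab decomposition; the particle count, process count and box size are arbitrary placeholders.

```python
# A minimal sketch contrasting replicated-data and domain-decomposed storage:
# with replication every process holds all N particles, whereas a spatial slab
# decomposition leaves each process with roughly N / P of them.
import numpy as np

rng = np.random.default_rng(0)
N, P, box = 1_000_000, 16, 50.0              # particles, processes, cubic box length
coords = rng.uniform(0.0, box, size=(N, 3))  # double-precision x, y, z

# Replicated data: every one of the P processes stores the full coordinate array.
bytes_per_proc_replicated = coords.nbytes

# Distributed data: slab decomposition along x; each process keeps only its slab.
slab = np.minimum((coords[:, 0] / (box / P)).astype(int), P - 1)
local_counts = np.bincount(slab, minlength=P)
bytes_per_proc_distributed = local_counts.max() * 3 * coords.itemsize

print(f"replicated : {bytes_per_proc_replicated / 1e6:8.1f} MB per process")
print(f"distributed: {bytes_per_proc_distributed / 1e6:8.1f} MB per process "
      f"(at most {local_counts.max()} particles on one slab)")
```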
Journal of Chemical Theory and Computation | 2017
Alessandro Erba; Jacopo Baima; Ian J. Bush; Roberto Orlando; Roberto Dovesi
Nowadays, the efficient exploitation of high-performance computing resources is crucial to extend the applicability of first-principles theoretical methods to the description of large, progressively more realistic molecular and condensed matter systems. This can be achieved only by devising effective parallelization strategies for the most time-consuming steps of a calculation, which requires some effort given the usual complexity of quantum-mechanical algorithms, particularly so if parallelization is to be extended to all properties and not just to the basic functionalities of the code. In this Article, the performance and capabilities of the massively parallel version of the Crystal17 package for first-principles calculations on solids are discussed. In particular, we present: (i) recent developments allowing a further improvement of the code's scalability (up to 32 768 cores); (ii) a quantitative analysis of the scaling and memory requirements of the code when running calculations with several thousand atoms per cell (up to about 14 000); (iii) a documentation of the high numerical size consistency of the code; and (iv) an overview of recent ab initio studies of several physical properties (structural, energetic, electronic, vibrational, spectroscopic, thermodynamic, elastic, piezoelectric, topological) of large systems investigated with the code.
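The scaling analysis mentioned in point (ii) boils down to computing speedup and parallel efficiency from wall-clock times at increasing core counts. The sketch below shows that calculation on hypothetical timings; the numbers are placeholders, not results from the paper.

```python
# A minimal strong-scaling summary: speedup and parallel efficiency relative to
# the smallest run. The timings are illustrative, not data from the article.
cores = [1024, 2048, 4096, 8192, 16384, 32768]
times = [3600.0, 1850.0, 960.0, 520.0, 300.0, 190.0]  # wall-clock seconds (hypothetical)

t_ref, c_ref = times[0], cores[0]
for c, t in zip(cores, times):
    speedup = t_ref / t                 # measured speedup relative to the reference run
    efficiency = speedup * c_ref / c    # ideal speedup would be c / c_ref
    print(f"{c:6d} cores  speedup {speedup:6.2f}x  efficiency {efficiency:5.1%}")
```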
Concurrency and Computation: Practice and Experience | 2005
Mike Ashworth; Ian J. Bush; Martyn F. Guest; Andrew G. Sunderland; Stephen Booth; Joachim Hein; Lorna Smith; Kevin Stratford; Alessandro Curioni
We introduce HPCx—the U.K.'s new National HPC Service—which aims to deliver a world-class service for capability computing to the U.K. scientific community. HPCx is targeting an environment that will both result in world-leading science and address the challenges involved in scaling existing codes to the capability levels required. Close working relationships with scientific consortia and user groups throughout the research process will be a central feature of the service. A significant number of key user applications have already been ported to the system. We present initial benchmark results from this process and discuss the optimization of the codes and the performance levels achieved on HPCx in comparison with other systems. We find a range of performance, with some algorithms scaling far better than others.
Journal of Materials Chemistry | 2006
Martin Plummer; Joachim Hein; Martyn F. Guest; K. J. D'Mellow; Ian J. Bush; Keith Refson; G. J. Pringle; Lorna Smith; Arthur Trew
We describe the HPCx UoE Ltd national computing resource, HPCx Phase 2, as used in 2004 and 2005, and the work of the HPCx ‘terascaling team’, whose collaboration with scientists and code developers allows large-scale computational resources to be exploited efficiently to produce the new science described in the rest of this volume. We emphasize the need for scientists and code developers to understand the peculiarities of the national and international facilities they use to generate their data. We give some examples of successful application code optimization in materials chemistry on HPCx, and introduce HPCx Phase 2A, which entered service in November 2005.
Archive | 2009
Ilian T. Todorov; Ian J. Bush; Andrew Porter
The molecular dynamics (MD) method is the only tool that provides detailed information on the time evolution of a molecular system at an atomistic scale. Although novel numerical algorithms and data reorganization approaches can speed up the numerical calculations, the actual science of a simulation is contained in the frames of the system's state and simulation data captured during the evolution. Therefore, an important bottleneck in the scalability and efficiency of any MD software is I/O speed and reliability, as data has to be dumped and stored for post-mortem analysis. This becomes increasingly important as simulations scale to many thousands of processors and system sizes increase to many millions of particles. This study outlines the problems associated with I/O when performing large classical MD runs and shows that it is necessary to use parallel I/O methods when studying large systems.
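A minimal sketch of the parallel I/O pattern the study argues for is given below, using MPI-IO via mpi4py: each rank writes its own block of a trajectory frame into a single shared file with a collective call, instead of funnelling all data through one writer. The file name, record layout and per-rank particle count are assumptions for illustration, not the actual trajectory format of any MD package.

```python
# A minimal MPI-IO sketch (assumed setup): every rank writes its local particle
# coordinates for one frame into a shared binary file at its own byte offset.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 100_000                            # particles owned by this rank (hypothetical)
rng = np.random.default_rng(rank)
frame = rng.random((n_local, 3))             # x, y, z coordinates for one frame

amode = MPI.MODE_WRONLY | MPI.MODE_CREATE
fh = MPI.File.Open(comm, "frame_000.bin", amode)

# Each rank writes at a byte offset determined by the ranks before it, so the
# collective call produces one contiguous, ordered frame on disk.
offset = rank * frame.nbytes
fh.Write_at_all(offset, frame)
fh.Close()

if rank == 0:
    print(f"wrote {size * frame.nbytes / 1e6:.1f} MB from {size} ranks")
```

Run with, for example, `mpirun -n 16 python write_frame.py`; the same pattern extends to appending successive frames at frame-sized offsets.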
Journal of Materials Chemistry | 2006
Judy To; Paul Sherwood; Alexey A. Sokol; Ian J. Bush; C. Richard A. Catlow; Huub J. J. van Dam; Samuel A. French; Martyn F. Guest
Daresbury Laboratory Technical Reports | 2006
Richard Wain; Ian J. Bush; Martyn F. Guest; Miles Deegan; Igor N. Kozin; Christine Kitchen
Journal of Physics: Conference Series | 2008
Mauro Ferrero; Michel Rérat; Roberto Orlando; Roberto Dovesi; Ian J. Bush
Archive | 2006
Alan Gray; Mike Ashworth; Stephen Booth; Ian J. Bush; Martyn F. Guest; Joachim Hein; David Henty; Martin Plummer; Fiona Reid; Andrew G. Sunderland; Arthur Trew