Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Andrew G. Sunderland is active.

Publication


Featured research published by Andrew G. Sunderland.


Journal of Materials Chemistry | 2006

Parallel multi-band k·p code for electronic structure of zinc blend semiconductor quantum dots

Stanko Tomić; Andrew G. Sunderland; Ian J. Bush

We present a parallel implementation of the multi-band k·p code for calculation of the electronic structure and optical properties of zinc blende semiconductor quantum dots. The electronic wave-functions are expanded in a plane-wave basis set in a similar way to ab initio calculations. This approach allows one to express the strain tensor components, the piezoelectric field and the arbitrary shape of the embedded quantum dot in the form of Fourier-transform coefficients, significantly simplifying the implementation. Most of the strain elements can be given in analytical form, while very complicated quantum dot shapes can be modelled as a linear combination of the Fourier transforms of several characteristic shapes: box, cylinder, cone, etc. We show that the parallel implementation of the code scales very well up to 512 processors, giving us the memory and processor power either to include more bands, as in the dilute-nitrogen quantum dot structures, or to perform calculations on bigger quantum dot/supercell structures while keeping the same "cut-off" energy. The program performance is demonstrated on pyramidal InAs/GaAs, dilute-nitrogen InGaAsN, and recently emerged volcano-like InAs/GaAs quantum dot systems.
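The Fourier-coefficient representation of a dot shape described above can be sketched for the simplest case, a box, whose characteristic function has a closed-form transform. This is a minimal illustration; the function name and normalization are assumptions, not the paper's code:

```python
import numpy as np

def box_ft(kx, ky, kz, lx, ly, lz):
    """Analytic Fourier transform of the characteristic function of a
    box-shaped dot with sides lx, ly, lz centred at the origin.
    Separable: a product of sinc-like factors, one per axis."""
    def axis(k, l):
        k = np.asarray(k, dtype=float)
        safe = np.where(np.isclose(k, 0.0), 1.0, k)   # avoid 0/0 at k = 0
        return np.where(np.isclose(k, 0.0), l,
                        2.0 * np.sin(safe * l / 2.0) / safe)
    return axis(kx, lx) * axis(ky, ly) * axis(kz, lz)

# At k = 0 the coefficient is just the dot volume
print(box_ft(0.0, 0.0, 0.0, 2.0, 2.0, 2.0))  # 8.0
```

A cylinder or cone would carry its own analytic factor, and composite dot shapes are then linear combinations of such terms, as the abstract notes.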


Concurrency and Computation: Practice and Experience | 2005

HPCx: towards capability computing

Mike Ashworth; Ian J. Bush; Martyn F. Guest; Andrew G. Sunderland; Stephen Booth; Joachim Hein; Lorna Smith; Kevin Stratford; Alessandro Curioni

We introduce HPCx—the U.K.'s new National HPC Service—which aims to deliver a world-class service for capability computing to the U.K. scientific community. HPCx is targeting an environment that will both result in world-leading science and address the challenges involved in scaling existing codes to the capability levels required. Close working relationships with scientific consortia and user groups throughout the research process will be a central feature of the service. A significant number of key user applications have already been ported to the system. We present initial benchmark results from this process and discuss the optimization of the codes and the performance levels achieved on HPCx in comparison with other systems. We find a range of performance with some algorithms scaling far better than others.


Archive | 2010

Towards Petascale Computing with Parallel CFD codes

Andrew G. Sunderland; M. Ashworth; Charles Moulinec; N. Li; Juan Uribe; Yvan Fournier

Many world-leading high-end computing (HEC) facilities are now offering over 100 Teraflop/s of performance and several initiatives have begun to look forward to Petascale computing (10^15 flop/s). Los Alamos National Laboratory and Oak Ridge National Laboratory (ORNL) already have Petascale systems, which lead the current (Nov 2008) TOP500 list [1]. Computing at the Petascale raises a number of significant challenges for parallel computational fluid dynamics codes. Most significantly, further improvements to the performance of individual processors will be limited and therefore Petascale systems are likely to contain 100,000+ processors. Thus a critical aspect of utilising high Terascale and Petascale resources is the scalability of the underlying numerical methods: both the scaling of execution time with the number of processors and the scaling of time with problem size. In this paper we analyse the performance of several CFD codes for a range of datasets on some of the latest high performance computing architectures. This includes Direct Numerical Simulations (DNS) via the SBLI [2] and SENGA2 [3] codes, and Large Eddy Simulations (LES) using both STREAMS LES [4] and the general purpose open source CFD code Code_Saturne [5].
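The scalability concern raised above—that at 100,000+ processors the serial portion of a code dominates—can be quantified with Amdahl's law, a standard first-order model; the serial fractions below are purely illustrative, not measurements from these codes:

```python
def amdahl_speedup(serial_fraction, p):
    """Amdahl's law: ideal speedup on p processors when a fixed
    fraction of the work is inherently serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

# Even 0.1% serial work caps the speedup far below the processor count
for p in (1_000, 100_000):
    s = amdahl_speedup(0.001, p)
    print(f"{p:>7} procs: speedup {s:9.1f}, parallel efficiency {s / p:.3f}")
```

At 100,000 processors a 0.1% serial fraction already limits the speedup to under 1000, which is why both strong scaling (fixed problem, more processors) and weak scaling (problem size growing with the machine) matter at the Petascale.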


Computer Physics Communications | 1998

Parallelization of R-matrix propagation methods on distributed memory computers

Andrew G. Sunderland; J.W. Heggarty; C.J. Noble; N.S. Scott

The R-matrix and Logarithmic Derivative methods are numerically very stable and are therefore ideal for integrating the large sets of coupled second-order linear differential equations which arise in non-exchange scattering problems (e.g., electron scattering by atoms and molecules). These calculations, which typically are repeated at many scattering energies, can become computationally demanding, requiring the use of massively parallel computers. Here the results of a study of various parallel decompositions of typical R-matrix propagator methods are reported. A data-decomposition approach is employed in the solution-following Baluja-Burke-Morgan method, whereas a hybrid approach, involving both control and domain decomposition, is adopted for the potential-following Light-Walker method. Timings of test computations obtained using a Cray T3D computer demonstrate that R-matrix external-region computations involving between 500 and 1500 scattering channels are feasible. The approach is easily extended to much larger calculations and to other computer architectures.
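The sector-by-sector propagation underlying such methods can be sketched schematically: each step maps the channel R-matrix at one sector boundary to the next through per-sector matrices. The form and names below are illustrative assumptions for a generic propagator step, not the Baluja-Burke-Morgan or Light-Walker definitions:

```python
import numpy as np

def propagate_rmatrix(r_in, r11, r12, r21, r22):
    """One schematic sector step of an R-matrix propagation,
        R_out = r22 - r21 (r11 + R_in)^{-1} r12,
    where r11..r22 are per-sector n x n channel matrices (their actual
    definitions in the propagator methods above are not reproduced here)."""
    return r22 - r21 @ np.linalg.solve(r11 + r_in, r12)

# Toy data: with symmetric sector matrices and r21 = r12^T the
# propagated R-matrix stays symmetric, as a physical R-matrix must.
n = 4
rng = np.random.default_rng(1)
a = rng.standard_normal((n, n))
r11 = a @ a.T + n * np.eye(n)          # symmetric positive definite
r12 = rng.standard_normal((n, n))
b = rng.standard_normal((n, n))
r_out = propagate_rmatrix(np.eye(n), r11, r12, r12.T, b @ b.T)
print(np.allclose(r_out, r_out.T))     # True
```

Because each scattering energy requires an independent propagation, a control decomposition over energies combines naturally with a domain decomposition over channel blocks, as the abstract describes.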


Archive | 2009

Parallel Diagonalization Performance on High-Performance Computers

Andrew G. Sunderland

Eigenvalue and eigenvector computations arise in a wide range of scientific and engineering applications. For example, in quantum chemistry and atomic physics, the computation of eigenvalues is often required to obtain electronic energy states. For large-scale complex systems in such areas, the eigensolver calculation usually represents a huge computational challenge. It is therefore imperative that suitable, highly efficient eigensolver methods are used in order to facilitate the solution of the most demanding scientific problems. This presentation will analyze the performance of parallel eigensolvers from numerical libraries such as ScaLAPACK on the latest parallel architectures using data sets derived from large-scale scientific applications.
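As a serial stand-in for the distributed ScaLAPACK solvers discussed above (routines of the PDSYEV family operate on block-cyclically distributed matrices over MPI), the same dense symmetric eigenproblem can be sketched with LAPACK via NumPy; the matrix here is random, not one of the application data sets:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((100, 100))
h = (a + a.T) / 2.0                # symmetrize, as for a Hamiltonian

# eigh calls a LAPACK symmetric eigensolver; eigenvalues come back
# in ascending order, eigenvectors as columns of vecs
vals, vecs = np.linalg.eigh(h)

# Residual check on the lowest eigenpair: H v = lambda v
r = h @ vecs[:, 0] - vals[0] * vecs[:, 0]
print(np.linalg.norm(r) < 1e-10)   # True
```

The parallel versions solve exactly this problem but distribute the O(n^2) storage and O(n^3) work, which is what makes the very large matrices from quantum chemistry and atomic physics tractable.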


Concurrency and Computation: Practice and Experience | 2013

Multiple threads and parallel challenges for large simulations to accelerate a general Navier–Stokes CFD code on massively parallel systems

Yvan Fournier; Jérôme Bonelle; Pascal Vezolle; Jerry Heyman; Bruce D. D'Amora; Karen A. Magerlein; John Harold Magerlein; Gordon W. Braudaway; Charles Moulinec; Andrew G. Sunderland

Computational fluid dynamics is an increasingly important application domain for computational scientists. In this paper, we propose and analyze optimizations necessary to run CFD simulations consisting of multibillion-cell mesh models on large processor systems. Our investigation leverages the general industrial Navier-Stokes CFD application, Code_Saturne, developed by Électricité de France for incompressible and nearly compressible flows. We outline the main bottlenecks and challenges for massively parallel systems and emerging processor features such as many-core, transactional memory, and thread-level speculation. We also present an approach based on an octree search algorithm to facilitate the joining of mesh parts and to build complex larger unstructured meshes of several billion grid cells. We describe two parallel strategies for an algebraic multigrid solver and detail how to introduce new levels of parallelism, based on compiler directives with OpenMP, transactional memory and thread-level speculation, for the finite-volume cell-centered formulation and face-based loops. A renumbering scheme for mesh faces is proposed to enhance thread-level parallelism.
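The octree-based search mentioned above can be sketched in miniature: a point octree whose buckets subdivide past a capacity threshold, the kind of spatial structure used to match nearby vertices when joining mesh parts. This is an illustrative toy, not Code_Saturne's implementation:

```python
class Octree:
    """Minimal point octree: each node covers a cube (center, half-width)
    and holds points until capacity is exceeded, then splits into 8."""
    def __init__(self, center, half, capacity=8):
        self.center, self.half, self.capacity = center, half, capacity
        self.points, self.children = [], None

    def _child_index(self, p):
        cx, cy, cz = self.center
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.capacity:
                self._split()
        else:
            self.children[self._child_index(p)].insert(p)

    def _split(self):
        h = self.half / 2.0
        cx, cy, cz = self.center
        self.children = [
            Octree((cx + (h if o & 1 else -h),
                    cy + (h if o & 2 else -h),
                    cz + (h if o & 4 else -h)), h, self.capacity)
            for o in range(8)
        ]
        pts, self.points = self.points, []
        for q in pts:                      # redistribute into children
            self.children[self._child_index(q)].insert(q)

    def count(self):
        if self.children is None:
            return len(self.points)
        return sum(c.count() for c in self.children)

tree = Octree(center=(0.0, 0.0, 0.0), half=1.0)
pts = [(x / 10.0, y / 10.0, 0.0) for x in range(-9, 10) for y in range(-9, 10)]
for p in pts:
    tree.insert(p)
print(tree.count() == len(pts))  # True: no point lost during splits
```

Because each query descends only the cells containing the search point, locating candidate matching vertices costs O(log n) rather than the O(n) scan that would be prohibitive at billions of cells.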


Computer Physics Communications | 2017

TIMEDELn: A programme for the detection and parametrization of overlapping resonances using the time-delay method

Duncan A Little; Jonathan Tennyson; Martin Plummer; C.J. Noble; Andrew G. Sunderland

TIMEDELn implements the time-delay method of determining resonance parameters from the characteristic Lorentzian form displayed by the largest eigenvalues of the time-delay matrix. TIMEDELn takes K-matrices from a scattering calculation, either read from a file or calculated on a dynamically adjusted grid, constructs the time-delay matrix and analyses its eigenvalues. The matrix is diagonalized, with the largest eigenvalue representing the longest time-delay experienced by the scattering particle. A resonance shows up as a characteristic Lorentzian form in the time-delay: the programme searches the time-delay eigenvalues for maxima and traces resonances as they pass through different eigenvalues, separating overlapping resonances. It also fits the calculated data to the Lorentzian form and outputs resonance positions and widths; any remaining overlapping resonances can be fitted jointly. The branching ratios of decay into the open channels can also be found. This new version implements multi-resonance fitting and may be run serially or as a high-performance parallel code with three levels of parallelism. The parallel code modules are abstracted from the main physics code and can be used independently.

New version programme summary
Programme Title: TIMEDELn
Programme Files doi: http://dx.doi.org/10.17632/wmv4f42xnz.1
Licensing provisions: MIT
Programming language: FORTRAN
Journal reference of previous version: Computer Phys. Comms. 114 (1998) 236-242 [5]
Does the new version supersede the previous version?: Yes
Nature of problem: TIMEDELn detects and parametrizes resonances, including overlapping resonances, when provided with the K-matrix of the scattering problem.
Solution method: Resonances are identified by peaks in the largest few eigenvalues of the time-delay matrix.
Reasons for the new version: TIMEDELn includes a new procedure for fitting multiple overlapping resonances. It has also been parallelized to allow studies of complex systems (atoms and molecules) and generation of bulk data.
Summary of revisions: TIMEDELn analyses the largest eigenvalues of the time-delay matrix and identifies those with resonance features, which are then separated and fitted [6]. It has been modularized, with calls to external libraries and user-supplied routines abstracted for ease of modification. It has been parallelized, with a choice of a specific module allowing multi-level parallel structures, or serial execution if preferred. It can run bulk simulations of 'similar but different' calculations (for example, varying fixed-nuclear geometries).
Restrictions: When 'target' energies are calculated or supplied, the energy of the incident particle (electron) is currently defined with respect to the lowest supplied target energy (the ground state), although an expert user or developer would be able to modify this.
Unusual features: TIMEDELn can be run from a user-supplied file of K-matrices or can be implemented to generate these as required.
External routines/libraries: LAPACK [1], MINPACK [2], options for alternatives (e.g. NAG [3]), option for MPI [4]
Additional comments: TIMEDELn has been implemented as part of the UKRMol suite of codes [7].

[1] E. Anderson et al., LAPACK Users' Guide, third edition (Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1999), http://www.netlib.org/lapack/
[2] LMDIF1 and dependencies, MINPACK Fortran numerical library (University of Chicago, Argonne National Laboratory, USA, 1999), http://www.netlib.org/minpack/
[3] NAG Fortran Library Mark 25 (Numerical Algorithms Group, Oxford, UK, 2015), http://www.nag.co.uk/numeric/fl/FLdescription.asp/
[4] The Message Passing Interface; standards for MPI are available from the MPI Forum, http://www.mpi-forum.org/
[5] D.T. Stibbe and J. Tennyson, Computer Phys. Comms. 114 (1998) 236-242.
[6] D.A. Little and J. Tennyson, J. Phys. B: At. Mol. Opt. Phys. 47 (2014) 105204.
[7] J.M. Carr, P.G. Galiatsatos, J.D. Gorfinkiel, A.G. Harvey, M.A. Lysaght, D. Madden, Z. Masin, M. Plummer, J. Tennyson, H.N. Varambhia, Eur. Phys. J. D 66 (2012) 58.
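The Lorentzian fitting step can be illustrated on synthetic data: a generic Breit-Wigner profile (the normalization and function signature here are assumptions, not TIMEDELn's exact form) fitted by least squares to recover a resonance position and width:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(e, e_r, gamma, background):
    """Generic Lorentzian model of a time-delay eigenvalue near an
    isolated resonance at position e_r with width gamma (arbitrary units)."""
    return gamma / ((e - e_r) ** 2 + (gamma / 2.0) ** 2) + background

# Synthetic "time-delay" data (illustrative, not TIMEDELn output):
# resonance at 1.2, width 0.05, flat background 0.3
e = np.linspace(0.0, 2.0, 400)
q = lorentzian(e, 1.2, 0.05, 0.3)

# Nonlinear least-squares fit from a rough initial guess
popt, _ = curve_fit(lorentzian, e, q, p0=(1.1, 0.1, 0.2))
print(popt)  # recovers position ~1.2 and width ~0.05
```

Overlapping resonances, as handled by the multi-resonance procedure described above, would be fitted jointly with a sum of such terms instead of one.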


Computers & Fluids | 2011

Optimizing Code_Saturne computations on Petascale systems

Yvan Fournier; Jérôme Bonelle; Charles Moulinec; Z. Shang; Andrew G. Sunderland; Juan Uribe


Archive | 2006

An Investigation of Simultaneous Multithreading on HPCx

Alan Gray; Joachim Hein; Martin Plummer; Andrew G. Sunderland; Lorna Smith; A. Simpson; A. Trew


Archive | 2006

A Performance Comparison of HPCx Phase 2a to Phase 2

Alan Gray; Mike Ashworth; Stephen Booth; Ian J. Bush; Martyn F. Guest; Joachim Hein; David Henty; Martin Plummer; Fiona Reid; Andrew G. Sunderland; Arthur Trew

Collaboration


Dive into Andrew G. Sunderland's collaboration.

Top Co-Authors

Joachim Hein

University of Edinburgh


Lorna Smith

University of Edinburgh
