Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Adrian Jackson is active.

Publication


Featured research published by Adrian Jackson.


Symbolic and Numeric Algorithms for Scientific Computing | 2013

Stepping Up

Adrian Jackson

Computational simulation is an important research tool for modern scientists. There is a range of different scales of high performance computing (HPC) resources available to scientists, from laptop and desktop machines, through small institutional clusters and national HPC resources, to the largest parallel computers in the world. This paper outlines the challenges that developers and users face when moving from small-scale computational resources to larger-scale parallel machines. We present an overview of various research efforts to improve performance on large-scale systems, to enable users and developers to gain an understanding of the performance issues often encountered by simulation codes.


Volume 1: Aircraft Engine; Ceramics; Coal, Biomass and Alternative Fuels; Wind Turbine Technology | 2011

On the Parallelization of a Harmonic Balance Compressible Navier-Stokes Solver for Wind Turbine Aerodynamics

Adrian Jackson; M. Sergio Campobasso; Mohammad H. Baba-Ahmadi

The paper discusses the parallelization of a novel explicit harmonic balance Navier-Stokes solver for wind turbine unsteady aerodynamics. For large three-dimensional problems, the use of a standard MPI parallelization based on the geometric domain decomposition of the physical domain may require an excessive degree of partitioning with respect to that needed when the same aerodynamic analysis is performed with the time-domain solver. This occurrence may penalize the parallel efficiency of the harmonic balance solver due to excessive communication among MPI processes to transfer halo data. In the case of the harmonic balance analysis, the necessity of further grid partitioning may arise because the memory requirement of each block is higher than for the time-domain analysis: it is that of the time-domain analysis multiplied by a variable proportional to the number of complex harmonics used to represent the sought periodic flow field. A hybrid multi-level parallelization paradigm for explicit harmonic balance Navier-Stokes solvers is presented, which makes use of both distributed and shared memory parallelization technologies, and removes the need for further domain decomposition with respect to the case of the time-domain analysis. The discussed parallelization approaches are tested on the multigrid harmonic balance solver being developed by the authors, considering various computational configurations for the CFD analysis of the unsteady flow field past the airfoil of a wind turbine blade in yawed wind.
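
The following is a minimal, illustrative C sketch (not the authors' solver code) of the kind of two-level parallelisation the abstract describes: MPI ranks keep the same geometric block decomposition as the time-domain solver, while OpenMP threads inside each rank spread the work over the harmonic (snapshot) dimension. The names, sizes, and placeholder residual routine are assumptions made for the example.

```c
/* Minimal sketch of a two-level (MPI + OpenMP) harmonic balance update.
 * Illustrative only: array layouts and names (n_harmonics, n_cells_local,
 * update_residual) are assumptions, not the solver's actual interfaces.
 * Compile with e.g.: mpicc -fopenmp hb_sketch.c -o hb_sketch */
#include <mpi.h>
#include <stdlib.h>

/* Hypothetical per-harmonic residual update on this rank's grid block. */
static void update_residual(const double *q, double *res, int n_cells_local) {
    for (int i = 0; i < n_cells_local; ++i)
        res[i] = -q[i];              /* placeholder for the real flux balance */
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* MPI level: each rank owns one geometric block of the mesh,
     * exactly as in the time-domain solver (no extra partitioning). */
    const int n_cells_local = 10000;
    const int n_harmonics   = 8;       /* number of retained flow snapshots */

    double *q   = calloc((size_t)n_harmonics * n_cells_local, sizeof(double));
    double *res = calloc((size_t)n_harmonics * n_cells_local, sizeof(double));

    /* OpenMP level: the harmonic (snapshot) dimension is spread across
     * threads inside the node, absorbing the extra memory and work of the
     * harmonic balance formulation without further domain decomposition. */
    #pragma omp parallel for schedule(static)
    for (int h = 0; h < n_harmonics; ++h)
        update_residual(q + (size_t)h * n_cells_local,
                        res + (size_t)h * n_cells_local, n_cells_local);

    /* Halo exchange between neighbouring blocks would go here
     * (one message per neighbour, independent of the harmonic count). */

    free(q); free(res);
    MPI_Finalize();
    return 0;
}
```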


Parallel, Distributed and Network-Based Processing | 2010

A European Infrastructure for Fusion Simulations

Pär Strand; B. Guillerminet; Isabel Campos Plasencia; Jose M. Cela; Rui Coelho; David Coster; L.-G. Eriksson; Matthieu Haefele; Francesco Iannone; F. Imbeaux; Adrian Jackson; G. Manduchi; Michal Owsiak; Marcin Płóciennik; Alejandro Soba; Eric Sonnendrücker; Jan Westerholm

The Integrated Tokamak Modelling Task Force (ITM-TF) is developing an infrastructure where the validation needs, formulated in terms of multi-device data access and detailed physics comparisons aiming for inclusion of synthetic diagnostics in the simulation chain, are key components. A device-independent approach to data transport and a standardized approach to data management (data structures, naming, and access) are being developed in order to allow cross-validation between different fusion devices using a single toolset. The effort is focused on ITER plasmas and ITER scenario development on current fusion devices. The modelling tools are, however, aimed at general use and can be promoted in other areas of modelling as well. Extensive work has already gone into the development of standardized descriptions of the data (Consistent Physical Objects), providing initial steps towards a complete fusion modelling ontology. The longer-term aim is a complete simulation platform which is expected to last and be extended in different ways for the coming 30 years. The technical underpinning is therefore of vital importance. In particular, the platform needs to be extensible and open-ended, able to take full advantage not only of today's most advanced technologies but also of future developments. A full, comprehensive prediction of ITER physics rapidly becomes expensive in terms of computing resources and may cover a range of computing paradigms. The simulation framework therefore needs to be able to use both grid and HPC computing facilities. Hence, data access and code coupling technologies are required to be available for a heterogeneous, possibly distributed, environment. The developments in this area are pursued in a separate project, EUFORIA (EU Fusion for ITER Applications). The current status of ITM-TF and EUFORIA is presented and discussed.
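
As a purely hypothetical illustration of the "standardized data description" idea (not the actual Consistent Physical Object schema or any ITM-TF code), a device-independent, self-describing record might look something like the following in C; every field name and value here is invented for the example.

```c
/* Hypothetical illustration of a device-independent, self-describing data
 * record, in the spirit of the standardized data descriptions the abstract
 * mentions. Field names and layout are invented; this is NOT the actual
 * Consistent Physical Object schema. */
#include <stdio.h>

typedef struct {
    const char *device;     /* machine-independent device label, e.g. "JET" */
    const char *quantity;   /* standardized physics name */
    const char *units;      /* agreed unit string */
    double      time;       /* time slice [s] */
    int         n_points;   /* radial grid size */
    double      rho[3];     /* normalised radial coordinate (tiny example) */
    double      value[3];   /* profile values on that grid */
} PlasmaProfile;

int main(void) {
    /* The same record layout could be filled from any device's database,
     * which is what enables cross-device validation with a single toolset. */
    PlasmaProfile te = { "JET", "electron_temperature", "eV", 48.2, 3,
                         { 0.0, 0.5, 1.0 }, { 5200.0, 3100.0, 150.0 } };
    for (int i = 0; i < te.n_points; ++i)
        printf("%s rho=%.1f %s=%.0f %s\n",
               te.device, te.rho[i], te.quantity, te.value[i], te.units);
    return 0;
}
```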


PLOS ONE | 2017

BeatBox - HPC simulation environment for biophysically and anatomically realistic cardiac electrophysiology

Mario Antonioletti; Vadim N. Biktashev; Adrian Jackson; Sanjay Kharche; Tomas Stary; Irina V. Biktasheva

The BeatBox simulation environment combines a flexible scripting-language user interface with robust computational tools, in order to set up cardiac electrophysiology in-silico experiments without low-level re-coding, so that cell excitation, tissue/anatomy models, and stimulation protocols may be included in a BeatBox script, and simulations run either sequentially or in parallel (MPI) without re-compilation. BeatBox is free software written in C and runs on Unix-based platforms. It provides the whole spectrum of multi-scale tissue modelling, from 0-dimensional individual-cell simulation, through 1-dimensional fibres, 2-dimensional sheets and 3-dimensional slabs of tissue, up to anatomically realistic whole-heart simulations, with run-time measurements including cardiac re-entry tip/filament tracing, ECG, local/global samples of any variables, etc. The BeatBox solver, cell, and tissue/anatomy model repositories are extended via robust and flexible interfaces, thus providing an open framework for new developments in the field. In this paper we give an overview of the current state of BeatBox, together with a description of the main computational methods and MPI parallelisation approaches.
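
As a generic illustration of the 0-dimensional (single-cell) end of the multi-scale range described above, the sketch below integrates the well-known FitzHugh-Nagumo cell model with forward Euler in plain C. It is not BeatBox code and does not use BeatBox's script syntax; the parameter values are textbook defaults chosen for the example.

```c
/* Generic illustration of a 0-dimensional cell-excitation model:
 * forward-Euler integration of the FitzHugh-Nagumo equations.
 * This is NOT BeatBox code or its script syntax. */
#include <stdio.h>

int main(void) {
    double u = -1.0, v = 1.0;            /* excitation and recovery variables */
    const double a = 0.7, b = 0.8, eps = 0.08, I = 0.5;  /* textbook defaults */
    const double dt = 0.01;              /* time step */

    for (int step = 0; step < 10000; ++step) {
        double du = u - u*u*u/3.0 - v + I;   /* du/dt = u - u^3/3 - v + I   */
        double dv = eps * (u + a - b*v);     /* dv/dt = eps*(u + a - b*v)   */
        u += dt * du;
        v += dt * dv;
        if (step % 1000 == 0)
            printf("t=%.1f  u=%.4f  v=%.4f\n", step * dt, u, v);
    }
    return 0;
}
```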


Parallel, Distributed and Network-Based Processing | 2010

EUFORIA HPC: Massive Parallelisation for Fusion Community

Adrian Jackson; Adam Carter; Joachim Hein; Mats Aspnäs; M. Ropo; Alejandro Soba

One of the central tasks of EUFORIA is to port, parallelise, and optimise fusion simulation codes developed at individual research institutes in Europe. There are three supercomputing centres involved in the project, located in Barcelona, Edinburgh, and Helsinki. For some of the fusion codes, simply porting them to one of the supercomputers represents a major advancement in the use of the codes, as until now they have mainly been used by a small user community, or even exclusively by the author of the code. Also, where codes can currently only use one processor (i.e. are serial), providing any parallel functionality can be of major benefit to the code and the code owner(s). Many of the simulation codes for edge and core transport modelling of fusion plasma using high performance computing are estimated to currently require weeks or months of execution time to simulate science at the scale required to model the new fusion reactor ITER, and therefore these codes have to be optimised to run as fast as possible and parallelised in such a way that computer resources are used as effectively as possible. During the first fifteen months of the project, we have successfully ported eleven fusion codes to the supercomputers in Barcelona, Edinburgh and Helsinki. The installation procedure, library requirements, and runtime scripts have been documented for each code and deposited in the EUFORIA software repository and code revision system. Following this, a number of these codes have been chosen for code optimisation and improvements in parallelisation, and this paper outlines the experience that we have had with some of these codes, the performance improvements achieved, and the techniques used.


Parallel, Distributed and Network-Based Processing | 2011

High Performance I/O

Adrian Jackson; Fiona Reid; Joachim Hein; Alejandro Soba; Xavier Sáez

Parallelisation, serial optimisation, compiler tuning, and many more techniques are used to optimise and improve the performance scaling of parallel programs. One area which is frequently not optimised is file I/O. This is because it is often not considered to be key to the performance of a program, and also because it is traditionally difficult to optimise and very machine specific. However, in the current era of Peta- and Exascale computing it is no longer possible to ignore I/O performance, as it can significantly limit the scaling of many codes when executing on very large numbers of processors or cores. Furthermore, as producing data is the main purpose of most simulation codes, any work that can be undertaken to improve the performance of I/O is applicable to a very large range of simulation codes, and provides them with improved functionality (i.e. the ability to produce more data). This paper describes some of the issues surrounding I/O, the technology that is commonly deployed to provide I/O on HPC machines, and the software libraries available to programmers to undertake I/O. The performance of all these aspects of I/O on a range of HPC systems was investigated by the authors and is presented in this paper to motivate the discussions.
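
One of the library-level approaches relevant to this kind of study is collective parallel I/O through MPI-IO; the sketch below shows the general pattern of all ranks writing disjoint slices of a single shared file in one collective call. The file name, data sizes, and layout are arbitrary choices for the example, not taken from the paper.

```c
/* Minimal sketch of collective parallel I/O with MPI-IO. Error handling is
 * omitted for brevity. Compile with: mpicc mpiio_sketch.c -o mpiio_sketch */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n_local = 1 << 20;                       /* values per rank */
    double *buf = malloc(n_local * sizeof(double));
    for (int i = 0; i < n_local; ++i) buf[i] = rank + i * 1e-6;

    /* All ranks write disjoint slices of one shared file in a single
     * collective call, letting the MPI library aggregate requests rather
     * than issuing one small POSIX write per process. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_Offset offset = (MPI_Offset)rank * n_local * sizeof(double);
    MPI_File_write_at_all(fh, offset, buf, n_local, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    free(buf);
    MPI_Finalize();
    return 0;
}
```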


Archive | 2017

Networked Carbon Markets: Permissionless Innovation with Distributed Ledgers?

Adrian Jackson; Ashley D. Lloyd; Justin D Macinante; Markus Hüwener

Carbon markets are key components in the climate change mitigation response, enabling a price to be placed on carbon emissions. Connecting these markets has the potential to allow a more integrated, efficient, and globally consistent price on carbon, which will promote greater confidence in the market and investment and, ultimately, help foster new technology development through climate finance. The challenge in connecting carbon markets is that each individual carbon market has its own legal and regulatory framework and its own rules for assigning and accounting for the carbon units traded. This presents significant legal and political hurdles that rule out a single, global carbon market being established. An alternative, 'bottom-up' solution, enabling carbon trading between a range of markets without forcing legal and regulatory homogeneous standardisation and conformity on those markets, would be a more practical way to connect them. One candidate technology to facilitate such a connection is the 'Distributed Ledger' (DL), which provides the combination of a distributed database with public/private key encryption and a decentralised infrastructure. This potentially allows for innovative solutions in data sharing or transaction management application areas, making it a good first-order match to the emerging requirements for an interoperable carbon market infrastructure. To meet the objectives of the Paris Agreement, a solution needs to be found that facilitates a global-scale distributed infrastructure which allows a diverse set of markets and participants to utilise and exploit it. The established literature on innovation and technology diffusion gives guidance on this issue, but no ability to predict success. The purpose of this White Paper, therefore, is to outline the most important questions identified in relation to the connecting of carbon markets through the application of DL technologies, and to outline the authors' current thoughts on those questions.


IEEE Transactions on Parallel and Distributed Systems | 2015

Optimising Performance through Unbalanced Decompositions

Adrian Jackson; Joachim Hein; Colin Roach

When significant communication costs arise in the solution of multidimensional problems on parallel computers, optimal performance cannot always be achieved by perfectly balancing the computational load across cores. Modest sacrifices in the computational load balance may facilitate substantial overall performance improvements by achieving large savings in the costs associated with communications. This general approach is illustrated by application to GS2, an initial-value gyrokinetic simulation code developed to study low-frequency turbulence in magnetized plasma. GS2 is parallelised using MPI, with the simulation domain decomposed across tasks. The optimal domain decomposition is non-trivial, and is complicated by the fact that several domain decompositions are needed and that these are not all optimal at the chosen task count. Application of the novel approach outlined in this paper to GS2 has improved performance by up to 17 percent for a representative simulation. Similar strategies may be beneficial in a broader class of problems.
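
The trade-off at the heart of this approach can be shown with a toy cost model (this is not GS2 code, and the cost constants are invented for the example): accept a slightly larger maximum per-core load if that removes enough communication to lower the overall time per step.

```c
/* Toy illustration of trading load balance for communication. The constants
 * are made up for demonstration and do not come from the paper or from GS2. */
#include <stdio.h>

/* Predicted time per step: slowest rank's compute plus its communication. */
static double step_time(int max_cells_per_rank, double comm_cost) {
    const double t_cell = 1.0e-6;      /* assumed seconds per cell update */
    return max_cells_per_rank * t_cell + comm_cost;
}

int main(void) {
    const int n_cells = 1000000, n_ranks = 96;

    /* Perfectly balanced split: even load, but the awkward partition forces
     * extra halo/redistribution traffic between the decompositions. */
    int    bal_max  = (n_cells + n_ranks - 1) / n_ranks;
    double bal_comm = 8.0e-3;          /* assumed redistribution cost */

    /* Unbalanced split: some ranks carry ~10% more cells, but the partition
     * now lines up with the data layout and most of the traffic disappears. */
    int    unbal_max  = (int)(bal_max * 1.10);
    double unbal_comm = 2.0e-3;

    double t_bal = step_time(bal_max, bal_comm);
    double t_un  = step_time(unbal_max, unbal_comm);

    printf("balanced  : %.4f s/step\n", t_bal);
    printf("unbalanced: %.4f s/step\n", t_un);
    printf("choose the %s decomposition\n",
           t_un < t_bal ? "unbalanced" : "balanced");
    return 0;
}
```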


Symbolic and Numeric Algorithms for Scientific Computing | 2013

Optimised Hybrid Parallelisation of a CFD Code on Many Core Architectures

Adrian Jackson; M. Sergio Campobasso

Reliable aerodynamic and aeroelastic design of wind turbines, aircraft wings and turbomachinery blades increasingly relies on the use of high-fidelity Navier-Stokes Computational Fluid Dynamics codes to predict the strongly nonlinear periodic flows associated with structural vibrations and periodically varying farfield boundary conditions. On a single computer core, the harmonic balance solution of the Navier-Stokes equations has been shown to significantly reduce the analysis runtime with respect to the conventional time-domain approach. The problem size of realistic simulations, however, requires high-performance computing. The Computational Fluid Dynamics COSA code features a novel harmonic balance Navier-Stokes solver which has previously been parallelised using both a pure MPI implementation and a hybrid MPI/OpenMP implementation. This paper presents the recently completed optimisation of both parallelisations. The achieved performance improvements of both parallelisations highlight the effectiveness of the adopted parallel optimisation strategies. Moreover, a comparative analysis of the optimal performance of the two parallelisations, in terms of runtime and power consumption on some of the current common HPC architectures, highlights the reduction of both aspects achievable by using the hybrid parallelisation with emerging many-core architectures.


Parallel, Distributed and Network-Based Processing | 2011

Parallel Optimisation Strategies for Fusion Codes

Adrian Jackson; Fiona Reid; Stephen Booth; Joachim Hein; Mats Aspnäs; Miquel Català; Alejandro Soba

We have previously documented the on-going work in the EUFORIA project to parallelise and optimise European fusion simulation codes. This involves working with a wide range of codes to try and address any performance and scaling issues that these codes have. However, as no two simulation codes are exactly the same, it is very hard to apply exactly the same approach to optimising a disparate range of codes. Indeed, the codes investigated range in terms of performance and ability from well-optimised, highly parallelised codes, to serial or poorly performing codes. After analysing, optimising, and parallelising a range of codes it is, actually, possible to discern a number of distinct optimisation techniques or approaches/strategies that can be used to improve the performance or scaling of a parallel simulation code. This paper outlines the distinct approaches that we have identified, highlighting their benefits and drawbacks, giving an overview of the type of work that is often attempted for fusion simulation code optimisation.

Collaboration


Dive into Adrian Jackson's collaboration.

Top Co-Authors

Joachim Hein

University of Edinburgh

Fiona Reid

University of Edinburgh

Angus Creech

University of Edinburgh
