Publication


Featured research published by Mark Parsons.


adaptive hardware and systems | 2007

Maxwell - a 64 FPGA Supercomputer

Robert Baxter; Stephen Booth; Mark Bull; Geoff Cawood; James Perry; Mark Parsons; Alan Simpson; Arthur Trew; Andrew McCormick; Graham Smart; Ronnie Smart; Allan J. Cantle; Richard Chamberlain; Gildas Genest

We present the initial results from the FHPCA Supercomputer project at the University of Edinburgh. The project has successfully built a general-purpose 64 FPGA computer and ported to it three demonstration applications from the oil, medical and finance sectors. This paper describes in brief the machine itself - Maxwell - its hardware and software environment and presents very early benchmark results from runs of the demonstrators.


adaptive hardware and systems | 2007

High-Performance Reconfigurable Computing - the View from Edinburgh

Robert Baxter; Stephen Booth; Mark Bull; Geoff Cawood; Kenton D'Mellow; Xu Guo; Mark Parsons; James Perry; Alan Simpson; Arthur Trew

This paper reviews the current state of the art in high-performance reconfigurable computing (HPRC) from the perspective of EPCC, the high-performance computing centre at the University of Edinburgh. We look at architectural and programming trends and assess some of the challenges that HPRC needs to address in order to drive itself across the chasm from the optimistic early adopters to the pragmatic early majority.


adaptive hardware and systems | 2007

The FPGA High-Performance Computing Alliance Parallel Toolkit

Robert Baxter; Stephen Booth; Mark Bull; Geoff Cawood; James Perry; Mark Parsons; Alan Simpson; Arthur Trew; Andrew McCormick; Graham Smart; Ronnie Smart; Allan J. Cantle; Richard Chamberlain; Gildas Genest

We describe the FPGA HPC Alliance's parallel toolkit (PTK), an initial step towards the standardization of high-level configuration and APIs for high-performance reconfigurable computing (HPRC). We discuss the motivation and challenges of reaping the performance benefits of FPGAs for memory-bound HPC codes and describe the approach we have taken on the FHPCA supercomputer Maxwell.


Lecture Notes in Computer Science | 2004

EU Funded Grid Development in Europe

Paul S. Graham; Matti Heikkurinen; Jarek Nabrzyski; Ariel Oleksiak; Mark Parsons; Heinz Stockinger; Kurt Stockinger; Maciej Stroiński; Jan Węglarz

Several Grid projects have been established that deploy a “first generation Grid”. In order to categorise existing projects in Europe, we have developed a taxonomy and applied it to 20 European Grid projects funded by the European Commission through the Framework 5 IST programme. We briefly describe the projects and thus provide an overview of current Grid activities in Europe. Next, we suggest future trends based on both the European Grid activities and the progress of the world-wide Grid community. The work we present here is a source of information that aims to help promote European Grid development.


ieee international conference on high performance computing, data, and analytics | 2015

Feasibility Study of Porting a Particle Transport Code to FPGA

Iakovos Panourgias; Michele Weiland; Mark Parsons; David Turland; Dave Barrett; Wayne Gaudin

In this paper we discuss porting a particle transport code, which is based on a wavefront sweep algorithm, to FPGA. The original code is written in Fortran90. We describe the key differences between general-purpose CPUs and Field Programmable Gate Arrays (FPGAs) and provide a detailed performance model of the FPGA. We describe the steps we took when porting the Fortran90 code to FPGA. Finally, the paper presents results from an extensive benchmarking exercise using a Virtex-6 FPGA.
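The wavefront sweep dependency pattern the abstract refers to can be sketched briefly. In this illustrative Python sketch (the grid size and grouping are assumptions for demonstration, not taken from the paper), cell (i, j) depends on its upstream neighbours (i-1, j) and (i, j-1), so all cells on the same anti-diagonal i + j are mutually independent and can be processed in parallel — which is what makes the algorithm amenable to a pipelined FPGA implementation.

```python
# Minimal sketch of a wavefront ("diagonal") sweep ordering: cells are grouped
# by anti-diagonal d = i + j; every cell in one group has all of its upstream
# dependencies in earlier groups, so each group is safe to process in parallel.

def wavefront_sweep(nx, ny):
    """Return cells of an nx-by-ny grid grouped into parallel-safe wavefronts."""
    fronts = []
    for d in range(nx + ny - 1):                 # one wavefront per anti-diagonal
        front = [(i, d - i) for i in range(nx) if 0 <= d - i < ny]
        fronts.append(front)
    return fronts

# Example: a 3x3 grid has 5 wavefronts, widest in the middle of the sweep.
fronts = wavefront_sweep(3, 3)
print([len(f) for f in fronts])   # -> [1, 2, 3, 2, 1]
```

The widening-then-narrowing front sizes show why sustained parallelism is hardest to extract at the start and end of each sweep.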


ieee international conference on high performance computing, data, and analytics | 2015

Innovative Algorithms for Extreme Scale Computing

Frédéric Magoulès; Mark Parsons; Lorna Smith

For the past thirty years, the need for ever greater supercomputer performance has driven the development of many computing technologies which have subsequently been exploited in the mass market. Delivering an exaflop (or a million million million calculations per second) by the end of this decade is the challenge that the supercomputing community worldwide has set itself. Developing techniques and solutions that address the most difficult problems computing at the exascale presents is a formidable task. Equipment vendors, programming tool providers, academics, and end users must all work together to build the development and debugging environments, algorithms and libraries, user tools, and the underpinning and cross-cutting technologies required to support the execution of applications at the exascale.

This special issue of the journal is dedicated to advances in high performance computing in engineering and the way to exascale. It contains papers selected from the Exascale Applications and Software Conference (EASC2013), held on 9–11 April 2013 in Edinburgh, United Kingdom. The issue contains five papers illustrating recent advances along the path to exascale, covering algorithms, implementations and applications for solving large-scale engineering problems.

The first paper, by Reverdy et al., reports the realisation of the first cosmological simulations on the scale of the whole observable universe, focusing on the numerical aspects of two new simulations. In practice, each of these simulations has evolved 550 billion dark matter particles in an adaptive mesh refinement grid, and one of the new simulations has pushed the total number of grid points up from 2000 billion for the Λ Cold Dark Matter (ΛCDM) model to 2200 billion due to the formation of a larger number of structures. The authors highlight the optimisations and adjustments required to run such a set of simulations and then summarise some important lessons learnt for future exascale computing projects. Numerical examples illustrate the effectiveness of the procedure on 4752 nodes of the Curie Bull supercomputer.

The second paper, by Mozdzynski et al., presents the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System (IFS), enhanced to use Fortran 2008 coarrays to overlap computation and communication in the context of OpenMP parallel regions. Today ECMWF runs a 16 km global T1279 operational weather forecast model using 1536 cores. Following the historical evolution in resolution upgrades, ECMWF could expect to be running a 2.5 km global forecast model by 2030 on an exascale system that should be available, and hopefully affordable, by then. Achieving this would require IFS to run efficiently on about 1000 times the number of cores it uses today. This significant challenge is addressed in the paper: after implementing an initial set of improvements, ECMWF demonstrates a 10 km global model running efficiently on over 40,000 cores of the HECToR Cray XE6 supercomputer.

The third paper, by Gray et al., describes a multi-GPU implementation of the Ludwig application, which specialises in simulating a variety of complex fluids via lattice Boltzmann fluid dynamics coupled to additional physics describing complex fluid constituents. The authors describe their methodology for augmenting the original CPU version with GPU functionality in a maintainable fashion. After presenting several optimisations that maximise performance on the GPU architecture through tuning for the GPU memory hierarchy, they describe how to implement particles within the fluid in such a way as to avoid a major divergence of the CPU and GPU codebases, whilst minimising data transfer at each timestep. Numerical results show that the application demonstrates excellent scaling to at least 8192 GPUs in parallel (the largest system tested at the time of writing) on the Titan Cray XK7 supercomputer.

Exascale computers are expected to have highly hierarchical architectures with nodes composed of multi-core processors (CPUs) and accelerators (GPUs). The different programming levels generate new difficulties and algorithmic issues. The paper by Magoulès et al. presents Alinea, which stands for Advanced LINEar Algebra, a library well suited for hybrid CPU/
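The computation/communication overlap that the IFS paper achieves with Fortran 2008 coarrays can be sketched in miniature. The sketch below uses Python threads rather than coarrays, and every name in it (the halo dictionary, the sleep standing in for transfer time) is an illustrative assumption: the point is only the ordering — start the halo exchange asynchronously, do the interior work that needs no remote data, then wait before touching the boundary.

```python
# Minimal overlap sketch: communication runs in a background thread while the
# interior computation (which needs no halo data) proceeds on the main thread.
import threading
import time

def exchange_halo(halo):
    """Stand-in for an asynchronous halo exchange with a neighbouring rank."""
    time.sleep(0.01)                 # stands in for network transfer time
    halo["left"], halo["right"] = 10, 20

halo = {}
comm = threading.Thread(target=exchange_halo, args=(halo,))
comm.start()                         # communication proceeds in the background...

interior = sum(range(100))           # ...while interior work needs no halo data

comm.join()                          # wait for the exchange to complete
boundary = halo["left"] + halo["right"]   # only now is the halo safe to read
print(interior, boundary)            # -> 4950 30
```

The same discipline — launch the transfer, compute what you can, synchronise only where the data dependency forces it — is what hides communication latency at scale.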


IEEE Fourth International Conference on eScience | 2008

eScience '08: IEEE Fourth International Conference on eScience

Alistair Grant; Mario Antonioletti; Alastair Hume; Amrey Krause; Bartosz Dobrzelecki; Michael Jackson; Mark Parsons; Malcolm P. Atkinson; Elias Theocharopoulos

In modern distributed computing, vast amounts of data are stored in many different formats, employing different storage solutions. OGSA-DAI 3.0 is a middleware solution that provides application developers with the means to access data distributed across multiple platforms with different native access mechanisms. Data integration can take place at the server, and results can be delivered using a variety of protocols and mechanisms within OGSA-DAI. It accomplishes this through a highly flexible and extensible framework that can accommodate different types of data resources, such as XML databases, relational databases or files, and different operations, such as transformation between formats, selection and filtering. The framework can be extended by a developer to provide customised functionality for project-specific tasks while using generic functions for common tasks such as database querying. This paper presents an overview of OGSA-DAI and how it tackles data access and integration through a set of example use cases.
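The activity-pipeline idea described above can be illustrated with a toy sketch. This is not the OGSA-DAI API — the three stand-in functions below are hypothetical — but it shows the composition pattern the abstract describes: independent activities (a query, a transformation, a delivery) chained so that each consumes the previous one's output.

```python
# Illustrative activity pipeline (stand-ins only, not OGSA-DAI's actual API):
# query -> transform -> deliver, each stage consuming the previous stage's output.

def sql_query(rows, predicate):
    """Stand-in for a database-query activity: filter rows by a predicate."""
    return [r for r in rows if predicate(r)]

def to_csv(rows):
    """Stand-in for a transformation activity: rows -> CSV lines."""
    return [",".join(str(v) for v in r) for r in rows]

def deliver(lines):
    """Stand-in for a delivery activity: here it just joins the lines."""
    return "\n".join(lines)

# Compose the activities exactly as a workflow engine would chain them.
table = [(1, "alpha"), (2, "beta"), (3, "gamma")]
result = deliver(to_csv(sql_query(table, lambda r: r[0] > 1)))
print(result)   # -> "2,beta" and "3,gamma" on two lines
```

Because each activity only agrees on the shape of the data flowing between stages, new resource types or transformations can be slotted in without touching the rest of the pipeline — the extensibility the framework is built around.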


adaptive hardware and systems | 2007

Hybrid Communication Medium for Adaptive SoC Architectures

Robert Baxter; Stephen Booth; Mark Bull; Geoff Cawood; Kenton D'Mellow; Xu Guo; Mark Parsons; James Perry; Alan Simpson; Arthur Trew


Fourth UK e-Science All Hands Meeting | 2005

Proceedings of the UK e-Science All Hands Meeting 2005

Mario Antonioletti; Neil Chue Hong; Alastair Hume; Michael Jackson; Kostas Karasavvas; Amy Krause; Jennifer M. Schopf; Malcolm Atkinson; Bartosz Dobrzelecki; Malcolm Illingworth; Nicola McDonnell; Mark Parsons; Elias Theocharopoulos


Archive | 2006

Profiling OGSA-DAI Performance for Common Use Patterns

Bartosz Dobrzelecki; Mario Antonioletti; Jennifer M. Schopf; Alastair Hume; Malcolm P. Atkinson; N. P. Chue Hong; Mike Jackson; Kostas Karasavvas; Amrey Krause; Mark Parsons; Tom Sugden; Elias Theocharopoulos

Collaboration


Dive into Mark Parsons's collaborations.

Top Co-Authors

Oscar Corcho
Technical University of Madrid

Arthur Trew
University of Edinburgh

Rob Baxter
University of Edinburgh