
Publications


Featured research published by John H. Merlin.


Proceedings of the First International ACPC Conference on Parallel Computation | 1991

ADAPTing Fortran 90 Array Programs for Distributed Memory Architectures

John H. Merlin

We describe a system that we are developing whose purpose is to automatically transform data-parallel Fortran 90 programs for execution on MIMD distributed memory architectures. The system is called ADAPT (for ‘Array Distribution Automatic Parallelisation Tool’). Programs for the system should make full use of the array features of Fortran 90, as parallelism is automatically extracted from the array syntax. Parallelisation is by data partitioning, guided by ‘distribution’ declarations that the user inserts in the program, these being the only additions required to standard Fortran 90 programs. This paper gives a brief overview of the array features of Fortran 90, describes the ‘distribution’ declarations required by ADAPT, and gives details of the parallelisation scheme.
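The data-partitioning idea the abstract describes can be sketched in a few lines. The following is a minimal Python/NumPy illustration, assuming a BLOCK-style distribution of one array dimension; ADAPT itself operates on Fortran 90 source, and its actual ‘distribution’ declaration syntax is not reproduced here.

```python
import numpy as np

def block_partition(n, nprocs):
    """Split index range [0, n) into contiguous blocks, one per process,
    mimicking a BLOCK-style distribution (illustrative only; ADAPT's real
    declarations are Fortran 90 annotations, not shown here)."""
    sizes = [n // nprocs + (1 if p < n % nprocs else 0) for p in range(nprocs)]
    bounds, lo = [], 0
    for s in sizes:
        bounds.append((lo, lo + s))
        lo += s
    return bounds

# A whole-array operation (B = 2*A + 1) applied block by block, as a
# data-parallel translator might after partitioning the array.
A = np.arange(10.0)
B = np.empty_like(A)
for lo, hi in block_partition(len(A), 3):
    B[lo:hi] = 2.0 * A[lo:hi] + 1.0
```

Each block's slice of the whole-array expression is independent of the others, which is exactly the property that lets array syntax be parallelised automatically.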


Scientific Programming | 1995

An Introduction to High Performance Fortran

John H. Merlin; Anthony J. G. Hey

High Performance Fortran (HPF) is an informal standard for extensions to Fortran 90 to assist its implementation on parallel architectures, particularly for data-parallel computation. Among other things, it includes directives for specifying data distribution across multiple memories, and concurrent execution features. This article provides a tutorial introduction to the main features of HPF.
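As a rough illustration of the data-distribution directives mentioned above: HPF lets the programmer declare, for example, BLOCK or CYCLIC mappings of array dimensions onto processors. The Python functions below sketch those two index-to-processor mappings; they are illustrative only, since HPF expresses distribution through compiler directives rather than executable code.

```python
def owner_block(i, n, p):
    """Processor owning index i under a BLOCK distribution of n elements
    over p processors (sketch of the mapping behind an HPF
    DISTRIBUTE (BLOCK) directive)."""
    b = -(-n // p)          # ceiling block size
    return i // b

def owner_cyclic(i, p):
    """Processor owning index i under a CYCLIC (round-robin) distribution."""
    return i % p

# 8 elements over 4 processors:
print([owner_block(i, 8, 4) for i in range(8)])   # [0, 0, 1, 1, 2, 2, 3, 3]
print([owner_cyclic(i, 4) for i in range(8)])     # [0, 1, 2, 3, 0, 1, 2, 3]
```

BLOCK keeps neighbouring elements on the same processor (good for stencil locality), while CYCLIC spreads them round-robin (good for load balance).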


Future Generation Computer Systems | 1999

Multiple data parallelism with HPF and KeLP

John H. Merlin; Scott B. Baden; Stephen J. Fink; Barbara M. Chapman

High Performance Fortran (HPF) is an effective language for implementing regular data parallel applications on distributed memory architectures, but it is not well suited to irregular, block-structured applications such as multiblock and adaptive mesh methods. A solution to this problem is to use an SPMD program to coordinate multiple concurrent HPF tasks, each operating on a regular subgrid of the multiblock domain. This paper presents such a system, in which the coordination layer is provided by the C++ class library KeLP. We describe the KeLP–HPF implementation and programming model, and show an example KeLP–HPF multiblock solver together with performance results.


Concurrency and Computation: Practice and Experience | 2001

Parallel versions of Stone's strongly implicit algorithm

Jeff Reeve; Anthony Scurr; John H. Merlin

In this paper, we describe various methods of deriving a parallel version of Stone's Strongly Implicit Procedure (SIP) for solving sparse linear equations arising from finite difference approximations to partial differential equations (PDEs). Sequential versions of this algorithm have been very successful in solving semiconductor, heat conduction and flow simulation problems, and an efficient parallel version would enable much larger simulations to be run. An initial investigation of various parallelizing strategies was undertaken using a version of High Performance Fortran (HPF), and the best methods were reprogrammed using the MPI message-passing libraries for increased efficiency. Early attempts concentrated on developing a parallel version of the characteristic wavefront computation pattern of the existing sequential SIP code. However, a red-black ordering of grid points, similar to that used in parallel versions of the Gauss–Seidel algorithm, is shown to be far more efficient. The results of both the wavefront and red-black MPI-based algorithms are reported for various problem sizes and numbers of processors on a sixteen-node IBM SP2.
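The red-black ordering referred to above can be illustrated with the Gauss–Seidel method it is borrowed from: grid points are coloured like a checkerboard, so every red point depends only on black neighbours and vice versa, and all points of one colour can be updated concurrently. The sketch below is a plain Python illustration of that ordering for a small Laplace problem, not the paper's parallel SIP code; the grid size and boundary values are made up.

```python
import numpy as np

def redblack_gauss_seidel(u, sweeps):
    """Red-black ordered Gauss-Seidel sweeps for the 2D Laplace equation.
    Within each sweep, all points of one colour are updated before the
    other colour; in a parallel setting each colour's updates are
    independent and can proceed concurrently."""
    for _ in range(sweeps):
        for colour in (0, 1):
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 == colour:
                        u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j]
                                          + u[i, j-1] + u[i, j+1])
    return u

# One hot boundary edge (value 1), the rest held at 0; the interior
# relaxes toward the steady-state solution.
u = np.zeros((6, 6))
u[0, :] = 1.0
u = redblack_gauss_seidel(u, 200)
```

By contrast, the wavefront ordering mentioned in the abstract updates points along diagonals, so each diagonal must wait for the previous one, which limits parallelism.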


European Conference on Parallel Processing | 1999

FITS - A Light-Weight Integrated Programming Environment

Barbara M. Chapman; François Bodin; L. Hill; John H. Merlin; G. Viland; Fritz Georg Wollenweber

Few portable programming environments exist to support the labour-intensive process of application development for parallel systems. Popular stand-alone tools for analysis, restructuring, debugging and performance optimisation have not been successfully combined to create integrated development environments. In the ESPRIT project FITS we have created such a toolset, based upon commercial and research tools, for parallel application development. Component tools are loosely coupled; with little modification, they may invoke functions from other components. Thus integration comes at minimal cost to the vendor, who retains vital product independence. The FITS interface is publicly available and the toolset is easily extensible.


IEEE International Conference on High Performance Computing, Data, and Analytics | 1998

Multiple Data Parallelism with HPF and KeLP

John H. Merlin; Scott B. Baden; Stephen J. Fink; Barbara M. Chapman

High Performance Fortran (HPF) is an effective language for implementing regular data parallel applications on distributed memory architectures, but it is not well suited to irregular, block-structured applications such as multiblock and adaptive mesh methods. A solution to this problem is to use a non-HPF SPMD program to coordinate multiple concurrent HPF tasks, each operating on a regular subgrid of an irregular data domain. To this end we have developed an interface between the C++ class library KeLP, which supports irregular, dynamic block-structured applications on distributed systems, and an HPF compiler, SHPF. This allows KeLP to handle the data layout and inter-block communications, and to invoke HPF concurrently on each block. There are a number of advantages to this approach: it combines the strengths of both KeLP and HPF; it is relatively easy to implement; and it involves no extensions to HPF or HPF compilers. This paper describes the KeLP-HPF implementation and programming model, and shows an example KeLP-HPF multiblock solver.
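The coordination pattern described above, an SPMD layer launching regular data-parallel tasks over the blocks of an irregular domain, can be sketched as follows. This is only an analogy in Python: `task` is a hypothetical stand-in for a compiled HPF task, and a thread pool stands in for the processes that KeLP actually manages and communicates between.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def task(block):
    """Stand-in for one HPF task: a regular data-parallel update applied
    to one rectangular block of the irregular multiblock domain
    (illustrative; the real system invokes compiled HPF code here)."""
    return 2.0 * block + 1.0

# Coordination layer (the role KeLP plays): it holds the irregular block
# decomposition and launches one task per block concurrently.
blocks = [np.zeros((4, 4)), np.zeros((2, 8)), np.zeros((8, 2))]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(task, blocks))
```

The key point mirrored here is the separation of concerns: the coordinator knows only the block layout, while each task sees a single regular array, which is exactly what HPF handles well.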


The Journal of Supercomputing | 2000

Program Development Tools for Clusters of Shared Memory Multiprocessors

Barbara M. Chapman; John H. Merlin; D. Pritchard; François Bodin; Yann Mével; Tor Sørevik; L. Hill

Applications are increasingly being executed on computational systems that have hierarchical parallelism. There are several programming paradigms which may be used to adapt a program for execution in such an environment. In this paper, we outline some of the challenges in porting codes to such systems, and describe a programming environment that we are creating to support the migration of sequential and MPI code to a cluster of shared memory parallel systems, where the target program may include MPI, OpenMP or both. As part of this effort, we are evaluating several experimental approaches to aiding in this complex application development task.


Science | 1988

Topological solutions in gauge theory and their computer graphic representation.

Anthony J. G. Hey; John H. Merlin; Martin William Ricketts; Michael T. Vaughn; David C. Williams

A diverse range of physical phenomena, both observed and hypothetical, are described by topological solutions to nonlinear gauge field theories. Computer-generated color graphic displays can provide a clear and detailed representation of some of these solutions, which might otherwise be physically unintelligible because of their mathematical complexity. Graphical representations are presented here for two topological solutions: (i) the solutions of a model that represents the filaments of quantized magnetic flux in a superconductor, and (ii) the solutions of an SO(3) gauge theory corresponding to a pair of separated magnetic monopoles. An introduction is provided to the gauge field theories giving rise to these solutions.


Concurrency and Computation: Practice and Experience | 1995

The Genesis Distributed-Memory Benchmarks. Part 2: COMMS1, TRANS1, FFT1 and QCD2 Benchmarks on the SUPRENUM and iPSC/860 Computers

Anthony J. G. Hey; Roger W. Hockney; Vladimir Getov; I. C. Wolton; John H. Merlin; James Allwright

The Genesis benchmark suite has been assembled to evaluate the performance of distributed-memory MIMD systems. The problems selected all have a scientific origin (mostly from physics or theoretical chemistry), and range from synthetic code fragments designed to measure the basic hardware properties of the computer (especially communication and synchronisation overheads), through commonly used library subroutines, to full application codes. This is the second of a series of papers on the Genesis distributed-memory benchmarks, which were developed under the European ESPRIT research program. Results are presented for the SUPRENUM and iPSC/860 computers when running the following benchmarks: COMMS1 (communications), TRANS1 (matrix transpose), FFT1 (fast Fourier transform) and QCD2 (conjugate gradient kernel). The theoretical predictions are compared with, or fitted to, the measured results, and then used to predict (with due caution) how the performance might scale for larger problems and more processors than were actually available during the benchmarking.
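Ping-pong communication benchmarks such as COMMS1 are commonly summarised by fitting a linear timing model, t(n) = t0 + n/r_inf (startup latency plus asymptotic bandwidth, in the style of Hockney's parametrisation), to measured message times, which also supports the kind of extrapolation the abstract mentions. A minimal Python sketch, using synthetic, made-up timings rather than any measured results from the paper:

```python
import numpy as np

# Synthetic ping-pong data: message lengths n (bytes) and times t (s)
# generated from an assumed 50 us startup and 10 MB/s asymptotic rate.
n = np.array([0.0, 1e3, 1e4, 1e5])
t = 50e-6 + n / 10e6

# Least-squares fit of t(n) = t0 + n / r_inf.
slope, t0 = np.polyfit(n, t, 1)
r_inf = 1.0 / slope            # asymptotic bandwidth (bytes/s)
n_half = t0 * r_inf            # message length reaching half of r_inf
```

Once t0 and r_inf are known, the model predicts communication cost for message sizes and machine configurations beyond those actually measured, with the caveats about extrapolation the abstract itself notes.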


Concurrency and Computation: Practice and Experience | 1993

The Genesis distributed-memory benchmarks. Part I: Methodology and general relativity benchmark with results for the SUPRENUM computer

Cliff Addison; James Allwright; Norman Binsted; Nigel Bishop; Bryan Carpenter; Peter Dalloz; David Gee; Vladimir Getov; Roger W. Hockney; Max Lemke; John H. Merlin; Mark Pinches; Chris Scott; I.C. Wolton

Collaboration


Explore John H. Merlin's collaborations.

Top Co-Authors

Jeff Reeve, University of Southampton
James Allwright, University of Southampton
Vladimir Getov, University of Westminster
Scott B. Baden, University of California
Anthony Scurr, University of Southampton