Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where John R. Gurd is active.

Publication


Featured research published by John R. Gurd.


Communications of the ACM | 1985

The Manchester prototype dataflow computer

John R. Gurd; Chris C. Kirkham; Ian Watson

The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism.
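
A minimal Java sketch of the dynamic-tagging idea described above: every token carries a tag identifying its activation (for example, a loop iteration), and a matching store pairs tokens bound for the same two-input instruction. All names are illustrative; this is not the Manchester hardware design.

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of tagged-token matching; names are hypothetical.
class MatchingStore {
    // A token is keyed by its destination instruction and its tag, so
    // operands from different loop iterations can never pair with each other.
    record Key(int destInstruction, int tag) {}
    record Token(int destInstruction, int tag, double value) {}

    private final Map<Key, Token> waiting = new HashMap<>();

    // Returns the waiting partner token if one exists, else stores this one.
    Token offer(Token t) {
        Key k = new Key(t.destInstruction(), t.tag());
        Token partner = waiting.remove(k);
        if (partner == null) {
            waiting.put(k, t);   // first operand to arrive waits
            return null;
        }
        return partner;          // both operands present: instruction can fire
    }
}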


Archive | 2001

Euro-Par 2001 Parallel Processing

Rizos Sakellariou; John R. Gurd; Len Freeman; John A. Keane

A software component framework is one where an application designer programs by composing well-understood and tested “components” rather than writing large volumes of not-very-reusable code. The software industry has been using component technology to build desktop applications for about ten years now. More recently, this idea has been extended to applications in distributed systems with frameworks like the CORBA Component Model and Enterprise JavaBeans. With the advent of Grid computing, high-performance applications may be distributed over a wide-area network of compute and data servers. Also, “peer-to-peer” applications exploit vast amounts of parallelism by harnessing the resources of thousands of servers. In this talk we look at the problem of building a component technology for scientific applications. The Common Component Architecture project seeks to build a framework that allows software components running on massively parallel computers to be linked together to form wide-area, high-performance application services that may be accessed from desktop applications. This problem is far from being solved, and the talk describes progress to date and outlines some of the difficult problems that remain to be solved.
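
The composition style described in the talk can be illustrated with a minimal provides/uses port sketch in Java; the interfaces and names below are illustrative assumptions, not the actual Common Component Architecture API.

// Illustrative provides/uses composition; not the CCA API.
interface SolverPort { double[] solve(double[] rhs); }

class LinearSolver implements SolverPort {              // provides SolverPort
    public double[] solve(double[] rhs) { return rhs.clone(); }  // stub solver
}

class Simulation {                                      // uses SolverPort
    private SolverPort solver;
    void connect(SolverPort p) { solver = p; }          // framework wires ports
    void step() { solver.solve(new double[] {1.0, 2.0}); }
}

class Assembly {
    public static void main(String[] args) {
        Simulation sim = new Simulation();
        sim.connect(new LinearSolver());   // composition instead of new code
        sim.step();
    }
}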


Aspect-Oriented Software Development | 2004

Using AspectJ to separate concerns in parallel scientific Java code

Bruno Harbulot; John R. Gurd

Scientific software frequently demands high performance in order to execute complex models in acceptable time. A major means of obtaining high performance is parallel execution on multi-processor systems. However, traditional methods of programming for parallel execution can lead to substantial code-tangling where the needs of the mathematical model crosscut with the concern of parallel execution. Aspect-Oriented Programming is an attractive technology for solving the problem of code-tangling in high-performance parallel scientific software. The underlying mathematical model and the parallelism can be treated as separate concerns and programmed accordingly. Their elements of code can then be woven together to produce the final application. This paper investigates the extent to which AspectJ technology can be used to achieve the desired separation of concerns in programs from the Java Grande Forum benchmark suite, a set of test applications for evaluating the performance of Java in the context of numerical computation. The paper analyses three different benchmark programs and classifies the degrees of difficulty in separating concerns within them in a form suitable for AspectJ. This leads to an assessment of the influence of the design of a numerical application on the ability of AspectJ to solve this kind of code-tangling problem. It is concluded that: (1) scientific software is rarely produced in true object-oriented style; and (2) the inherent loop structure of many scientific algorithms is incompatible with the join point philosophy of AspectJ. Since AspectJ cannot intercept the iterations of for-loops (which are at the heart of high-performance computing), various object-oriented models are proposed for describing (embarrassingly parallel) rectangular double-nested for-loops that make it possible to use AspectJ for encapsulating parallelisation in an aspect. Finally, a test case using these models is presented, together with performance results obtained on various Java Virtual Machines.
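
The loop-model idea can be sketched in plain Java: once the nested loop is reified behind a method call, that call becomes a join point which an AspectJ aspect could advise and parallelise. The interface below is an illustrative assumption, not the paper's actual model.

// Illustrative loop model; reifying iteration exposes a join point.
interface LoopBody {
    void iterate(int i, int j);            // one (i, j) cell of the loop nest
}

class RectangularLoop {
    // An aspect could advise execute(..) and partition the index space
    // across threads instead of running it sequentially as here.
    void execute(LoopBody body, int ni, int nj) {
        for (int i = 0; i < ni; i++)
            for (int j = 0; j < nj; j++)
                body.iterate(i, j);
    }
}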


Grid Computing | 2007

Market-based grid resource allocation using a stable continuous double auction

Zhu Tan; John R. Gurd

A market-based grid resource allocation mechanism is presented and evaluated. It takes into account the architectural features and special requirements of computational grids while ensuring economic efficiency, even when the underlying resources are being used by self-interested and uncooperative participants. A novel stable continuous double auction (SCDA), based on the more conventional continuous double auction (CDA), is proposed for Grid resource allocation. It alleviates the unnecessarily volatile behaviour of the CDA, while maintaining other beneficial features. Experimental results show that the SCDA is superior to the CDA in terms of both economic efficiency and scheduling efficiency. The SCDA delivers continuous matching, high efficiency and low cost, allied with low price volatility and low bidding complexity. Its ability to deliver immediate allocation and its stable prices facilitate co-allocation of resources and it also enables incremental evolution towards a full grid resource market. Effective market-based Grid resource allocation is thus shown to be feasible.
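
A minimal Java sketch of the conventional CDA that the SCDA builds on: bids and asks arrive continuously and trade as soon as they cross. The stabilisation mechanism that distinguishes the SCDA is deliberately not modelled here.

import java.util.PriorityQueue;

// Illustrative continuous double auction core; not the paper's SCDA.
class DoubleAuction {
    private final PriorityQueue<Double> bids =
        new PriorityQueue<>((a, b) -> Double.compare(b, a));  // highest first
    private final PriorityQueue<Double> asks =
        new PriorityQueue<>();                                // lowest first

    void submitBid(double price) { bids.add(price); match(); }
    void submitAsk(double price) { asks.add(price); match(); }

    private void match() {
        // Trade immediately whenever the best bid meets or beats the best ask.
        while (!bids.isEmpty() && !asks.isEmpty() && bids.peek() >= asks.peek()) {
            double price = (bids.poll() + asks.poll()) / 2.0;
            System.out.println("trade at " + price);
        }
    }
}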


European Conference on Parallel Processing | 1998

OCEANS - Optimising Compilers for Embedded Applications

Michel Barreteau; François Bodin; Peter J. H. Brinkhaus; Zbigniew Chamski; Henri-Pierre Charles; Christine Eisenbeis; John R. Gurd; Jan Hoogerbrugge; Ping Hu; William Jalby; Peter M. W. Knijnenburg; Michael F. P. O'Boyle; Erven Rohou; Rizos Sakellariou; André Seznec; Elena Stöhr; Menno Anne Treffers; Harry A. G. Wijshoff

This paper presents an overview of the activities carried out within the second year of the ESPRIT project OCEANS, whose objective is to combine high- and low-level optimisation approaches within an iterative framework for compilation. In particular, we discuss our approach to iterative compilation.
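
A minimal Java sketch of the iterative-compilation feedback loop: compile the program under each candidate transformation sequence, run it, measure, and keep the fastest. The compileAndRun function is a placeholder assumption, not one of the OCEANS tools.

import java.util.List;

// Illustrative iterative-compilation driver; compileAndRun is a stand-in.
class IterativeCompiler {
    static long compileAndRun(List<String> transformations) {
        return transformations.size();     // placeholder for measured runtime
    }

    public static void main(String[] args) {
        List<List<String>> candidates = List.of(
            List.of("unroll=4", "tile=32"),
            List.of("unroll=8"),
            List.of("tile=64", "software-pipeline"));
        List<String> best = null;
        long bestTime = Long.MAX_VALUE;
        for (List<String> seq : candidates) {            // the feedback loop
            long t = compileAndRun(seq);
            if (t < bestTime) { bestTime = t; best = seq; }
        }
        System.out.println("best sequence: " + best);
    }
}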


Bioinformatics | 1999

A RAPID algorithm for sequence database comparisons: application to the identification of vector contamination in the EMBL databases.

Crispin J. Miller; John R. Gurd; Andy Brass

MOTIVATION: Word-matching algorithms such as BLAST are routinely used for sequence comparison. These algorithms typically use areas of matching words to seed alignments which are then used to assess the degree of sequence similarity. In this paper, we show that by formally separating the word-matching and sequence-alignment processes, and using information about word frequencies to generate alignments and similarity scores, we can create a new sequence-comparison algorithm which is both fast and sensitive. The formal split between word searching and alignment allows users to select an appropriate alignment method without affecting the underlying similarity search. The algorithm has been used to develop software for identifying entries in DNA sequence databases which are contaminated with vector sequence.

RESULTS: We present three algorithms, RAPID, PHAT and SPLAT, which together allow vector contaminations to be found and assessed extremely rapidly. RAPID is a word-search algorithm which uses probabilities to modify the significance attached to different words; PHAT and SPLAT are alignment algorithms. An initial implementation has been shown to be approximately an order of magnitude faster than BLAST. The formal split between word searching and alignment not only offers considerable gains in performance, but also allows alignment generation to be viewed as a user-interface problem, allowing the most useful output method to be selected without affecting the underlying similarity search. Receiver Operating Characteristic (ROC) analysis of an artificial test set allows the optimal score threshold for identifying vector contamination to be determined. ROC curves were also used to determine the optimum word size (nine) for finding vector contamination. An analysis of the entire expressed sequence tag (EST) subset of EMBL found a contamination rate of 0.27%. A more detailed analysis of the 50 000 ESTs in est10.dat (an EST subset of EMBL) finds an error rate of 0.86%, principally due to two large-scale projects.

AVAILABILITY: A Web page for the software exists at http://bioinf.man.ac.uk/rapid, or it can be downloaded from ftp://ftp.bioinf.man.ac.uk/RAPID

CONTACT: [email protected]
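
A minimal Java sketch of frequency-weighted word matching in the spirit of RAPID: shared k-letter words score by how surprising they are, so rare words contribute more to similarity. The scoring formula and default frequency are illustrative assumptions, not the published algorithm.

import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative frequency-weighted word matching; not the published RAPID.
class WordMatch {
    static Set<String> words(String seq, int k) {
        Set<String> out = new HashSet<>();
        for (int i = 0; i + k <= seq.length(); i++)
            out.add(seq.substring(i, i + k));
        return out;
    }

    // Score shared k-words, weighting each by -log of its database frequency,
    // so rare (more informative) words dominate the similarity score.
    static double score(String query, String subject, int k,
                        Map<String, Double> wordFreq) {
        Set<String> subjectWords = words(subject, k);
        double s = 0.0;
        for (String w : words(query, k))
            if (subjectWords.contains(w))
                s += -Math.log(wordFreq.getOrDefault(w, 1e-6));
        return s;
    }
}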


Conference on Object-Oriented Programming, Systems, Languages, and Applications | 2000

OoLALA: an object oriented analysis and design of numerical linear algebra

Mikel Luján; T. L. Freeman; John R. Gurd

In this paper we review the design of a sequential object-oriented linear algebra library, OoLALA. Several designs are proposed and used to classify existing sequential object-oriented libraries. The classification is based on the way that matrices and matrix operations are represented. OoLALA's representation of matrices is capable of dealing with certain matrix operations that, although mathematically valid, are not handled correctly by existing libraries. OoLALA also enables implementations of matrix calculations at various abstraction levels, ranging from the relatively low-level abstraction of a Fortran BLAS-like implementation to higher-level abstractions that hide many implementation details. OoLALA addresses a wide range of numerical linear algebra functionality, while the reviewed object-oriented libraries concentrate on parts of such functionality. We include some preliminary performance results for a Java implementation of OoLALA.
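
The abstraction-level idea can be sketched in Java: the same matrix-vector product written against a storage-hiding interface and as a BLAS-like loop over an explicit dense array. The interface below is an illustrative assumption, not OoLALA's actual design.

// Illustrative abstraction levels; not OoLALA's actual interfaces.
interface Matrix {
    double get(int i, int j);
    int rows();
    int cols();
}

class MatrixOps {
    // High abstraction: the storage format (dense, banded, sparse) is hidden.
    static double[] multiply(Matrix a, double[] x) {
        double[] y = new double[a.rows()];
        for (int i = 0; i < a.rows(); i++)
            for (int j = 0; j < a.cols(); j++)
                y[i] += a.get(i, j) * x[j];
        return y;
    }

    // Low abstraction, BLAS-like: a dense column-major layout is explicit.
    static void dgemv(int m, int n, double[] a, double[] x, double[] y) {
        for (int j = 0; j < n; j++)
            for (int i = 0; i < m; i++)
                y[i] += a[j * m + i] * x[j];
    }
}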


Philosophical Transactions of the Royal Society A | 2005

Towards performance control on the Grid

Kenneth R. Mayes; Mikel Luján; Graham D. Riley; Jonathan Chin; Peter V. Coveney; John R. Gurd

Advances in computational Grid technologies are enabling the development of simulations of complex biological and physical systems. Such simulations can be assembled from separate components—separately deployable computation units of well-defined functionality. Such an assemblage can represent an application composed of interacting simulations or might comprise multiple instances of a simulation executing together, each running with different simulation parameters. However, such assemblages need the ability to cope with heterogeneous and dynamically changing execution environments, particularly where such changes can affect performance. This paper describes the design and implementation of a prototype performance control system (PerCo), which is capable of monitoring the progress of simulations and redeploying them so as to optimize performance. The ability to control performance by redeployment is demonstrated using an assemblage of lattice Boltzmann simulations running with and without control policies. The cost of using PerCo is evaluated and it is shown that PerCo is able to reduce overall execution time.
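
A minimal Java sketch of the monitor-and-redeploy loop such a controller runs: compare each component's measured progress rate against a policy threshold and trigger migration when it lags. All names are illustrative assumptions, not the PerCo API.

import java.util.Map;

// Illustrative performance-control loop; names are not the PerCo API.
class PerformanceController {
    interface Redeployer { void migrate(String component); }

    // Policy: migrate any component progressing below the threshold rate.
    void controlStep(Map<String, Double> stepsPerSecond,
                     double threshold, Redeployer redeployer) {
        for (Map.Entry<String, Double> e : stepsPerSecond.entrySet())
            if (e.getValue() < threshold)
                redeployer.migrate(e.getKey());  // move to a faster resource
    }
}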


Euromicro Workshop on Parallel and Distributed Processing | 2000

FINESSE: a prototype feedback-guided performance enhancement system

Nandini Mukherjee; Graham D. Riley; John R. Gurd

FINESSE is a prototype environment designed to support rapid development of parallel programs for single-address-space computers by both expert and non-expert programmers. The environment provides semi-automatic support for systematic, feedback-based reduction of the various classes of overhead associated with parallel execution. The characterisation of parallel performance by overhead analysis is first reviewed, and then the functionality provided by FINESSE is described. The utility of this environment is demonstrated by using it to develop parallel implementations of Tred2, a well-known benchmark for automatic parallelising compilers, for an SGI Origin 2000 platform.
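
The overhead characterisation that such feedback is based on can be illustrated with standard definitions: for serial time T1 and parallel time Tp on p processors, speedup is T1/Tp, efficiency is speedup/p, and total overhead is p*Tp - T1. A small Java example with illustrative numbers:

// Illustrative overhead analysis, using standard definitions.
class OverheadAnalysis {
    public static void main(String[] args) {
        double t1 = 120.0;  // serial execution time in seconds (illustrative)
        double tp = 20.0;   // parallel execution time on p processors
        int p = 8;
        double speedup = t1 / tp;              // 6.0
        double efficiency = speedup / p;       // 0.75
        double overhead = p * tp - t1;         // 40 processor-seconds lost
        System.out.printf("speedup %.1f, efficiency %.2f, overhead %.1f%n",
                          speedup, efficiency, overhead);
    }
}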


IEEE Transactions on Parallel and Distributed Systems | 1990

Iterative instructions in the Manchester Dataflow Computer

A. P. W. Böhm; John R. Gurd

The authors investigate the nature and extent of the benefits and adverse effects of iterative instructions in the prototype Manchester Dataflow Computer. Iterative instructions are shown to be highly beneficial in terms of the number of instructions executed and the number of tokens transferred between modules during a program run. This benefit is apparent at hardware level, giving significantly reduced program execution times. However, the full benefits are not realized due to interference between lengthy iterative instructions. It is suggested that restructuring of buffers and the function unit array in the prototype hardware configuration can reduce this interference. Other possibilities for improvement are suggested. For example, the slowdown effect observed in hardware speedup curves could be tackled by treating iterative instructions differently from fine-grain instructions. An alternative structure for the processing element in which certain function units are specialized for executing iterative instructions is being investigated in this connection.

Collaboration


Dive into John R. Gurd's collaborations.

Top Co-Authors

Mikel Luján, University of Manchester
T. L. Freeman, University of Manchester
Ian Watson, University of Manchester
Bruno Harbulot, University of Manchester
John A. Keane, University of Manchester