
Publication


Featured research published by Philip W. Trinder.


International East/West Database Workshop | 1990

Bulk types for large scale programming

Malcolm P. Atkinson; Philippe Richard; Philip W. Trinder

Work on the design of constructors for bulk data types is reported. It introduces highly parametric constructors, parameterised both by types and by properties other than types; we call such constructors type-quarks. The motivation for and properties of bulk types are discussed, and two examples of bulk types provided via type-quarks, maps and quads, are presented. Several important questions about this approach to bulk types are identified.
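The map bulk type can be illustrated in modern Haskell with Data.Map (a sketch by analogy only; the paper's own type-quark notation and its quad type are not reproduced here):

```haskell
import qualified Data.Map as Map

-- A map as a bulk type: a collection indexed by key, manipulated
-- with generic bulk operations rather than element-by-element code.
salaries :: Map.Map String Int
salaries = Map.fromList [("ada", 120), ("bob", 90), ("cyd", 105)]

-- Bulk operations apply uniformly across the whole collection.
raised :: Map.Map String Int
raised = Map.map (+ 10) salaries

wellPaid :: Map.Map String Int
wellPaid = Map.filter (> 100) raised
```

The point of such constructors is that the same bulk operations (map, filter, fold) are parameterised over both element types and collection properties.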


International Conference on Parallel Architectures and Languages Europe | 1993

Processing Transactions on GRIP, a Parallel Graph Reducer

Gert Akerholt; Kevin Hammond; Simon L. Peyton Jones; Philip W. Trinder

The GRIP architecture allows efficient execution of functional programs on a multi-processor built from standard hardware components. State-of-the-art compilation techniques are combined with sophisticated runtime resource-control to give good parallel performance. This paper reports the results of running GRIP on an application which is apparently unsuited to the basic functional model: a database transaction manager incorporating updates as well as lookup transactions. The results obtained show good relative speedups for GRIP, with real performance advantages over the same application executing on sequential machines.
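The functional treatment of a transaction manager can be sketched as pure functions that thread the database value through each operation (our illustration under stated assumptions, not the GRIP code itself):

```haskell
import qualified Data.Map as Map

type DB = Map.Map Int String

-- A transaction is a pure function from the database to a result
-- and a (possibly updated) database; lookups leave it unchanged.
type Txn a = DB -> (a, DB)

lookupT :: Int -> Txn (Maybe String)
lookupT k db = (Map.lookup k db, db)

updateT :: Int -> String -> Txn ()
updateT k v db = ((), Map.insert k v db)

-- Sequencing threads the database through a list of transactions.
runAll :: [Txn a] -> DB -> ([a], DB)
runAll []       db = ([], db)
runAll (t : ts) db = let (x, db')   = t db
                         (xs, db'') = runAll ts db'
                     in (x : xs, db'')
```

Updates produce a new database value rather than mutating state, which is what makes the workload appear unsuited to the functional model yet still amenable to parallel graph reduction.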


Symposium/Workshop on Haskell | 2014

The HdpH DSLs for scalable reliable computation

Patrick Maier; Robert J. Stewart; Philip W. Trinder

The statelessness of functional computations facilitates both parallelism and fault recovery. Faults and non-uniform communication topologies are key challenges for emergent large scale parallel architectures. We report on HdpH and HdpH-RS, a pair of Haskell DSLs designed to address these challenges for irregular task-parallel computations on large distributed-memory architectures. Both DSLs share an API combining explicit task placement with sophisticated work stealing. HdpH focuses on scalability by making placement and stealing topology aware whereas HdpH-RS delivers reliability by means of fault tolerant work stealing. We present operational semantics for both DSLs and investigate conditions for semantic equivalence of HdpH and HdpH-RS programs, that is, conditions under which topology awareness can be transparently traded for fault tolerance. We detail how the DSL implementations realise topology awareness and fault tolerance. We report an initial evaluation of scalability and fault tolerance on a 256-core cluster and on up to 32K cores of an HPC platform.


British National Conference on Databases | 1994

Object Comprehensions: A Query Notation for Object-Oriented Databases

Daniel K. C. Chan; Philip W. Trinder

Existing object-oriented query notations have been criticised for being unclear, verbose, restrictive, and computationally weak. This paper introduces a new query notation, object comprehensions, that allows queries to be expressed clearly and concisely, and processed efficiently. Object comprehensions are designed for object-oriented databases and include features that are missing from or inadequate in existing object-oriented query languages. Novel features include: a predicate-based optimisable sub-language providing support for the class hierarchy; numerical quantifiers for dealing with occurrences of collection elements; operations addressing collection elements by position and order; high-level support for interaction between different collection kinds; and recursive queries with computation.
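Haskell's list comprehensions give the flavour of the comprehension style (the paper's object comprehension notation is richer, with numerical quantifiers and class-hierarchy support; the Employee type below is a hypothetical example of ours):

```haskell
-- A hypothetical collection of objects to query over.
data Employee = Employee { name :: String, dept :: String, salary :: Int }

employees :: [Employee]
employees =
  [ Employee "ada" "research" 120
  , Employee "bob" "sales"    90
  , Employee "cyd" "research" 105
  ]

-- A comprehension-style query: read declaratively as
-- "the name of e, for each e in employees, such that e is a
-- well-paid researcher".
wellPaidResearchers :: [String]
wellPaidResearchers =
  [ name e | e <- employees, dept e == "research", salary e > 100 ]
```

The generator-and-filter shape is what makes such queries both readable and amenable to algebraic optimisation.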


Computer Languages, Systems & Structures | 2014

Reliable scalable symbolic computation: The design of SymGridPar2

Patrick Maier; Robert J. Stewart; Philip W. Trinder

Symbolic computation is an important area of both Mathematics and Computer Science, with many large computations that would benefit from parallel execution. Symbolic computations are, however, challenging to parallelise as they have complex data and control structures, and both dynamic and highly irregular parallelism. The SymGridPar framework (SGP) has been developed to address these challenges on small-scale parallel architectures. However, the multicore revolution means that the number of cores and the number of failures are growing exponentially, and that the communication topology is becoming increasingly complex. Hence an improved parallel symbolic computation framework is required. This paper presents the design and initial evaluation of SymGridPar2 (SGP2), a successor to SymGridPar that is designed to provide scalability onto 10^5 cores, and hence also provide fault tolerance. We present the SGP2 design goals, principles and architecture. We describe how scalability is achieved using layering and by allowing the programmer to control task placement. We outline how fault tolerance is provided by supervising remote computations, and outline higher-level fault tolerance abstractions. We describe the SGP2 implementation status and development plans. We report the scalability and efficiency, including weak scaling to about 32,000 cores, and investigate the overheads of tolerating faults for simple symbolic computations.


Implementation and Application of Functional Languages | 1997

Parallelising a Large Functional Program or: Keeping LOLITA Busy

Hans-Wolfgang Loidl; Richard G. Morgan; Philip W. Trinder; Sanjay Poria; Chris Cooper; Simon L. Peyton Jones; Roberto Garigliano

In this paper we report on the ongoing parallelisation of LOLITA, a natural language engineering system. Although LOLITA currently exhibits only modest parallelism, we believe that it is the largest parallel functional program ever, comprising more than 47,000 lines of Haskell. LOLITA has the following interesting features common to real world applications of lazy languages: the code was not specifically designed for parallelism; laziness is essential for efficiency in LOLITA; LOLITA interfaces to data structures outside the Haskell heap, using a foreign language interface; LOLITA was not written by those most closely involved in the parallelisation.
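The semi-explicit style of parallelism typical of such Haskell work can be sketched with the `par` and `pseq` combinators (here taken from GHC.Conc as exposed by today's GHC; an illustration of the general technique, not LOLITA's actual code):

```haskell
import GHC.Conc (par, pseq)

-- Parallel naive Fibonacci: `par` sparks one recursive call for
-- possible parallel evaluation, `pseq` forces the other locally
-- before combining the results.
parFib :: Int -> Int
parFib n
  | n < 2     = n
  | otherwise = x `par` (y `pseq` x + y)
  where
    x = parFib (n - 1)
    y = parFib (n - 2)
```

Because `par` only hints at parallelism, the program's meaning is unchanged whether or not sparks are actually evaluated in parallel — which is what allows code not designed for parallelism, like LOLITA, to be annotated after the fact.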


Database Programming Languages | 1993

Building an Integrated Persistent Application

Dag I. K. Sjøberg; Malcolm P. Atkinson; João Correia Lopes; Philip W. Trinder

The major motivation for database programming language (DBPL) research is to facilitate the construction and maintenance of large data-intensive applications. To benefit fully from DBPLs, supporting methodologies and tools are needed. This paper describes the construction of a multi-author, multi-level thesaurus application (TA). Some tools and methodologies were used in constructing the TA, and our experiences identify requirements for others. Although the TA was built in a specific language (Napier88), the principles discovered apply to other DBPLs.


Z User Workshop | 1994

An Object-Oriented Data Model Supporting Multi-Methods, Multiple Inheritance, and Static Type Checking: A Specification in Z

Daniel K. C. Chan; Philip W. Trinder

This paper presents an object-oriented data model which supports all the essential features found in existing object-oriented data models. More importantly, it simultaneously supports multiple inheritance and method overloading together with static type checking. The model differs from other models in that fewer restrictions are imposed on defining overloaded methods, and better matching is provided at run-time between actual arguments and overloaded methods. Specifying the model in Z helps to overcome the ambiguity problems found in less formal approaches; moreover, one can reason about the properties of the model. The specification demonstrates the use of Z as a formal technique in an area where such a definition is greatly needed.


Concurrency and Computation: Practice and Experience | 2016

HPC-GAP: engineering a 21st-century high-performance computer algebra system

Reimer Behrends; Kevin Hammond; Vladimir Janjic; Alexander Konovalov; Stephen A. Linton; Hans-Wolfgang Loidl; Patrick Maier; Philip W. Trinder

Symbolic computation has underpinned a number of key advances in Mathematics and Computer Science. Applications are typically large and potentially highly parallel, making them good candidates for parallel execution at a variety of scales from multi-core to high-performance computing systems. However, much existing work on parallel computing is based around numeric rather than symbolic computations. In particular, symbolic computing presents problems in terms of varying granularity and irregular task sizes that do not match conventional approaches to parallelisation, as well as in the structure of its algorithms and data. This paper describes a new implementation of the free open-source GAP computational algebra system that places parallelism at the heart of the design, dealing with the key scalability and cross-platform portability problems. We provide three system layers that deal with the three most important classes of hardware: individual shared-memory multi-core nodes, mid-scale distributed clusters of (multi-core) nodes, and full-blown high-performance computing systems comprising large-scale tightly connected networks of multi-core nodes. This requires us to develop new cross-layer programming abstractions in the form of new domain-specific skeletons that allow us to seamlessly target different hardware levels. Our results show that, using our approach, we can achieve good scalability and speedups for two realistic exemplars, on high-performance systems comprising up to 32,000 cores, as well as on ubiquitous multi-core systems and distributed clusters. The work reported here paves the way towards full-scale exploitation of symbolic computation by high-performance computing systems, and we demonstrate the potential with two major case studies.


The Computer Journal | 1994

Evaluating object-oriented query languages

Daniel K. C. Chan; Philip W. Trinder; Ray Welland

Different query languages have been implemented, and others proposed, for object-oriented database systems. Evaluating and comparing these languages has been difficult due to the lack of a frame of reference. This paper establishes such a framework using four dimensions: support of object-orientation, expressive power, support of collections, and usability. Each dimension is defined in terms of a number of criteria. The criteria are, in turn, explained using example queries written in a concise, expressive, and clear query notation: object comprehensions. These same examples also demonstrate the process of evaluating a query language by showing how the criteria can be assessed. An evaluation based on the proposed framework reveals that many well-known query languages do not meet all the criteria. The evaluation framework can also be used constructively in improving existing query languages and directing new query language design.

Collaboration


Dive into Philip W. Trinder's collaborations.

Top Co-Authors

Kevin Hammond

University of St Andrews
