T. L. Freeman
University of Manchester
Publications
Featured research published by T. L. Freeman.
conference on object-oriented programming systems, languages, and applications | 2000
Mikel Luján; T. L. Freeman; John R. Gurd
In this paper we review the design of a sequential object-oriented linear algebra library, OoLaLa. Several designs are proposed and used to classify existing sequential object-oriented libraries. The classification is based on the way that matrices and matrix operations are represented. OoLaLa's representation of matrices is capable of dealing with certain matrix operations that, although mathematically valid, are not handled correctly by existing libraries. OoLaLa also enables implementations of matrix calculations at various abstraction levels, ranging from the relatively low-level abstraction of a Fortran BLAS-like implementation to higher-level abstractions that hide many implementation details. OoLaLa addresses a wide range of numerical linear algebra functionality, while the reviewed object-oriented libraries concentrate on parts of such functionality. We include some preliminary performance results for a Java implementation of OoLaLa.
Proceedings of the 2002 joint ACM-ISCOPE conference on Java Grande | 2002
Mikel Luján; John R. Gurd; T. L. Freeman; José Miguel
The Java language specification states that every access to an array needs to be within the bounds of that array, i.e. between 0 and array length − 1. Different techniques for different programming languages have been proposed to eliminate explicit bounds checks. Some of these techniques are implemented in off-the-shelf Java Virtual Machines (JVMs). The underlying principle of these techniques is that bounds checks can be removed when a JVM/compiler has enough information to guarantee that a sequence of accesses (e.g. inside a for-loop) is safe (within the bounds). Most of the techniques for the elimination of array bounds checks have been developed for programming languages that do not support multi-threading and/or dynamic class loading. These two characteristics make most of these techniques unsuitable for Java. Techniques developed specifically for Java have not addressed the elimination of array bounds checks in the presence of indirection, that is, when the index is stored in another array (indirection array). With the objective of optimising applications with array indirection, this paper proposes and evaluates three implementation strategies, each implemented as a Java class. The classes provide the functionality of Java arrays of type int so that objects of the classes can be used instead of indirection arrays. Each strategy enables JVMs, when examining only one of these classes at a time, to obtain enough information to remove array bounds checks.
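The paper's three strategies are not reproduced here, but the general idea can be sketched as a wrapper class that establishes, at construction time, the invariant that every stored index lies within the bounds of the target array. The class and method names below are hypothetical, for illustration only.

```java
// Hypothetical sketch of an indirection-array wrapper. The constructor checks
// once that all stored indices lie in [0, bound); a JVM that inlines get() and
// sees this invariant could then elide the per-access bounds check on
// data[ia.get(k)] inside a loop.
public final class BoundedIntArray {
    private final int[] indices;
    private final int bound;

    public BoundedIntArray(int[] indices, int bound) {
        for (int v : indices) {
            if (v < 0 || v >= bound) {
                throw new IllegalArgumentException(
                        "index " + v + " outside [0, " + bound + ")");
            }
        }
        this.indices = indices.clone(); // defensive copy preserves the invariant
        this.bound = bound;
    }

    public int length() { return indices.length; }

    public int get(int k) { return indices[k]; }

    // Gather data[indices[k]] for all k; every indirect access is provably safe.
    public static double[] gather(double[] data, BoundedIntArray ia) {
        double[] out = new double[ia.length()];
        for (int k = 0; k < ia.length(); k++) {
            out[k] = data[ia.get(k)];
        }
        return out;
    }
}
```

The defensive copy in the constructor matters: without it, a caller holding the original int[] could later store an out-of-bounds index and silently break the invariant.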
parallel computing | 1991
T. L. Freeman; Michael K. Bane
Frequent synchronisations have a significant effect on the efficiency of parallel numerical algorithms. In this paper we consider simultaneous polynomial zero-finding algorithms and analyse, both theoretically and numerically, the effect of removing the synchronisation restriction from these algorithms.
parallel computing | 1995
J.M. Bull; T. L. Freeman
We address the problem of implementing globally adaptive algorithms for multi-dimensional quadrature on parallel computers. By adapting and extending algorithms which we have developed for one-dimensional quadrature, we derive algorithms for the multi-dimensional case. The algorithms are targeted at the latest generation of parallel computers, and are therefore independent of the network topology. Numerical results on a Kendall Square Research KSR-1 are reported.
parallel computing | 2000
T. L. Freeman; David Hancock; J. Mark Bull; Rupert W. Ford
In earlier papers ([2], [3], [6]), feedback guided loop scheduling algorithms have been shown to be very effective for certain loop scheduling problems which involve a sequential outer loop and a parallel inner loop, and for which the workload of the parallel loop changes only slowly from one execution to the next. In this paper the extension of these ideas to the case of nested parallel loops is investigated. We describe four feedback guided algorithms for scheduling nested loops and evaluate the performance of the algorithms on a set of synthetic benchmarks.
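The underlying feedback-guided idea can be illustrated with a simplified single-loop sketch (not one of the paper's four nested-loop algorithms): per-iteration costs measured on the previous execution of the outer loop are used to choose chunk boundaries of approximately equal load for the next execution.

```java
// Illustrative sketch of feedback-guided partitioning for a single parallel
// loop. cost[i] > 0 is the measured cost of iteration i on the previous outer
// step; the method returns p+1 boundaries b[0]=0 <= ... <= b[p]=n so that
// processor t executes iterations [b[t], b[t+1]) with roughly total/p load.
public final class FeedbackGuidedSchedule {
    public static int[] boundaries(double[] cost, int p) {
        int n = cost.length;
        double total = 0.0;
        for (double c : cost) total += c;
        double target = total / p;        // ideal load per processor
        int[] b = new int[p + 1];
        double acc = 0.0;
        int chunk = 1;
        for (int i = 0; i < n; i++) {
            acc += cost[i];
            // Place boundary k once the running load reaches k * target.
            while (chunk < p && acc >= chunk * target) {
                b[chunk++] = i + 1;
            }
        }
        b[p] = n;
        return b;
    }
}
```

Because the workload is assumed to change only slowly between outer-loop executions, boundaries computed from the previous step's measurements remain nearly optimal for the next step, avoiding the cost of dynamic scheduling.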
international conference on computational science | 2005
Mikel Luján; Anila Usman; Patrick Hardie; T. L. Freeman; John R. Gurd
Many storage formats (or data structures) have been proposed to represent sparse matrices. This paper presents a performance evaluation in Java comparing eight of the most popular formats plus one recently proposed specifically for Java (by Gundersen and Steihaug [6] – Java Sparse Array) using the matrix-vector multiplication operation.
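One of the classic formats such an evaluation typically includes is compressed row storage (CRS/CSR). A minimal sketch of CSR matrix-vector multiplication in Java:

```java
// Sketch of y = A*x with the compressed row storage (CSR) format.
// For row i, the nonzeros are val[rowPtr[i] .. rowPtr[i+1]-1], with their
// column indices in the corresponding entries of colIdx.
public final class CsrMatVec {
    public static double[] multiply(int n, int[] rowPtr, int[] colIdx,
                                    double[] val, double[] x) {
        double[] y = new double[n];
        for (int i = 0; i < n; i++) {
            double sum = 0.0;
            for (int k = rowPtr[i]; k < rowPtr[i + 1]; k++) {
                sum += val[k] * x[colIdx[k]];   // only nonzeros are touched
            }
            y[i] = sum;
        }
        return y;
    }
}
```

For example, the 3×3 matrix with rows (1, 0, 2), (0, 3, 0), (4, 0, 5) is stored as rowPtr = {0, 2, 3, 5}, colIdx = {0, 2, 1, 0, 2}, val = {1, 2, 3, 4, 5}.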
international conference on parallel architectures and languages europe | 1994
J. Mark Bull; T. L. Freeman
A globally adaptive algorithm for approximating one-dimensional definite integrals on parallel computers is described. The algorithm is implemented on a Kendall Square Research KSR-1 parallel computer and numerical results are presented. The algorithm gives significant speedups on a range of hard problems, including ones with singular integrands. A number of alternatives for the interval selection strategy that is at the core of this algorithm are evaluated.
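The globally adaptive strategy can be sketched sequentially (an illustrative sketch, not the paper's parallel algorithm): keep a priority queue of subintervals ordered by estimated error, and repeatedly bisect the interval with the largest error until the total error estimate meets the tolerance.

```java
import java.util.PriorityQueue;
import java.util.function.DoubleUnaryOperator;

// Sequential sketch of globally adaptive quadrature using Simpson's rule and
// a crude two-panel-minus-one-panel error estimate. The parallel versions
// distribute this select-and-bisect loop across processors.
public final class AdaptiveQuadrature {
    private record Interval(double a, double b, double estimate, double error) {}

    // Simpson's rule on [a, b].
    private static double simpson(DoubleUnaryOperator f, double a, double b) {
        double m = 0.5 * (a + b);
        return (b - a) / 6.0
                * (f.applyAsDouble(a) + 4.0 * f.applyAsDouble(m) + f.applyAsDouble(b));
    }

    private static Interval make(DoubleUnaryOperator f, double a, double b) {
        double m = 0.5 * (a + b);
        double whole = simpson(f, a, b);
        double halves = simpson(f, a, m) + simpson(f, m, b);
        return new Interval(a, b, halves, Math.abs(halves - whole));
    }

    public static double integrate(DoubleUnaryOperator f, double a, double b, double tol) {
        // Global interval selection: the queue always yields the worst interval.
        PriorityQueue<Interval> q =
                new PriorityQueue<>((u, v) -> Double.compare(v.error(), u.error()));
        Interval first = make(f, a, b);
        q.add(first);
        double totalError = first.error();
        while (totalError > tol && q.size() < 100_000) {  // cap guarantees termination
            Interval worst = q.poll();
            totalError -= worst.error();
            double m = 0.5 * (worst.a() + worst.b());
            Interval left = make(f, worst.a(), m);
            Interval right = make(f, m, worst.b());
            q.add(left);
            q.add(right);
            totalError += left.error() + right.error();
        }
        double sum = 0.0;
        for (Interval iv : q) sum += iv.estimate();
        return sum;
    }
}
```

The interval selection strategy is exactly the part the paper evaluates alternatives for; in a parallel setting, contention on the shared queue makes the choice of selection rule a genuine design decision.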
parallel, distributed and network-based processing | 2005
Mikel Luján; Gibson Mukarakate; John R. Gurd; T. L. Freeman
Many object-oriented frameworks have focused on exploiting parallelism on either shared memory or distributed memory machines. This paper presents the design, implementation and preliminary evaluation of a Java-based framework that exploits both forms of parallel machine connected together to form heterogeneous environments. The framework readily supports implementation of problems that can be solved in parallel by a divide-and-conquer approach.
Concurrency and Computation: Practice and Experience | 2005
Mikel Luján; T. L. Freeman; John R. Gurd
OOLALA is an object-oriented linear algebra library designed to reduce the effort of software development and maintenance. In contrast with traditional (Fortran-based) libraries, it provides two high abstraction levels that significantly reduce the number of implementations necessary for particular linear algebra operations. Initial performance evaluations of a Java implementation of OOLALA show that the two high abstraction levels are not competitive with the low abstraction level of traditional libraries. These initial performance results motivate the present contribution: the characterization of a set of storage formats (data structures) and matrix properties (special features) for which implementations at the two high abstraction levels can be transformed into implementations at the low (more efficient) abstraction level.
Proceedings of the 2002 joint ACM-ISCOPE conference on Java Grande | 2002
Mikel Luján; T. L. Freeman; John R. Gurd
OoLaLa is an object-oriented linear algebra library designed to reduce development and maintenance effort [2]. In contrast with traditional (Fortran-based) libraries, it provides two high abstraction levels that significantly reduce the combinatorial number of implementations necessary for particular linear algebra operations. Traditional libraries sacrifice abstraction for performance, and their implementations of matrix operations have embedded knowledge about the data structures, i.e. storage formats (SFs), and special characteristics, i.e. matrix properties (MPs). These implementations are said to be at Storage Format Abstraction level (SFA-level). The first higher abstraction level, Matrix Abstraction level (MA-level), provides indexed random access to matrix elements. The second higher abstraction level, Iterator Abstraction level (IA-level), is based on the iterator pattern and provides sequential access to nonzero matrix elements. Performance results of a Java version of OoLaLa show that MA-level and IA-level are not competitive with SFA-level, and motivate the two questions addressed in this poster: (1) how can implementations at MA-level and IA-level be transformed into the more efficient SFA-level? and (2) for which sets of SFs and MPs can this be done automatically? The latter question is addressed by the definition of a subset of MPs, the linear combination matrix properties (LCMP), and a subset of SFs, the constant time element access storage formats (CTSF). The set LCMP is defined as the subset of MPs such that every MP is based on a boolean expression involving linear combinations of the indices i and j to determine whether an element a_ij might be a nonzero element. An example of an LCMP is the upper triangular property (i.e. a_ij is zero when i > j). The set CTSF is defined as the subset of SFs such that matrix elements can be accessed in constant time. An example of a CTSF is the dense format, i.e. a two-dimensional array.
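The contrast between abstraction levels can be illustrated with a small sketch (hypothetical API and class names, not OoLaLa's actual interfaces): MA-level code is written once against indexed element access, while SFA-level code is specialised to a concrete storage format.

```java
// Illustrative contrast of abstraction levels for an upper triangular matrix
// (an LCMP: a_ij = 0 when i > j) held in a dense row-major array (a CTSF:
// constant-time element access). Hypothetical API, not OoLaLa's.
public final class AbstractionLevels {

    // SFA-level: storage-format knowledge is embedded in the loop bounds,
    // so the known zeros (i > j) are never touched.
    public static double[] matVecSfa(double[] dense, int n, double[] x) {
        double[] y = new double[n];
        for (int i = 0; i < n; i++) {
            double sum = 0.0;
            for (int j = i; j < n; j++) {
                sum += dense[i * n + j] * x[j];
            }
            y[i] = sum;
        }
        return y;
    }

    // MA-level: one generic implementation for any matrix via get(i, j);
    // simpler to write and maintain, but every element is fetched and tested.
    public interface Matrix {
        int order();
        double get(int i, int j);
    }

    public static double[] matVecMa(Matrix a, double[] x) {
        int n = a.order();
        double[] y = new double[n];
        for (int i = 0; i < n; i++) {
            double sum = 0.0;
            for (int j = 0; j < n; j++) {
                sum += a.get(i, j) * x[j];
            }
            y[i] = sum;
        }
        return y;
    }
}
```

The point of the poster's two questions is precisely whether, for LCMP/CTSF combinations like this one, a compiler can turn the generic matVecMa into the specialised matVecSfa automatically.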
The former question is tackled by a sequence of standard compiler transformations, as follows: (1) method inlining; (2) move the guards for the inlined methods to surround the loop (or loops) and include nullity tests for every object; (3) remove try-catch and throw clauses; (4) make local copies of the accessed attributes (class and instance variables); (5) disambiguate aliases; (6) remove redundant computations; and (7) transform while-loops into for-loops. Note that loop restructuring transformations for the generation of block or recursive code are outside the scope of this work. This sequence is beneficial not only for OoLaLa, but also for other libraries based on arrays (e.g. the JSR-083 ArrayList, HashMap and HashSet). Further information can be found in [1].
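A hand-worked toy example of the effect of this sequence (illustrative code written by hand, not the output of an actual compiler pass):

```java
// "Before" sums elements through an accessor inside a while-loop; "after"
// shows the combined effect of (1) inlining the accessor, (2) hoisting the
// nullity guard out of the loop, (4) making a local copy of the accessed
// attribute, and (7) rewriting the while-loop as a for-loop, leaving a loop
// whose bounds checks a JVM can remove.
public final class TransformSketch {
    private double[] data;

    public TransformSketch(double[] data) { this.data = data; }

    public double get(int i) { return data[i]; }   // accessor to be inlined

    public double sumBefore() {
        double s = 0.0;
        int i = 0;
        while (i < data.length) {
            s += get(i);        // call and implicit null/bounds checks per element
            i++;
        }
        return s;
    }

    public double sumAfter() {
        double[] local = this.data;                   // (4) local copy
        if (local == null) {                          // (2) hoisted nullity guard
            throw new NullPointerException();
        }
        double s = 0.0;
        for (int i = 0; i < local.length; i++) {      // (7) while -> for
            s += local[i];                            // (1) inlined accessor;
        }                                             // bounds checks removable
        return s;
    }
}
```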