Tim Hopkins
University of Kent
Publications
Featured research published by Tim Hopkins.
Numerische Mathematik | 1980
P. R. Graves-Morris; Tim Hopkins
A modification of the Thacher-Tukey algorithm for rational interpolation is proposed. The method employed demonstrates the reliability of both the proposed algorithm and of the Thacher-Tukey algorithm. Furthermore, the proposed algorithm eliminates almost all of the array storage required to implement the Thacher-Tukey algorithm.
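As a hedged illustration of the class of interpolants involved, and not of the Thacher-Tukey algorithm or of the proposed modification, the Python sketch below constructs a rational interpolant as a Thiele continued fraction from inverse differences. The breakdown cases that a reliable algorithm must detect and handle are deliberately omitted here.

def thiele_coefficients(xs, fs):
    # Inverse differences; the diagonal entries become the continued-fraction
    # coefficients. Assumes no inverse difference vanishes (no breakdown).
    n = len(xs)
    phi = [list(fs)]                       # phi[0] holds f(x_i)
    for j in range(1, n):
        prev = phi[j - 1]
        row = []
        for i in range(j, n):
            row.append((xs[i] - xs[j - 1]) / (prev[i - j + 1] - prev[0]))
        phi.append(row)
    return [row[0] for row in phi]

def thiele_eval(xs, a, x):
    # Evaluate a[0] + (x-x0)/(a[1] + (x-x1)/(a[2] + ...)) from the inside out.
    val = a[-1]
    for j in range(len(a) - 2, -1, -1):
        val = a[j] + (x - xs[j]) / val
    return val

xs = [0.0, 1.0, 2.0, 3.0]
fs = [1.0, 0.5, 0.2, 0.1]                  # samples of f(x) = 1/(1 + x^2)
a = thiele_coefficients(xs, fs)
print(thiele_eval(xs, a, 2.0))             # 0.2: reproduces the data at a node
print(thiele_eval(xs, a, 1.5))             # ~0.325 vs f(1.5) = 0.3077...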
Advances in Computational Mathematics | 1994
Rudnei Dias da Cunha; Tim Hopkins
We describe the parallelisation of the GMRES(c) algorithm and its implementation on distributed-memory architectures, using both networks of transputers and networks of workstations under the PVM message-passing system. The test systems of linear equations considered are those derived from five-point finite-difference discretisations of partial differential equations. A theoretical model of the computation and communication phases is presented which allows us to decide for which values of the parameter c our implementation executes efficiently. The results show that for reasonably large discretisation grids the implementations are effective on a large number of processors.
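A minimal serial sketch of the setting studied above, assuming SciPy's restarted GMRES as a stand-in for the authors' PVM/transputer implementation: the system comes from a five-point finite-difference discretisation on a square grid, and the restart length plays the role of the parameter c.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def five_point_laplacian(n):
    # Five-point stencil for the discrete Laplacian on an n x n interior grid.
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsr()

n = 32
A = five_point_laplacian(n)
b = np.ones(n * n)

for c in (5, 10, 20):                      # restart lengths to compare
    residuals = []                         # one entry per inner iteration
    x, info = spla.gmres(A, b, restart=c, maxiter=2000,
                         callback=residuals.append)
    print(f"GMRES({c}): converged={info == 0}, "
          f"inner iterations={len(residuals)}, "
          f"residual={np.linalg.norm(b - A @ x):.2e}")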
Parallel Computing | 1995
Rudnei Dias da Cunha; Tim Hopkins
We present a collection of public-domain Fortran 77 routines for the solution of systems of linear equations using a variety of iterative methods. The routines implement methods which have been modified for their efficient use on parallel architectures with either shared or distributed memory. The package, PIM (Parallel Iterative Methods), was designed to be portable across different machines. Results are presented for a variety of parallel computers.
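As a hedged sketch of one member of the family of methods such a package implements (PIM's actual routines and interfaces are not reproduced here), the following serial NumPy code carries out the classical conjugate gradient iteration for a symmetric positive-definite system; parallel versions distribute the matrix-vector product and the inner products across processors.

import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                  # residual
    p = r.copy()                   # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # next conjugate direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))    # ~[0.0909, 0.6364]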
Simulation Modelling Practice and Theory | 2003
David J. Barnes; Tim Hopkins
We look in detail at an individual-based simulation of the spread of barley yellow dwarf virus. The need for a very large number of individual plants and aphids, along with multiple runs using different model parameters, means that it is important to keep memory and processor requirements within reasonable bounds. We present implementations of the model in both imperative and object-oriented programming languages, particularly noting aspects relating to ease of implementation and runtime performance. Finally, we attempt to quantify the cost of some of the decisions made in terms of their memory and processor time requirements.
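The hypothetical Python sketch below illustrates the memory trade-off discussed above rather than the authors' model: each plant is a full object, and __slots__ stands in for the kind of representation decision that keeps per-individual storage within bounds. The infection rule and the transmission probability are invented placeholders.

import random

class Plant:
    __slots__ = ("infected", "age")    # no per-instance dict: smaller objects

    def __init__(self):
        self.infected = False
        self.age = 0

def step(field, p_transmit=0.2):
    # One crude time step: each infected plant may pass the virus on via a
    # randomly chosen plant (a placeholder for aphid movement).
    for plant in field:
        plant.age += 1
        if plant.infected and random.random() < p_transmit:
            random.choice(field).infected = True

random.seed(1)
field = [Plant() for _ in range(100_000)]
field[0].infected = True
for _ in range(50):
    step(field)
print(sum(p.infected for p in field), "plants infected")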
ACM Transactions on Mathematical Software | 2002
Tim Hopkins
Since 1960 the Association for Computing Machinery has published a series of refereed algorithm implementations known as the Collected Algorithms of the ACM (CALGO). Most of those published since 1975 are mathematical algorithms, and many of them remain useful today. In this paper we describe measures that have been taken to bring some 300 of these latter codes to an up-to-date and consistent state.
Software: Practice and Experience | 1996
Tim Hopkins
We use knot count and path count metrics to identify which routines in the Level 1 Basic Linear Algebra Subprograms (BLAS) might benefit from code restructuring. We then consider how logical restructuring, together with the improved facilities available in successive versions of Fortran, has allowed us to reduce the complexity of the code, as measured by knot count, path count and cyclomatic complexity, and to improve the user interface of one of the identified routines, which computes the Euclidean norm of a vector. We hope that these reductions in complexity contribute to the maintainability and clarity of the code. Software complexity metrics and the control graph are used to quantify, and to provide a visual guide to, the quality of the software, and the performance of two Fortran code-restructuring tools is reported. Finally, we give some indication of the cost of the extra numerical robustness offered by the BLAS routine over the new Fortran 90 intrinsic functions.
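The robustness issue mentioned above can be shown with a short sketch, assuming Python floats as a stand-in for Fortran reals: a naive sum of squares overflows for vectors whose norm is perfectly representable, whereas a scaled accumulation in the style of the BLAS *NRM2 routines does not. This illustrates the idea only; it is not the BLAS code.

import math

def naive_norm(x):
    return math.sqrt(sum(v * v for v in x))    # v*v may overflow to inf

def scaled_norm(x):
    # Accumulate (v/scale)^2 against a running scale, as *NRM2 does.
    scale, ssq = 0.0, 1.0
    for v in x:
        av = abs(v)
        if av == 0.0:
            continue
        if av > scale:                         # rescale the running sum
            ssq = 1.0 + ssq * (scale / av) ** 2
            scale = av
        else:
            ssq += (av / scale) ** 2
    return scale * math.sqrt(ssq)

x = [1e200, 1e200]            # true norm ~1.414e200, well within range
print(scaled_norm(x))         # 1.4142...e200
print(naive_norm(x))          # inf: the intermediate squares overflow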
Computer Networks and ISDN Systems | 1998
Mark Russell; Tim Hopkins
By analyzing the log files generated by the UK National Web Cache and by a number of origin FTP sites we provide evidence that an FTP proxy cache with knowledge of local (national) mirror sites could significantly reduce the amount of data that needs to be transferred across already overused networks. We then describe the design and implementation of CFTP, a caching FTP server, and report on its usage over the first 10 months of its deployment. Finally we discuss a number of ways in which the software could be further enhanced to improve both its efficiency and its usability.
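A hypothetical sketch of the mirror-awareness idea follows; the mirror table, hostnames and path prefixes are invented for illustration, and CFTP's actual design is not reproduced here.

from urllib.parse import urlparse

# Invented origin -> (mirror host, path prefix) table.
MIRRORS = {
    "ftp.gnu.org": ("ftp.mirrorservice.org", "/sites/ftp.gnu.org"),
}

def rewrite_to_mirror(url: str) -> str:
    # If a local mirror is known for the origin host, rewrite the request;
    # otherwise fall through to fetching (and caching) from the origin.
    u = urlparse(url)
    if u.hostname in MIRRORS:
        host, prefix = MIRRORS[u.hostname]
        return f"ftp://{host}{prefix}{u.path}"
    return url

print(rewrite_to_mirror("ftp://ftp.gnu.org/gnu/hello/hello-2.12.tar.gz"))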
Archive | 2000
David J. Barnes; Tim Hopkins
We investigate the evolution of a medium-sized software package, LAPACK, through its public releases over the last six years and establish a correlation, at a subprogram level, between a simply computable software metric value and the number of coding errors detected in the released routines. We also quantify the code changes made between issues of the package and attempt to categorize the reasons for these changes.
Modern Software Tools for Scientific Computing | 1997
Tim Hopkins
We begin by using a software metric tool to generate a number of software complexity measures, and we investigate how these values may be used to identify subroutines that are likely to be of substandard quality.
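As an analogous illustration only (the tools used in this work analysed Fortran and report knot and path counts as well), the Python sketch below computes an approximate cyclomatic complexity for each function as one plus the number of branch points found in its syntax tree.

import ast

BRANCHES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
            ast.BoolOp, ast.IfExp)

def cyclomatic(source: str) -> dict:
    # Approximate McCabe complexity: 1 + number of decision points.
    tree = ast.parse(source)
    scores = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            score = 1 + sum(isinstance(n, BRANCHES) for n in ast.walk(node))
            scores[node.name] = score
    return scores

code = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic(code))   # {'classify': 3}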
ACM Transactions on Mathematical Software | 2002
Tim Hopkins
We report on a number of coding problems that occur frequently in published CALGO software and are still appearing in new algorithm submissions. Using Algorithm 639 as an extended example, we describe how these types of faults may be almost entirely eliminated using available commercial compilers and software tools. We consider the levels of testing required to instil confidence that code performs reliably. Finally, we look at how the source code may be re-engineered, and thus made more maintainable, by taking account of advances in hardware and language development.