Kenneth A. Hawick
University of Hull
Publication
Featured research published by Kenneth A. Hawick.
Parallel Computing | 2010
Kenneth A. Hawick; Arno Leist; Daniel P. Playne
Graph component labelling, which is a subset of the general graph colouring problem, is a computationally expensive operation that is of importance in many applications and simulations. A number of data-parallel algorithmic variations to the component labelling problem are possible and we explore their use with general purpose graphical processing units (GPGPUs) and with the CUDA GPU programming language. We discuss implementation issues and performance results on GPUs using CUDA. We present results for regular mesh graphs as well as arbitrary structured and topical graphs such as small-world and scale-free structures. We show how different algorithmic variations can be used to best effect depending upon the cluster structure of the graph being labelled and consider how features of the GPU architectures and host CPUs can be combined to best effect into a cluster component labelling algorithm for use in high performance simulations.
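The label-propagation idea underlying these data-parallel variants can be illustrated sequentially: every vertex starts with a unique label and repeatedly adopts the smallest label among itself and its neighbours until a whole sweep changes nothing. The Java sketch below is a minimal sequential illustration of that idea, not the paper's CUDA implementation; in the GPU versions each sweep would be performed with one thread per vertex.

import java.util.List;

public class ComponentLabelling {
    /**
     * Iterative label propagation: every vertex starts with its own index as
     * its label and repeatedly adopts the smallest label among itself and its
     * neighbours.  When a full sweep changes nothing, vertices sharing a
     * label form one connected component.
     */
    static int[] labelComponents(List<List<Integer>> adjacency) {
        int n = adjacency.size();
        int[] label = new int[n];
        for (int v = 0; v < n; v++) label[v] = v;   // initial unique labels

        boolean changed = true;
        while (changed) {
            changed = false;
            for (int v = 0; v < n; v++) {
                int best = label[v];
                for (int u : adjacency.get(v)) {
                    if (label[u] < best) best = label[u];
                }
                if (best < label[v]) {
                    label[v] = best;
                    changed = true;
                }
            }
        }
        return label;
    }

    public static void main(String[] args) {
        // Two components: {0, 1, 2} and {3, 4}.
        List<List<Integer>> g = List.of(
            List.of(1), List.of(0, 2), List.of(1),
            List.of(4), List.of(3));
        System.out.println(java.util.Arrays.toString(labelComponents(g)));
        // prints [0, 0, 0, 3, 3]
    }
}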
Proceedings of the ACM 1999 Conference on Java Grande | 1999
J. A. Mathew; Paul D. Coddington; Kenneth A. Hawick
Understanding the performance behaviour of Java Virtual Machines is important to Java systems developers and to applications developers. We review a range of Java benchmark programs, including those collected by the Java Grande Forum as useful performance indicators for large-scale scientific applications, or “Java Grande applications”. We systematically analyse these benchmarks on a collection of compute platforms including Pentium II, UltraSPARC II, Silicon Graphics, iMac and Alpha, and operating systems including Windows NT, Solaris, and Linux, and consider differences in performance between benchmarks running on the Java Development Kit (JDK) versions 1.1.6 and 1.2. We also analyse some benchmark programs we have developed to augment existing collections. We observe some general trends in the performance of Java on various platforms and also conduct some comparisons between benchmarks written in Java and in traditional languages such as C and Fortran. We discuss performance variations across systems and the implications of Java systems performance for those developing scientific applications.
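As a rough illustration of how such micro-benchmarks operate, the Java sketch below times a dense matrix-vector kernel after a warm-up phase so that JIT compilation does not distort the measurement. The kernel, problem size and iteration counts are illustrative choices, not taken from the Java Grande suite.

public class SimpleBenchmark {
    // Illustrative kernel: dense matrix-vector multiply, y = A * x.
    static void matVec(double[][] a, double[] x, double[] y) {
        for (int i = 0; i < a.length; i++) {
            double s = 0.0;
            for (int j = 0; j < x.length; j++) s += a[i][j] * x[j];
            y[i] = s;
        }
    }

    public static void main(String[] args) {
        int n = 512, warmup = 20, reps = 100;
        double[][] a = new double[n][n];
        double[] x = new double[n], y = new double[n];
        for (int i = 0; i < n; i++) {
            x[i] = 1.0;
            for (int j = 0; j < n; j++) a[i][j] = (i + j) % 7;
        }

        // Warm-up runs give the JIT compiler a chance to optimise the kernel
        // before any timing is taken.
        for (int r = 0; r < warmup; r++) matVec(a, x, y);

        long t0 = System.nanoTime();
        for (int r = 0; r < reps; r++) matVec(a, x, y);
        double seconds = (System.nanoTime() - t0) / 1e9;

        double flops = 2.0 * n * n * reps;   // one multiply + one add per matrix element
        System.out.printf("%.1f Mflop/s (%.3f s)%n", flops / seconds / 1e6, seconds);
    }
}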
Future Generation Computer Systems | 1999
Kenneth A. Hawick; Heath A. James; A. J. Silis; Duncan A. Grove; Craig J. Patten; J. A. Mathew; Paul D. Coddington; K. E. Kerry; J. F. Hercus; F. A. Vaughan
We describe our DISCWorld system for wide-area, high-performance metacomputing in which we adopt a high-level, service-based approach. Users’ client programs request combinations of services from a set of server nodes which communicate at a peer-based level. DISCWorld is a constrained metacomputing system, running only the service operations its participating resource administrators have chosen to provide and advertise, and provides a common integration environment for clients to access these services and developers to make them available. We discuss our software architecture and experiences building DISCWorld using Java and CORBA components, and the associated research issues for metacomputing that we are addressing.
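A minimal sketch of the service-based idea in plain Java: a server node only answers requests for the services its administrator has registered, and a client composes them by name. The interface and class names here are hypothetical, not the actual DISCWorld or CORBA API.

import java.util.HashMap;
import java.util.Map;

public class ServiceNodeSketch {
    interface Service {
        Object execute(Object input);
    }

    static class ServerNode {
        private final Map<String, Service> advertised = new HashMap<>();

        // A node only offers the operations its administrator registers.
        void advertise(String name, Service s) { advertised.put(name, s); }

        Object request(String name, Object input) {
            Service s = advertised.get(name);
            if (s == null) throw new IllegalArgumentException("service not offered: " + name);
            return s.execute(input);
        }
    }

    public static void main(String[] args) {
        ServerNode node = new ServerNode();
        node.advertise("uppercase", in -> in.toString().toUpperCase());
        node.advertise("reverse", in -> new StringBuilder(in.toString()).reverse().toString());

        // A client chains advertised services into a composite request.
        Object result = node.request("reverse", node.request("uppercase", "discworld"));
        System.out.println(result);   // DLROWCSID
    }
}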
IEEE International Conference on High Performance Computing, Data, and Analytics | 1998
K. E. Kerry; Kenneth A. Hawick
We discuss Kriging interpolation on high-performance computers as a method for spatial data interpolation. We analyse algorithms for implementing Kriging on high-performance and distributed architectures. In addition to a number of test problems, we focus on an application of comparing rainfall measurements with satellite imagery. We discuss our hardware and software system and the resulting performance on the basis of the Kriging algorithm complexity. We also discuss our results in relation to selection of an appropriate execution target according to the data parameter sizes. We consider the implications of providing computational servers for processing data using the data interpolation method we describe. We describe the project context for this work, which involves prototyping a data processing and delivery system making use of on-line data archives and processing services made available on-demand using World Wide Web protocols.
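Ordinary Kriging estimates the value at an unsampled point as a weighted sum of nearby observations, with the weights obtained by solving a small linear system built from a variogram model; that per-point solve is where the algorithm complexity mentioned above comes from. The Java sketch below shows the method under standard assumptions (an exponential variogram, Gaussian elimination); the variogram parameters and sample data are illustrative only.

public class OrdinaryKriging {
    // Exponential variogram model; sill and range are illustrative parameters.
    static double gamma(double h, double sill, double range) {
        return sill * (1.0 - Math.exp(-h / range));
    }

    static double dist(double[] a, double[] b) {
        return Math.hypot(a[0] - b[0], a[1] - b[1]);
    }

    /** Solve A w = b by Gaussian elimination with partial pivoting. */
    static double[] solve(double[][] a, double[] b) {
        int n = b.length;
        for (int k = 0; k < n; k++) {
            int p = k;
            for (int i = k + 1; i < n; i++)
                if (Math.abs(a[i][k]) > Math.abs(a[p][k])) p = i;
            double[] tr = a[k]; a[k] = a[p]; a[p] = tr;
            double tb = b[k]; b[k] = b[p]; b[p] = tb;
            for (int i = k + 1; i < n; i++) {
                double f = a[i][k] / a[k][k];
                for (int j = k; j < n; j++) a[i][j] -= f * a[k][j];
                b[i] -= f * b[k];
            }
        }
        double[] w = new double[n];
        for (int i = n - 1; i >= 0; i--) {
            double s = b[i];
            for (int j = i + 1; j < n; j++) s -= a[i][j] * w[j];
            w[i] = s / a[i][i];
        }
        return w;
    }

    /** Ordinary Kriging estimate at target from sample locations xs and values zs. */
    static double krige(double[][] xs, double[] zs, double[] target,
                        double sill, double range) {
        int n = zs.length;
        double[][] a = new double[n + 1][n + 1];
        double[] b = new double[n + 1];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++)
                a[i][j] = gamma(dist(xs[i], xs[j]), sill, range);
            a[i][n] = 1.0;           // Lagrange multiplier column
            a[n][i] = 1.0;           // unbiasedness constraint: weights sum to 1
            b[i] = gamma(dist(xs[i], target), sill, range);
        }
        b[n] = 1.0;
        double[] w = solve(a, b);    // last entry is the Lagrange multiplier
        double estimate = 0.0;
        for (int i = 0; i < n; i++) estimate += w[i] * zs[i];
        return estimate;
    }

    public static void main(String[] args) {
        double[][] xs = {{0, 0}, {1, 0}, {0, 1}, {1, 1}};   // e.g. rain gauge locations
        double[] zs = {10.0, 12.0, 11.0, 14.0};             // measured rainfall
        // Symmetric layout around the target gives equal weights, so the
        // estimate equals the sample mean, 11.75.
        System.out.println(krige(xs, zs, new double[]{0.5, 0.5}, 1.0, 2.0));
    }
}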
International Parallel and Distributed Processing Symposium | 2001
Omer Farooq Rana; Daniel Bunford-Jones; David W. Walker; Matthew Addis; Mike Surridge; Kenneth A. Hawick
We describe a decentralised approach to resource management and discovery, based on a community of interacting software agents. Each agent represents a user application, a resource, or a MatchMaking service. The proposed approach can support dynamic registration of resources and user tasks, facilitating the establishment of dynamic clusters. Resource capabilities and task requirements are described using an object-based data model, enabling new types of devices or new features in existing devices to be identified. A comparison with the Discovery and LookUp services in Jini and TSpaces is also provided.
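A minimal sketch of the matchmaking step, assuming a simple attribute map as the object-based description: resources register their capabilities, tasks state minimum requirements, and the matchmaker returns a resource that satisfies every requirement. All class and attribute names here are hypothetical.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class MatchMakerSketch {
    record Resource(String name, Map<String, Integer> capabilities) {}
    record Task(String name, Map<String, Integer> requirements) {}

    static class MatchMaker {
        private final List<Resource> registry = new ArrayList<>();

        void register(Resource r) { registry.add(r); }   // dynamic registration

        Resource match(Task t) {
            for (Resource r : registry) {
                boolean ok = t.requirements().entrySet().stream()
                        .allMatch(e -> r.capabilities().getOrDefault(e.getKey(), 0) >= e.getValue());
                if (ok) return r;
            }
            return null;   // no registered resource currently satisfies the task
        }
    }

    public static void main(String[] args) {
        MatchMaker mm = new MatchMaker();
        mm.register(new Resource("nodeA", Map.of("cpus", 4, "memoryMB", 2048)));
        mm.register(new Resource("nodeB", Map.of("cpus", 16, "memoryMB", 8192)));

        Task t = new Task("render", Map.of("cpus", 8));
        System.out.println(mm.match(t).name());   // nodeB
    }
}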
International Journal of Parallel Programming | 2011
Kenneth A. Hawick; Arno Leist; Daniel P. Playne
Data-parallel accelerator devices such as Graphical Processing Units (GPUs) are providing dramatic performance improvements over even multi-core CPUs for lattice-oriented applications in computational physics. Models such as the Ising and Potts models continue to play a role in investigating phase transitions on small-world and scale-free graph structures. These models are particularly well-suited to the performance gains possible using GPUs and relatively high-level device programming languages such as NVIDIA’s Compute Unified Device Architecture (CUDA). We report on algorithms and CUDA data-parallel programming techniques for implementing Metropolis Monte Carlo updates for the Ising model using bit-packing storage, and adjacency neighbour lists for various graph structures in addition to regular hypercubic lattices. We report on parallel performance gains and also memory and performance tradeoffs using GPU/CPU and algorithmic combinations.
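For reference, a single Metropolis sweep of the Ising model on a regular 2D lattice can be sketched as below, using plain integer spins rather than the bit-packed storage discussed in the paper; in the data-parallel CUDA versions, non-interacting subsets of sites (for example a chequerboard decomposition on regular lattices) are updated concurrently. Lattice size, temperature and sweep count are illustrative.

import java.util.Random;

public class IsingMetropolis {
    public static void main(String[] args) {
        int L = 64;                      // illustrative lattice size
        double T = 2.27;                 // temperature near the 2D critical point
        int sweeps = 1000;
        Random rng = new Random(42);

        int[][] s = new int[L][L];
        for (int i = 0; i < L; i++)
            for (int j = 0; j < L; j++)
                s[i][j] = rng.nextBoolean() ? 1 : -1;

        for (int sweep = 0; sweep < sweeps; sweep++) {
            for (int k = 0; k < L * L; k++) {
                int i = rng.nextInt(L), j = rng.nextInt(L);
                // Sum of the four nearest neighbours with periodic boundaries.
                int nn = s[(i + 1) % L][j] + s[(i + L - 1) % L][j]
                       + s[i][(j + 1) % L] + s[i][(j + L - 1) % L];
                int dE = 2 * s[i][j] * nn;   // energy change of flipping spin (i, j), with J = 1
                // Metropolis acceptance: always take downhill moves, uphill with probability exp(-dE/T).
                if (dE <= 0 || rng.nextDouble() < Math.exp(-dE / T)) {
                    s[i][j] = -s[i][j];
                }
            }
        }

        double m = 0;
        for (int[] row : s) for (int v : row) m += v;
        System.out.printf("magnetisation per spin after %d sweeps: %.3f%n", sweeps, m / (L * L));
    }
}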
Conference on High Performance Computing (Supercomputing) | 1997
Kenneth A. Hawick; Heath A. James
We describe distributed and parallel algorithms for processing remotely sensed data such as geostationary satellite imagery. We have built a distributed data repository based around the client-server computing model across wide-area ATM networks, with embedded parallel and high performance processing modules. We focus on algorithms for classification, georectification, correlation and histogram analysis of the data. We consider characteristics of image data collected from the Japanese GMS5 geostationary meteorological satellite, and some analysis techniques we have applied to it. As well as providing a browsing interface to our data collection, our system provides processing and analysis services on-demand.
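One of the simpler analyses listed, histogram analysis, illustrates how such processing parallelises: each tile of a partitioned image can be histogrammed independently and the per-tile counts summed. A minimal Java sketch, assuming an 8-bit band and synthetic pixel values:

public class ImageHistogram {
    // Grey-level histogram of one band tile; pixel values assumed in [0, levels).
    static long[] histogram(int[][] band, int levels) {
        long[] counts = new long[levels];
        for (int[] row : band)
            for (int pixel : row)
                counts[pixel]++;
        return counts;
    }

    public static void main(String[] args) {
        // A tiny synthetic tile standing in for part of a GMS5 band image.
        int[][] tile = {{0, 1, 1}, {2, 2, 2}, {255, 1, 0}};
        long[] h = histogram(tile, 256);
        // Per-tile histograms like this one can be summed across tiles to give
        // the histogram of the whole image.
        System.out.printf("level 1: %d pixels, level 2: %d pixels, level 255: %d pixels%n",
                h[1], h[2], h[255]);
    }
}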
Technology of Object-Oriented Languages and Systems | 1998
Paul D. Coddington; Kenneth A. Hawick; K. E. Kerry; J. A. Mathew; A. J. Silis; Darren Webb; P. J. Whitbread; C. G. Irving; M. W. Grigg; R. Jana; K. Tang
We have implemented a prototype distributed system for managing and accessing a digital library of geospatial imagery over a wide-area network. The system conforms to a subset of the Geospatial and Imagery Access Services (GIAS) specification from the U.S. National Imagery and Mapping Agency (NIMA), which defines an object-oriented application programming interface (API) using the Common Object Request Broker Architecture (CORBA) for remote access to the image server. The GIAS specification is being explored by the military in both the U.S. and Australia as a means for creating widely accessible imagery repositories, and also provides a convenient API for interfacing to repositories of geospatial images, such as satellite data archives, for a variety of commercial and research applications. Our prototype GIAS implementation was developed using StudioCentral from Silicon Graphics Inc., which provides a set of C++ class libraries for building digital multimedia repositories. We discuss the issues and problems involved in developing this system using CORBA, Java and C++ native methods, within the constraints of the GIAS specification.
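As a rough indication of the shape of such an image-access API, the Java sketch below models a catalogue search followed by retrieval of pixel data. The interface and method names are hypothetical stand-ins; the real GIAS specification is expressed as CORBA IDL and is considerably richer.

import java.util.List;

public class ImageLibrarySketch {
    record ImageRecord(String id, String sensor, double lat, double lon) {}

    interface ImageLibrary {
        /** Query the catalogue for records whose footprint covers a point. */
        List<ImageRecord> search(double lat, double lon);

        /** Retrieve the pixel data for a catalogued image. */
        byte[] retrieve(String imageId);
    }

    public static void main(String[] args) {
        // A trivial in-memory stand-in for a remote image server.
        ImageLibrary library = new ImageLibrary() {
            public List<ImageRecord> search(double lat, double lon) {
                return List.of(new ImageRecord("img-001", "GMS5", lat, lon));
            }
            public byte[] retrieve(String imageId) {
                return new byte[]{0, 1, 2, 3};   // placeholder pixel bytes
            }
        };

        ImageRecord hit = library.search(-34.9, 138.6).get(0);
        System.out.println(hit.id() + ": " + library.retrieve(hit.id()).length + " bytes");
    }
}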
IEEE International Conference on High Performance Computing, Data, and Analytics | 1997
Kenneth A. Hawick; Heath A. James; Kevin Maciunas; Francis Vaughan; Andrew L. Wendelborn; M. Buchhorn; M. Rezny; S. R. Taylor; M. D. Wilson
We present a distributed geographic information system (DGIS) built on a distributed high performance computing environment[1] using a number of software infrastructural building blocks and computational resources interconnected by an ATM-based broadband network. Archiving, access and processing of scientific data are discussed in the context of geographic and environmental applications with special emphasis on the potential for local-area weather, agriculture, soil and land management products. In particular, we discuss the capabilities of a distributed high-performance environment incorporating: high bandwidth communications networks such as Telstra's Experimental Broadband Network (EBN)[3]; large capacity hierarchical storage systems; and high performance parallel computing resources.
Computational Science and Engineering | 1995
Shirley Browne; Jack J. Dongarra; Stan Green; Keith Moore; Tom Rowan; Reed Wade; Geoffrey C. Fox; Kenneth A. Hawick; Ken Kennedy; J. Pool; R. Stevens; B. Olson; T. Disz
Helping the high-performance computing and communications (HPCC) community to share software and information, the National HPCC Software Exchange (NHSE) is an Internet-accessible resource that promotes the exchange of software and information among those involved with HPCC. Now in its infancy, the NHSE will link varied discipline-oriented repositories of software and documents, and encourage Grand Challenge teams and other members of the HPCC community to contribute to these repositories and use them. By acting as a national online library of software that makes widely distributed materials available through one place, the exchange will cut down the amount of time, talent and money spent reinventing the wheel. The target audiences for the NHSE include scientists and engineers in diverse HPCC application fields, computer scientists, users of government and academic supercomputer centers, and industrial users.