Publication


Featured research published by Heath A. James.


Conference on High Performance Computing (Supercomputing) | 1997

Distributed High Performance Computation for Remote Sensing

Kenneth A. Hawick; Heath A. James

We describe distributed and parallel algorithms for processing remotely sensed data such as geostationary satellite imagery. We have built a distributed data repository based around the client-server computing model across wide-area ATM networks, with embedded parallel and high performance processing modules. We focus on algorithms for classification, georectification, correlation and histogram analysis of the data. We consider characteristics of image data collected from the Japanese GMS5 geostationary meteorological satellite, and some analysis techniques we have applied to it. As well as providing a browsing interface to our data collection, our system provides processing and analysis services on-demand.
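One of the analysis services the abstract names is histogram analysis of image data. As a minimal sketch of that idea (the bin layout, the `histogram` helper, and the synthetic pixel values below are invented for illustration, not taken from the paper's system):

```python
# Illustrative sketch: brightness histogram of a synthetic satellite image band.
# Real GMS5 imagery would be a large 2-D raster; a short list stands in here.

def histogram(pixels, bins=4, lo=0, hi=256):
    """Count pixel values into equal-width brightness bins over [lo, hi)."""
    width = (hi - lo) / bins
    counts = [0] * bins
    for p in pixels:
        # Clamp the top edge so a value of hi-1 lands in the last bin.
        idx = min(int((p - lo) / width), bins - 1)
        counts[idx] += 1
    return counts

# Synthetic 0-255 brightness values standing in for one image band.
image = [10, 30, 70, 90, 130, 150, 200, 250]
print(histogram(image))  # -> [2, 2, 2, 2]
```

In a distributed setting such a per-band histogram is convenient because partial counts computed on separate nodes can simply be summed.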


IEEE International Conference on High Performance Computing, Data, and Analytics | 1997

Geographic Information Systems Application on an ATM-based Distributed High Performance Computing System

Kenneth A. Hawick; Heath A. James; Kevin Maciunas; Francis Vaughan; Andrew L. Wendelborn; M. Buchhorn; M. Rezny; S. R. Taylor; M. D. Wilson

We present a distributed geographic information system (DGIS) built on a distributed high performance computing environment[1] using a number of software infrastructural building blocks and computational resources interconnected by an ATM-based broadband network. Archiving, access and processing of scientific data are discussed in the context of geographic and environmental applications with special emphasis on the potential for local-area weather, agriculture, soil and land management products. In particular, we discuss the capabilities of a distributed high-performance environment incorporating: high bandwidth communications networks such as Telstra's Experimental Broadband Network (EBN)[3]; large capacity hierarchical storage systems; and high performance parallel computing resources.


IEEE International Conference on High Performance Computing, Data, and Analytics | 1998

DISCWorld: A Distributed High Performance Computing Environment

Kenneth A. Hawick; Heath A. James; Craig J. Patten; Francis Vaughan

An increasing number of science and engineering applications require distributed and parallel computing resources to satisfy user response-time requirements. Distributed science and engineering applications require a high performance “middleware” which will both allow the embedding of legacy applications and enable new distributed programs, and which allows the best use of existing and specialised (parallel) computing resources. We are developing a distributed information systems control environment which will meet the needs of a middleware for scientific applications. We describe our DISCWorld system and some of its key attributes. A critical attribute is architecture scalability. We discuss DISCWorld in the context of some existing middleware systems such as CORBA and other distributed computing research systems such as Legion and Globus. Our approach is to embed applications in the middleware as services, which can be chained together. User interfaces are provided in the form of Java Applets downloadable across the World Wide Web. These form a gateway for user-requests to be transmitted into a semi-opaque “cloud” of high-performance resources for distributed execution.


Australian Conference on Artificial Life | 2007

Structural circuits and attractors in Kauffman networks

Kenneth A. Hawick; Heath A. James; Chris Scogings

There has been some ambiguity about the growth of attractors in Kauffman networks with network size. Some recent work has linked this to the role and growth of circuits or loops of boolean variables. Using numerical methods we have investigated the growth of structural circuits in Kauffman networks and suggest that the exponential growth in the number of structural circuits places a lower bound on the complexity of the growth of boolean dependency loops and hence of the number of attractors. We use a fast and exact circuit enumeration method that does not rely on sampling trajectories. We also explore the role of structural self-edges, or self-inputs in the NK-model, and how they affect the number of structural circuits and hence of attractors.
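The NK-model dynamics the abstract refers to can be sketched in a few lines: N Boolean nodes, each reading K inputs through a random truth table, iterated until the deterministic trajectory revisits a state, which closes an attractor cycle. This samples a single trajectory for illustration; the paper's own method is an exact circuit enumeration that does not rely on trajectory sampling, and all parameters below are invented:

```python
# Minimal NK Boolean (Kauffman) network sketch: random wiring and random
# Boolean functions, fixed by a seed for reproducibility.

import random

def make_network(n, k, seed=0):
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    # One truth table per node: an output bit for each of the 2**k input combos.
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    new = []
    for ins, table in zip(inputs, tables):
        idx = 0
        for i in ins:
            idx = (idx << 1) | state[i]  # pack this node's K input bits
        new.append(table[idx])
    return tuple(new)

def attractor_length(state, inputs, tables):
    """Iterate until a state repeats; return the cycle length reached."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state, inputs, tables)
        t += 1
    return t - seen[state]

inputs, tables = make_network(n=6, k=2, seed=1)
length = attractor_length((0,) * 6, inputs, tables)
print("attractor length:", length)
```

Because the state space is finite (2**N states) and the dynamics deterministic, every trajectory must eventually enter a cycle, so the loop always terminates.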


IEEE International Conference on High Performance Computing, Data, and Analytics | 2000

A Java-Based Parallel Programming Support Environment

Kenneth A. Hawick; Heath A. James

We have prototyped a multi-paradigm parallel programming toolkit in Java, specifically targeting an integrated approach on cluster computers. Our JUMP system builds on ideas from the message-passing community as well as from distributed systems technologies. The ever-improving Java development environment allows us access to a number of techniques that were not available using the message-passing systems of the past. In addition to the usual object-oriented programming benefits, these include: language reflection; a rich variety of remote and networking techniques; dynamic class-loading; and code portability. We are using our JUMP model framework to research some of the long-sought parallel programming goals of support for parallel I/O, irregular and dynamic domain decomposition and, in particular, irregular mesh support. Our system supports the usual messaging primitives, although in a more natural style for a modern object-oriented program.
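The "messaging primitives in an object style" idea can be modelled compactly: endpoints with inboxes that send and receive whole objects rather than raw buffers. The sketch below uses Python threads and queues purely as a stand-in; the `Endpoint` class and its methods are invented for illustration and are not JUMP's API:

```python
# Toy model of object-style message passing between two "processes",
# modelled with threads and thread-safe queues on one machine.

import queue
import threading

class Endpoint:
    """One 'process' with an inbox; send/recv pass whole Python objects."""
    def __init__(self):
        self.inbox = queue.Queue()

    def send(self, dest, msg):
        dest.inbox.put(msg)

    def recv(self, timeout=5):
        return self.inbox.get(timeout=timeout)

a, b = Endpoint(), Endpoint()

def worker():
    msg = b.recv()          # block until a message arrives at b
    b.send(a, {"echo": msg})  # reply to a with a structured object

t = threading.Thread(target=worker)
t.start()
a.send(b, "ping")
reply = a.recv()
t.join()
print(reply)  # -> {'echo': 'ping'}
```

The point of the object style is visible even in this toy: the payload is a typed object, not a flat buffer plus a length and a datatype tag as in classic MPI bindings.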


Proceedings of the ACM 2000 conference on Java Grande | 2000

Development routes for message passing parallelism in Java

J. A. Mathew; Heath A. James; Kenneth A. Hawick

Java is an attractive environment for writing portable message passing parallel programs. Considerable work in message passing interface bindings for the C and Fortran languages has been done. We show how this work can be reused and bindings for Java developed. We have built a Pure Java Message Passing Implementation (PJMPI) that is strongly compatible with the MPI standard. Conversely, the imperative programming style bindings are not entirely appropriate for the Java programming style and we have therefore also developed a less compatible system, known as JUMP, that enables many of the message passing parallel technological ideas but in a way that we believe will be more appropriate to the style of Java programs. JUMP is also intended as a development platform for many of our higher level ideas in parallel programming and parallel paradigms that MPI enables but does not directly implement. We review ongoing attempts at resolving this present crisis in reconciling Java and MPI. We have looked at some of the more advanced Java technologies, specifically Jini and JavaSpaces, which may contribute to Java message passing, but have found the performance of these to be somewhat deficient at the time of writing. We have therefore designed JUMP to be independent of Jini and JavaSpaces at present although use of these technologies may be strongly desirable. We describe the ClassLoading problem and other techniques we have employed in JUMP to enable a pure Java message passing system suitable for use on local and remote clusters amongst other parallel computing platforms.


Hawaii International Conference on System Sciences | 2001

An environment for workflow applications on wide-area distributed systems

Heath A. James; Kenneth A. Hawick; Paul D. Coddington

Workflow techniques are emerging as an important approach for the specification and management of complex processing tasks. This approach is especially powerful for utilising distributed data and processing resources in widely-distributed heterogeneous systems. We describe our DISCWorld distributed workflow environment for composing complex processing chains, which are specified as a directed acyclic graph of operators. Users of our system can formulate processing chains using either graphical or scripting tools. We have deployed our system for image processing applications and decision support systems. We describe the technologies we have developed to enable the execution of these processing chains across wide-area computing systems. In particular, we present our Distributed Job Placement Language (based on XML) and various Java interface approaches we have developed for implementing the workflow metaphor. We outline a number of key issues for implementing a high-performance, reliable, distributed workflow management system.
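The core execution model the abstract describes, a processing chain as a directed acyclic graph of operators, can be sketched as a topological-order evaluation. The operator names and functions below are invented for illustration; DISCWorld's actual Distributed Job Placement Language is XML-based and far richer than this:

```python
# Toy workflow: each named operator is (function, list of upstream operators
# whose results it consumes). Executing in topological order guarantees every
# operator's inputs are ready before it runs.

from graphlib import TopologicalSorter

operators = {
    "load":   (lambda: [3, 1, 2],                   []),
    "sort":   (lambda xs: sorted(xs),               ["load"]),
    "sum":    (lambda xs: sum(xs),                  ["load"]),
    "report": (lambda s, t: f"sorted={s} sum={t}",  ["sort", "sum"]),
}

def run_chain(operators):
    deps = {name: spec[1] for name, spec in operators.items()}
    results = {}
    for name in TopologicalSorter(deps).static_order():
        fn, ins = operators[name]
        results[name] = fn(*(results[i] for i in ins))
    return results

print(run_chain(operators)["report"])  # -> sorted=[1, 2, 3] sum=6
```

In a distributed setting the same DAG also exposes parallelism: here "sort" and "sum" depend only on "load", so they could run concurrently on different nodes.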


Foundations of Computer Science | 2001

Dynamic cluster configuration and management using JavaSpaces

Kenneth A. Hawick; Heath A. James

Managing dynamic clusters that grow or shrink in their number of node members is a challenging problem. We discuss the issues and approaches to this problem and report a simple model for scheduling tasks on a dynamic cluster. We show how this model can be implemented as a configuration management system using the features and capabilities of the Java language and development environment. We describe our experiences in constructing a dynamic cluster control system using the Java, Jini and JavaSpaces technologies. We report on our use of Join and Depart space tuple entries and use of the leases mechanism to partially address the problem of node failure.
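The Join/Depart-with-leases idea can be modelled without any Jini machinery: a node registers with a lease, and a node that fails to renew before its lease expires silently drops out of the cluster view, which is how crashed nodes are handled. The `ClusterSpace` class and its methods below are invented for illustration and are plain Python, not the JavaSpaces API:

```python
# Toy lease-based membership model. Times are passed explicitly (now=...)
# so the expiry behaviour is deterministic and easy to test.

import time

class ClusterSpace:
    def __init__(self):
        self._leases = {}  # node name -> lease expiry time

    def join(self, node, lease_secs, now=None):
        now = time.time() if now is None else now
        self._leases[node] = now + lease_secs

    def renew(self, node, lease_secs, now=None):
        # Renewal is just a fresh Join entry with a new expiry.
        self.join(node, lease_secs, now)

    def depart(self, node):
        # Explicit Depart: remove the node's entry immediately.
        self._leases.pop(node, None)

    def members(self, now=None):
        now = time.time() if now is None else now
        # Nodes whose lease has lapsed are treated as failed and excluded.
        return sorted(n for n, exp in self._leases.items() if exp > now)

space = ClusterSpace()
space.join("node-a", lease_secs=10, now=0)
space.join("node-b", lease_secs=2, now=0)
print(space.members(now=5))  # -> ['node-a']  (node-b's lease lapsed)
```

The appeal of leases is that failure handling needs no failure detector: a crashed node simply stops renewing, and its membership entry ages out on its own.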


IEEE International Conference on High Performance Computing, Data, and Analytics | 1997

Geostationary-satellite imagery applications on distributed, high-performance computing

Kenneth A. Hawick; Heath A. James; Kevin Maciunas; Francis Vaughan; Andrew L. Wendelborn; M. Buchhorn; M. Rezny; S. R. Taylor; M. D. Wilson

We discuss applications of high resolution geostationary satellite imagery and distributed high performance computing facilities for the storage, processing and delivery of satellite data products. We describe our system, which is built on a distributed high performance computing environment using a number of software infrastructural building blocks and computational resources interconnected by an ATM based broadband network. Distributed high performance computing hardware technology underpins our proposed system. In particular we discuss the capabilities of a distributed hardware environment incorporating: high bandwidth communications networks such as Telstra's Experimental Broadband Network (EBN); large capacity hierarchical storage systems; and high performance parallel computing resources. We also describe a recent demonstration of our project resources to the remote sensing user community.


IEEE International Conference on High Performance Computing, Data, and Analytics | 1997

An ATM-based Distributed High Performance Computing System

Kenneth A. Hawick; Heath A. James; Kevin Maciunas; Francis Vaughan; Andrew L. Wendelborn; M. Buchhorn; M. Rezny; S. R. Taylor; M. D. Wilson

We describe the distributed high performance computing system we have developed to integrate together a heterogeneous set of high performance computers, high capacity storage systems and fast communications hardware. Our system is based upon Asynchronous Transfer Mode (ATM) communications technology and we routinely operate between the geographically distant sites of Adelaide and Canberra (separated by some 1100 km) using Telstra's ATM-based Experimental Broadband Network (EBN). We discuss some of the latency and performance issues that result from running day-to-day operations across such a long distance network.

Collaboration


Dive into Heath A. James's collaborations.

Top Co-Authors
M. Buchhorn

Australian National University


M. D. Wilson

Australian National University


M. Rezny

Australian National University
