Publication


Featured research published by Colin Enticott.


IEEE International Conference on High Performance Computing, Data and Analytics | 2008

Nimrod/K: towards massively parallel dynamic grid workflows

David Abramson; Colin Enticott; Ilkay Altinas

A challenge for Grid computing is the difficulty in developing software that is parallel, distributed and highly dynamic. Whilst there have been many general purpose mechanisms developed over the years, Grid programming still remains a low level, error prone task. Scientific workflow engines can double as programming environments, and allow a user to compose 'virtual' Grid applications from pre-existing components. Whilst existing workflow engines can specify arbitrary parallel programs (where components use message passing), they are typically not effective with large and variable parallelism. Here we discuss dynamic dataflow, originally developed for parallel tagged dataflow architectures (TDAs), and show that these can be used for implementing Grid workflows. TDAs spawn parallel threads dynamically without additional programming. We have added TDAs to Kepler, and show that the system can orchestrate workflows that have large amounts of variable parallelism. We demonstrate the system using case studies in chemistry and in cardiac modelling.
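
To make the tagged-dataflow idea more concrete, here is a minimal sketch (illustrative only, not the Nimrod/K or Kepler code; all class and function names are hypothetical): every token carries a tag, an actor fires as soon as each of its input ports holds a token with the same tag, and distinct tags therefore run in parallel without any explicit thread management by the user.

```python
# Minimal sketch of tagged-token dataflow (illustrative only; not the Nimrod/K code).
# Tokens carry a tag; an actor fires when every input port holds a token with that tag,
# so independent tags proceed in parallel without explicit thread management.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor


class TaggedActor:
    def __init__(self, name, ports, func):
        self.name = name
        self.ports = ports                      # input port names
        self.func = func                        # called with one value per port
        self.pending = defaultdict(dict)        # tag -> {port: value}

    def receive(self, tag, port, value, pool):
        """Store a token; fire the actor when all ports are filled for this tag."""
        slot = self.pending[tag]
        slot[port] = value
        if len(slot) == len(self.ports):        # complete input set for this tag
            del self.pending[tag]
            return pool.submit(self.func, tag, **slot)
        return None


def add(tag, a, b):
    return tag, a + b


if __name__ == "__main__":
    adder = TaggedActor("add", ["a", "b"], add)
    with ThreadPoolExecutor() as pool:
        futures = []
        for tag in range(4):                    # four independent "colours"
            adder.receive(tag, "a", tag, pool)
            futures.append(adder.receive(tag, "b", 10 * tag, pool))
        print([f.result() for f in futures])    # the actor fires once per tag
```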


New Generation Computing | 2004

Parameter scan of an effective group difference pseudopotential using grid computing

Wibke Sudholt; Kim K. Baldridge; David Abramson; Colin Enticott; Slavisa Garic

Computational modeling in the health sciences is still very challenging and much of the success has been despite the difficulties involved in integrating all of the technologies, software, and other tools necessary to answer complex questions. Very large-scale problems are open to questions of spatio-temporal scale, and whether physico-chemical complexity is matched by biological complexity. For example, for many reasons, many large-scale biomedical computations today still tend to use rather simplified physics/chemistry compared with the state of knowledge of the actual biology/biochemistry. Modern grid technologies offer the ability to create new paradigms for computing, enabling access to resources that facilitate spanning the biological scale.
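
As an illustration of the parameter-scan pattern this work relies on (a toy sketch, not the Nimrod/G setup or the quantum chemistry code; `run_model` and the parameter names are hypothetical), the core idea is simply to enumerate the cross product of parameter values and run one independent job per combination:

```python
# Minimal parameter-sweep sketch (illustrative; the paper used Nimrod/G on grid resources).
# Enumerate the cross product of parameter values and run one independent job per point.
from itertools import product
from concurrent.futures import ProcessPoolExecutor


def run_model(alpha, cutoff):
    """Stand-in for a single simulation run; a real sweep would launch the chemistry code."""
    return {"alpha": alpha, "cutoff": cutoff, "energy": -(alpha ** 2) / cutoff}


if __name__ == "__main__":
    alphas = [0.1, 0.2, 0.3]             # hypothetical parameter ranges
    cutoffs = [1.0, 2.0]
    points = list(product(alphas, cutoffs))
    with ProcessPoolExecutor() as pool:  # the grid plays this role at much larger scale
        results = list(pool.map(run_model, *zip(*points)))
    best = min(results, key=lambda r: r["energy"])
    print(f"{len(results)} runs; lowest energy at {best}")
```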


Cluster Computing and the Grid | 2006

The PRAGMA Testbed - Building a Multi-Application International Grid

Cindy Zheng; David Abramson; Peter W. Arzberger; Shahaan Ayyub; Colin Enticott; Slavisa Garic; Mason J. Katz; Jae-Hyuck Kwak; Bu-Sung Lee; Philip M. Papadopoulos; Sugree Phatanapherom; Somsak Sriprayoonsakul; Yoshio Tanaka; Yusuke Tanimura; Osamu Tatebe; Putchong Uthayopas

This practices and experience paper describes the coordination, design, implementation, availability, and performance of the Pacific Rim Applications and Grid Middleware Assembly (PRAGMA) Grid Testbed. Applications in high-energy physics, genome annotation, quantum computational chemistry, wildfire simulation, and protein sequence alignment have driven the middleware requirements, and the testbed provides a mechanism for international users to share software beyond the essential, de facto standard Globus core. In this paper, we describe how human factors, resource availability and performance issues have affected the middleware, applications and the testbed design. We also describe how middleware components in grid monitoring, grid accounting, grid Remote Procedure Calls, grid-aware file systems, and grid-based optimization have dealt with some of the major characteristics of our testbed. We also briefly describe a number of mechanisms that we have employed to make software more easily available to testbed administrators.


International Conference on e-Science | 2009

Scheduling Multiple Parameter Sweep Workflow Instances on the Grid

Sucha Smanchat; Maria Indrawan; Sea Ling; Colin Enticott; David Abramson

Due to its ability to provide a high-performance computing environment, the grid has become an important infrastructure to support eScience. To utilise the grid for parameter sweep experiments, workflow technology combined with tools such as Nimrod/K is used to orchestrate and automate scientific services provided on the grid. As a parameter sweep over a workflow needs to be executed numerous times, it is more efficient to execute multiple instances of the workflow in parallel. However, this parallel execution can be delayed when every workflow instance requires the same set of resources, leading to a resource competition problem. Although many algorithms exist for scheduling grid workflows, there has been little effort to consider multiple workflow instances and resource competition in the scheduling process. In this paper, we propose a scheduling algorithm for parameter sweep workflows based on resource competition. The proposed algorithm aims to support multiple workflow instances and to avoid allocating resources with high resource competition, minimising delay due to the blocking of tasks. The result is evaluated using simulation and compared with an existing scheduling algorithm.
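
A rough sketch of the competition-avoidance idea (a simplified heuristic for illustration, not the algorithm evaluated in the paper; all identifiers are hypothetical): count how many ready tasks across all workflow instances can use each resource, then give each task a capable, free resource with the lowest competition count.

```python
# Illustrative sketch of competition-aware resource selection (not the paper's algorithm).
# Across all workflow instances, count how many ready tasks want each resource and
# assign each task to a capable, free resource with the lowest competition count.
from collections import Counter


def schedule(ready_tasks, resources):
    """ready_tasks: list of (task_id, usable_resources); resources: set of free resource ids."""
    competition = Counter(r for _, usable in ready_tasks for r in usable)
    assignments = {}
    free = set(resources)
    # Serve the most constrained tasks first.
    for task_id, usable in sorted(ready_tasks, key=lambda t: len(t[1])):
        candidates = [r for r in usable if r in free]
        if not candidates:
            continue                              # task blocks until a resource frees up
        chosen = min(candidates, key=lambda r: competition[r])
        assignments[task_id] = chosen
        free.discard(chosen)
    return assignments


if __name__ == "__main__":
    ready = [("wf1.t1", {"A", "B"}), ("wf2.t1", {"A", "B"}), ("wf3.t1", {"B", "C"})]
    print(schedule(ready, {"A", "B", "C"}))       # spreads instances across A, B, C
```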


Archive | 2010

Mixing Grids and Clouds: High-Throughput Science Using the Nimrod Tool Family

Blair Bethwaite; David Abramson; Fabian Bohnert; Slavisa Garic; Colin Enticott; Tom Peachey

The Nimrod tool family facilitates high-throughput science by allowing researchers to explore complex design spaces using computational models. Users are able to describe large experiments in which models are executed across changing input parameters. Different members of the tool family support complete and partial parameter sweeps, numerical search by non-linear optimisation and even workflows. In order to provide timely results and to enable large-scale experiments, distributed computational resources are aggregated to form a logically single high-throughput engine. To date, we have leveraged grid middleware standards to spawn computations on remote machines. Recently, we added an interface to Amazon’s Elastic Compute Cloud (EC2), allowing users to mix conventional grid resources and clouds. A range of schedulers, from round-robin queues to those based on economic budgets, allow Nimrod to mix and match resources. This provides a powerful platform for computational researchers, because they can use a mix of university-level infrastructure and commercial clouds. In particular, the system allows a user to pay money to increase the quality of the research outcomes and to decide exactly how much they want to pay to achieve a given return. In this chapter, we will describe Nimrod and its architecture, and show how this naturally scales to incorporate clouds. We will illustrate the power of the system using a case study and will demonstrate that cloud computing has the potential to enable high-throughput science.
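
The mix-and-match idea can be caricatured in a few lines (a toy budget planner, not Nimrod's scheduler; all numbers and names are hypothetical): free grid slots are used first, and the remaining jobs spill onto paid cloud instances only while the budget covers their estimated cost.

```python
# Toy sketch of mixing grid and cloud resources under a budget (not Nimrod's scheduler).
# Free grid slots are used first; remaining jobs spill onto paid cloud instances
# only while the budget covers their estimated cost.

def plan(jobs, grid_slots, cloud_cost_per_job, budget):
    """Return (jobs_on_grid, jobs_on_cloud, jobs_deferred, spend)."""
    on_grid = min(jobs, grid_slots)
    remaining = jobs - on_grid
    affordable = int(budget // cloud_cost_per_job)
    on_cloud = min(remaining, affordable)
    deferred = remaining - on_cloud
    return on_grid, on_cloud, deferred, on_cloud * cloud_cost_per_job


if __name__ == "__main__":
    # Hypothetical numbers: 500 sweep jobs, 120 free grid slots, $0.40 per cloud job, $100 budget.
    grid, cloud, deferred, spend = plan(jobs=500, grid_slots=120,
                                        cloud_cost_per_job=0.40, budget=100.0)
    print(f"grid={grid} cloud={cloud} deferred={deferred} spend=${spend:.2f}")
```

Raising the budget in this sketch shifts jobs from "deferred" to "cloud", which mirrors the trade-off described above between spending more and obtaining results sooner.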


Grid Computing | 2005

Bridging organizational network boundaries on the grid

Jefferson Tan; David Abramson; Colin Enticott

The grid offers significant opportunities for performing wide area distributed computing, allowing multiple organizations to collaborate and build dynamic and flexible virtual organisations. However, existing security firewalls often diminish the level of collaboration that is possible, and current grid middleware often assumes that there are no restrictions on the type of communication that is allowed. Accordingly, a number of collaborations have failed because the member sites have different and conflicting security policies. In this paper we present an architecture that facilitates inter-organization communication using existing grid middleware, without compromising the security policies in place at each of the participating sites. Our solutions are built on a number of standard secure communication protocols such as SSH and SOCKS. We call this architecture Remus, and will demonstrate its effectiveness using the Nimrod/G tools.
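
To make the tunnelling approach concrete, the sketch below uses standard OpenSSH port forwarding and SOCKS options to reach a service behind a partner site's firewall (hypothetical host names and ports; this is not the Remus implementation).

```python
# Minimal sketch of tunnelling through a gateway with standard OpenSSH options
# (hypothetical hosts and ports; not the Remus implementation described in the paper).
import subprocess

GATEWAY = "user@gateway.partner-site.example"   # hypothetical SSH-reachable gateway

# Forward local port 2811 to a GridFTP-style service on an internal host behind the firewall.
forward = subprocess.Popen(
    ["ssh", "-N", "-L", "2811:internal-node.example:2811", GATEWAY]
)

# Alternatively, open a local SOCKS proxy so SOCKS-aware middleware can reach the site.
socks = subprocess.Popen(["ssh", "-N", "-D", "1080", GATEWAY])

try:
    # Grid clients would now be pointed at localhost:2811 (or at the SOCKS proxy on 1080).
    forward.wait()
finally:
    forward.terminate()
    socks.terminate()
```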


Philosophical Transactions of the Royal Society A | 2011

Leveraging e-Science infrastructure for electrochemical research

Tom Peachey; Elena Mashkina; Chong-Yong Lee; Colin Enticott; David Abramson; Alan M. Bond; Darrell Elton; David J. Gavaghan; Gareth P. Stevenson; Gareth F. Kennedy

As in many scientific disciplines, modern chemistry involves a mix of experimentation and computer-supported theory. Historically, these skills have been provided by different groups, and range from traditional ‘wet’ laboratory science to advanced numerical simulation. Increasingly, progress is made by global collaborations, in which new theory may be developed in one part of the world and applied and tested in the laboratory elsewhere. e-Science, or cyber-infrastructure, underpins such collaborations by providing a unified platform for accessing scientific instruments, computers and data archives, and collaboration tools. In this paper we discuss the application of advanced e-Science software tools to electrochemistry research performed in three different laboratories – two at Monash University in Australia and one at the University of Oxford in the UK. We show that software tools that were originally developed for a range of application domains can be applied to electrochemical problems, in particular Fourier voltammetry. Moreover, we show that, by replacing ad-hoc manual processes with e-Science tools, we obtain more accurate solutions automatically.


Future Generation Computer Systems | 2013

Scheduling parameter sweep workflow in the Grid based on resource competition

Sucha Smanchat; Maria Indrawan; Sea Ling; Colin Enticott; David Abramson

Workflow technology has been adopted in scientific domains to orchestrate and automate scientific processes in order to facilitate experimentation. Such scientific workflows often involve large data sets and intensive computation that necessitate the use of the Grid. To execute a scientific workflow in the Grid, tasks within the workflow are assigned to Grid resources. Thus, to ensure efficient execution of the workflow, Grid workflow scheduling is required to manage the allocation of Grid resources. Although many Grid workflow scheduling techniques exist, they are mainly designed for the execution of a single workflow. This is not the case with parameter sweep workflows, which are used for parametric study and optimisation. A parameter sweep workflow is executed numerous times with different input parameters in order to determine the effect of each parameter combination on the experiment. While executing multiple instances of a parameter sweep workflow in parallel can reduce the time required for the overall execution, this parallel execution introduces new challenges to Grid workflow scheduling. Not only is a scheduling algorithm that is able to manage multiple workflow instances required, but this algorithm also needs the ability to schedule tasks across multiple workflow instances judiciously, as tasks may require the same set of Grid resources. Without appropriate resource allocation, a resource competition problem could arise. We propose a new Grid workflow scheduling technique for parameter sweep workflows called the Besom scheduling algorithm. The scheduling decision of our algorithm is based on the resource dependencies of tasks in the workflow, as well as conventional Grid resource-performance metrics. In addition, the proposed technique is extended to handle loop structures in scientific workflows without using existing loop-unrolling techniques. The Besom algorithm is evaluated using simulations with a variety of scenarios. A comparison between the simulation results of the Besom algorithm and of three existing Grid workflow scheduling algorithms shows that the Besom algorithm is able to perform better than the existing algorithms for workflows that have complex structures and that involve overlapping resource dependencies of tasks.


Cluster Computing and the Grid | 2007

Executing Large Parameter Sweep Applications on a Multi-VO Testbed

Shahaan Ayyub; David Abramson; Colin Enticott; Slavisa Garic; Jefferson Tan

Applications that span multiple virtual organizations (VOs) are of great interest to the eScience community. However, recent attempts to execute large-scale parameter sweep applications (PSAs) with the Nimrod/G tool have exposed problems in the areas of fault tolerance, data storage and trust management. In response, we have implemented a task-splitting approach, which breaks up large PSAs into a sequence of dependent subtasks, improving fault tolerance; provides a garbage collection technique, which deletes unnecessary data; and employs a trust delegation technique that facilitates flexible third party data transfers across different VOs.
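
The task-splitting idea can be sketched as follows (a hypothetical illustration, not the Nimrod/G implementation): a long parameter job is cut into a chain of dependent subtasks, each resuming from the previous checkpoint, so a failure costs at most one segment and intermediate data can be discarded once it is no longer needed.

```python
# Illustrative sketch of task splitting for fault tolerance (not the Nimrod/G implementation).
# A long job is cut into a chain of dependent subtasks; each resumes from the previous
# checkpoint, so a failure costs at most one segment, and the checkpoint file can be
# garbage-collected once the whole chain has completed.
import json
import os


def run_segment(state):
    """Stand-in for one segment of a long simulation; advances the state a little."""
    state["step"] += 1
    state["value"] += state["step"]
    return state


def run_split_job(segments, checkpoint="job.ckpt"):
    # Resume from the last checkpoint if an earlier attempt failed part-way through.
    if os.path.exists(checkpoint):
        with open(checkpoint) as fh:
            state = json.load(fh)
    else:
        state = {"step": 0, "value": 0}

    for _ in range(state["step"], segments):
        state = run_segment(state)
        with open(checkpoint, "w") as fh:     # checkpoint after every dependent subtask
            json.dump(state, fh)

    os.remove(checkpoint)                     # discard intermediate data when done
    return state


if __name__ == "__main__":
    print(run_split_job(segments=5))
```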


Philosophical Transactions of the Royal Society A | 2010

High-throughput cardiac science on the Grid

David Abramson; Miguel O. Bernabeu; Blair Bethwaite; Kevin Burrage; Alberto Corrias; Colin Enticott; Slavisa Garic; David J. Gavaghan; Tom Peachey; Joe Pitt-Francis; Esther Pueyo; Blanca Rodriguez; Anna Sher; Jefferson Tan

Cardiac electrophysiology is a mature discipline, with the first model of a cardiac cell action potential having been developed in 1962. Current models range from single ion channels, through very complex models of individual cardiac cells, to geometrically and anatomically detailed models of the electrical activity in whole ventricles. A critical issue for model developers is how to choose parameters that allow the model to faithfully reproduce observed physiological effects without over-fitting. In this paper, we discuss the use of a parametric modelling toolkit, called Nimrod, that makes it possible both to explore model behaviour as parameters are changed and also to tune parameters by optimizing model output. Importantly, Nimrod leverages computers on the Grid, accelerating experiments by using available high-performance platforms. We illustrate the use of Nimrod with two case studies, one at the cardiac tissue level and one at the cellular level.
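
The two modes of use described here, exploring model behaviour across a parameter range and then tuning a parameter against observed data, can be sketched with a toy model (not the cardiac codes or Nimrod itself; the model and parameter names are invented).

```python
# Toy sketch of the two usage modes described above (not the cardiac models or Nimrod):
# sweep a model parameter over a range, then tune it by minimising the misfit to data.

def model(conductance, t):
    """Stand-in for a cell-model output; a real study would run a full simulator."""
    return conductance * t * (1.0 - t)


def misfit(conductance, observed, times):
    return sum((model(conductance, t) - y) ** 2 for t, y in zip(times, observed))


if __name__ == "__main__":
    times = [0.1 * i for i in range(11)]
    observed = [2.0 * t * (1.0 - t) for t in times]           # synthetic "measurements"

    # 1. Parameter sweep: evaluate the misfit over a coarse grid of candidate values.
    sweep = {g / 10: misfit(g / 10, observed, times) for g in range(5, 40, 5)}

    # 2. Tuning: pick the best sweep point, then refine it with a finer local search.
    best = min(sweep, key=sweep.get)
    fine = min((best - 0.5 + 0.01 * k for k in range(101)),
               key=lambda g: misfit(g, observed, times))
    print(f"coarse best={best}, refined estimate={fine:.2f}")
```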

Collaboration


Dive into Colin Enticott's collaborations.

Top Co-Authors

David Abramson

University of Queensland
