
Publication


Featured research published by Blair Bethwaite.


Langmuir | 2014

New Insights into the Analysis of the Electrode Kinetics of Flavin Adenine Dinucleotide Redox Center of Glucose Oxidase Immobilized on Carbon Electrodes

Alexandr N. Simonov; Willo Grosse; Elena Mashkina; Blair Bethwaite; Jeff Tan; David Abramson; Gordon G. Wallace; Simon E. Moulton; Alan M. Bond

New insights into the electrochemical kinetics of the flavin adenine dinucleotide (FAD) redox center of glucose oxidase (GlcOx) immobilized on reduced graphene oxide (rGO), single- and multiwalled carbon nanotubes (SW and MWCNT), and combinations of rGO and CNTs have been gained by application of Fourier-transformed AC voltammetry (FTACV) and simulations based on a range of models. A satisfactory level of agreement between experiment and theory, and hence establishment of the best model to describe the redox chemistry of FAD, was achieved with the aid of automated e-science tools. Although still not perfect, use of Marcus theory with a very low reorganization energy (≤0.3 eV) best mimics the experimental FTACV data, which suggests that the process is gated, as also deduced from analysis of FTACV data obtained at different frequencies. Failure of the simplest models to fully describe the electrode kinetics of the redox center of GlcOx, including those based on the widely employed Laviron theory, is demonstrated, as is substantial kinetic heterogeneity of FAD species. Use of a SWCNT support amplifies the kinetic heterogeneity, while a combination of rGO and MWCNT provides a more favorable environment for fast communication between FAD and the electrode.
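The kinetic contrast the abstract turns on, Marcus theory versus simpler Laviron/Butler-Volmer models, can be illustrated numerically. This is a toy sketch, not the authors' simulation code: it uses a simplified single-state Marcus activation energy, (λ + eη)²/4λ, and purely illustrative parameter values.

```python
import math

KB_T = 0.0257  # thermal energy at ~298 K, in eV


def marcus_rate(overpotential_ev, lam_ev, k0=1.0):
    """Relative reduction rate from a simplified single-state Marcus
    expression: activation energy (lambda + e*eta)^2 / (4*lambda).
    A full electrode treatment would integrate over metal states."""
    dg_act = (lam_ev + overpotential_ev) ** 2 / (4.0 * lam_ev)
    return k0 * math.exp(-dg_act / KB_T)


def butler_volmer_rate(overpotential_ev, alpha=0.5, k0=1.0):
    """Relative reduction rate from Butler-Volmer kinetics, which grows
    exponentially with driving force and never saturates."""
    return k0 * math.exp(-alpha * overpotential_ev / KB_T)


# With a small reorganization energy (0.3 eV, as in the paper's best fit),
# the Marcus rate peaks near eta = -lambda while Butler-Volmer keeps growing.
for eta in (-0.1, -0.3, -0.6):
    print(eta, marcus_rate(eta, lam_ev=0.3), butler_volmer_rate(eta))
```

The divergence between the two predictions at large driving force is what makes the choice of reorganization energy experimentally distinguishable.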


Archive | 2010

Mixing Grids and Clouds: High-Throughput Science Using the Nimrod Tool Family

Blair Bethwaite; David Abramson; Fabian Bohnert; Slavisa Garic; Colin Enticott; Tom Peachey

The Nimrod tool family facilitates high-throughput science by allowing researchers to explore complex design spaces using computational models. Users are able to describe large experiments in which models are executed across changing input parameters. Different members of the tool family support complete and partial parameter sweeps, numerical search by non-linear optimisation and even workflows. In order to provide timely results and to enable large-scale experiments, distributed computational resources are aggregated to form a logically single high-throughput engine. To date, we have leveraged grid middleware standards to spawn computations on remote machines. Recently, we added an interface to Amazon’s Elastic Compute Cloud (EC2), allowing users to mix conventional grid resources and clouds. A range of schedulers, from round-robin queues to those based on economic budgets, allow Nimrod to mix and match resources. This provides a powerful platform for computational researchers, because they can use a mix of university-level infrastructure and commercial clouds. In particular, the system allows a user to pay money to increase the quality of the research outcomes and to decide exactly how much they want to pay to achieve a given return. In this chapter, we will describe Nimrod and its architecture, and show how this naturally scales to incorporate clouds. We will illustrate the power of the system using a case study and will demonstrate that cloud computing has the potential to enable high-throughput science.
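The sweep-plus-scheduling pattern described above can be sketched in a few lines. All names here are illustrative, not the actual Nimrod API: a declared parameter space is expanded into tasks, and a budget-aware scheduler spills overflow work from grid slots to paid cloud instances.

```python
import itertools


def sweep(parameters):
    """Expand a parameter space (name -> list of values) into one task
    per combination, as a complete parameter sweep would."""
    names = list(parameters)
    for values in itertools.product(*(parameters[n] for n in names)):
        yield dict(zip(names, values))


def schedule(tasks, grid_capacity, cloud_cost, budget):
    """Fill free grid slots first, then spill to cloud while the budget
    lasts; remaining tasks wait in the queue. Returns placements and the
    unspent budget."""
    placements = []
    for task in tasks:
        if grid_capacity > 0:
            grid_capacity -= 1
            placements.append(("grid", task))
        elif budget >= cloud_cost:
            budget -= cloud_cost
            placements.append(("cloud", task))
        else:
            placements.append(("queued", task))
    return placements, budget
```

The scheduler makes the cost/quality trade-off explicit: raising the budget converts queued tasks into cloud tasks, shortening the experiment.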


Philosophical Transactions of the Royal Society A | 2010

High-throughput cardiac science on the Grid

David Abramson; Miguel O. Bernabeu; Blair Bethwaite; Kevin Burrage; Alberto Corrias; Colin Enticott; Slavisa Garic; David J. Gavaghan; Tom Peachey; Joe Pitt-Francis; Esther Pueyo; Blanca Rodriguez; Anna Sher; Jefferson Tan

Cardiac electrophysiology is a mature discipline, with the first model of a cardiac cell action potential having been developed in 1962. Current models range from single ion channels, through very complex models of individual cardiac cells, to geometrically and anatomically detailed models of the electrical activity in whole ventricles. A critical issue for model developers is how to choose parameters that allow the model to faithfully reproduce observed physiological effects without over-fitting. In this paper, we discuss the use of a parametric modelling toolkit, called Nimrod, that makes it possible both to explore model behaviour as parameters are changed and also to tune parameters by optimizing model output. Importantly, Nimrod leverages computers on the Grid, accelerating experiments by using available high-performance platforms. We illustrate the use of Nimrod with two case studies, one at the cardiac tissue level and one at the cellular level.
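The explore-then-tune workflow the paper describes (sweep parameters, then pick those that best reproduce observed output) can be sketched with a toy stand-in for a cardiac model. The model and parameter names are purely illustrative; the real work runs far more expensive simulations across Grid resources.

```python
import itertools
import math


def toy_membrane_trace(amplitude, decay, n_points=20):
    """Illustrative stand-in for an expensive cardiac cell model:
    an exponentially decaying trace over discrete time points."""
    return [amplitude * math.exp(-decay * t) for t in range(n_points)]


def tune_by_sweep(observed, amplitudes, decays):
    """Sweep the parameter grid and keep the combination that minimises
    squared error against the observed trace."""
    best_err, best_params = float("inf"), None
    for a, d in itertools.product(amplitudes, decays):
        trace = toy_membrane_trace(a, d)
        err = sum((x - y) ** 2 for x, y in zip(trace, observed))
        if err < best_err:
            best_err, best_params = err, {"amplitude": a, "decay": d}
    return best_params
```

Because each parameter combination is independent, the inner loop is embarrassingly parallel, which is exactly what makes Grid execution attractive.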


international conference on e-science | 2009

Virtual Microscopy and Analysis Using Scientific Workflows

David Abramson; Blair Bethwaite; Minh Ngoc Dinh; Colin Enticott; Stephen Firth; Slavisa Garic; Ian Steward Harper; Martin Lackmann; Hoang Anh Nguyen; Tirath Ramdas; Abm Russel; Stefan Schek; Mary E. Vail

Most commercial microscopes are stand-alone instruments, controlled by dedicated computer systems. These provide limited storage and processing capabilities. Virtual microscopes, on the other hand, link the image capturing hardware and data analysis software into a wide area network of high performance computers, large storage devices and software systems. In this paper we discuss extensions to Grid workflow engines that allow them to execute scientific experiments on virtual microscopes. We demonstrate the utility of such a system in a biomedical case study concerning the imaging of cancer and antibody based therapeutics.
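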


international conference on conceptual structures | 2010

An abstract virtual instrument system for high throughput automatic microscopy

A. B. M. Russel; David Abramson; Blair Bethwaite; Minh Ngoc Dinh; Colin Enticott; Stephen Firth; Slavisa Garic; Ian Steward Harper; Martin Lackmann; Stefan Schek; Mary E. Vail

Modern biomedical therapies often involve disease-specific drug development and may require observing cells at very high resolution. Existing commercial microscopes behave very much like their traditional counterparts, where a user controls the microscope and chooses the areas of interest manually on a given specimen scan. This mode of discovery is suited to problems where it is easy for a user to draw a conclusion from observations by finding a small number of areas that might require further investigation. However, observation by an expert can be very time consuming and error prone when there are a large number of potential areas of interest (such as cells or vessels in a tumour), and compute-intensive image processing is required to analyse them. In this paper, we propose an Abstract Virtual Instrument (AVI) system for accelerating scientific discovery. An AVI system is a novel software architecture for building a hierarchical scientific instrument – one in which a virtual instrument could be defined in terms of other physical instruments, and in which significant processing is required in producing the illusion of a single virtual scientific discovery instrument. We show that an AVI can be implemented using existing scientific workflow tools that both control the microscope and perform image analysis operations. The resulting solution is a flexible and powerful system for performing dynamic high throughput automatic microscopy. We illustrate the system using a case study that involves searching for blood vessels in an optical tissue scan, and automatically instructing the microscope to rescan these vessels at higher resolution.
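The coarse-scan-then-rescan loop of the case study can be sketched as a two-pass control flow. Here `capture_tile` is a hypothetical stand-in for the real microscope driver, and the intensity threshold is a toy criterion for "contains a vessel".

```python
def find_regions_of_interest(tile_intensities, threshold):
    """Tiles whose captured value meets a toy 'looks like a vessel'
    criterion; a real AVI would run image analysis here."""
    return [pos for pos, value in tile_intensities.items() if value >= threshold]


def avi_scan(capture_tile, grid, threshold, high_res_factor=4):
    """Two-pass virtual-instrument loop: a coarse scan of every tile,
    then targeted rescans of interesting tiles at higher resolution.
    capture_tile(pos, resolution) stands in for the instrument driver."""
    coarse = {pos: capture_tile(pos, 1) for pos in grid}
    interesting = find_regions_of_interest(coarse, threshold)
    return {pos: capture_tile(pos, high_res_factor) for pos in interesting}
```

The point of the abstraction is that the same loop works whether `capture_tile` drives one physical microscope or a composite of several instruments behind a workflow engine.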


Frontiers in Computational Neuroscience | 2012

Parametric computation predicts a multiplicative interaction between synaptic strength parameters that control gamma oscillations

Jordan D. Chambers; Blair Bethwaite; Neil Diamond; Tom Peachey; David Abramson; Steven Petrou; Evan A. Thomas

Gamma oscillations are thought to be critical for a number of behavioral functions; they occur in many regions of the brain and arise through a variety of mechanisms. Fast repetitive bursting (FRB) neurons in layer 2 of the cortex are able to drive gamma oscillations over long periods of time. Even though the oscillation is driven by FRB neurons, strong feedback within the rest of the cortex must modulate properties of the oscillation such as frequency and power. We used a highly detailed model of the cortex to determine how a cohort of 33 parameters controlling synaptic drive might modulate gamma oscillation properties. We were interested in determining not just the effects of individual parameters, but also interactions between parameters beyond additive effects. To prevent a combinatorial explosion in parameter combinations that might need to be simulated, we used a fractional factorial design (FFD) that estimated the effects of individual parameters and two-parameter interactions. This experiment required only 4096 model runs. We found that the largest effects on both gamma power and frequency came from a complex interaction between the efficacy of synaptic connections from layer 2 inhibitory neurons to layer 2 excitatory neurons and the parameter for the reciprocal connection. As well as the effect of the individual parameters determining synaptic efficacy, there was an interaction between these parameters beyond the additive effects of the parameters alone. The magnitude of this effect was similar to that of the individual parameters, predicting that it is physiologically important in setting gamma oscillation properties.
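The effect-estimation arithmetic behind a two-level factorial design can be shown on a small scale. This sketch uses a *full* factorial for clarity; a fractional design like the paper's samples a balanced subset of these runs so that 33 factors fit in 4096 runs instead of 2³³.

```python
import itertools


def effects(response, n_factors):
    """Estimate main effects and two-factor interactions from a 2-level
    full factorial with coded levels -1/+1. Each effect is the mean
    response at +1 minus the mean at -1 for the corresponding contrast."""
    levels = list(itertools.product((-1, 1), repeat=n_factors))
    runs = [(x, response(x)) for x in levels]
    half = len(runs) / 2
    main = {
        i: sum(x[i] * y for x, y in runs) / half
        for i in range(n_factors)
    }
    inter = {
        (i, j): sum(x[i] * x[j] * y for x, y in runs) / half
        for i, j in itertools.combinations(range(n_factors), 2)
    }
    return main, inter
```

For a response built from known coefficients, each estimated effect comes out as twice the coefficient (the level range is 2), including the interaction term, which is how a multiplicative interaction like the one reported here shows up alongside the main effects.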


international parallel and distributed processing symposium | 2009

High-throughput protein structure determination using grid computing

Jason W. Schmidberger; Blair Bethwaite; Colin Enticott; Mark A. Bate; Steve G. Androulakis; Noel G. Faux; Cyril Reboul; Jennifer Phan; James C. Whisstock; Wojtek Goscinski; Slavisa Garic; David Abramson; Ashley M. Buckle

Determining the X-ray crystallographic structures of proteins using the technique of molecular replacement (MR) can be a time and labor-intensive trial-and-error process, involving evaluating tens to hundreds of possible solutions to this complex 3D jigsaw puzzle. For challenging cases indicators of success often do not appear until the later stages of structure refinement, meaning that weeks or even months could be wasted evaluating MR solutions that resist refinement and do not lead to a final structure. In order to improve the chances of success as well as decrease this timeframe, we have developed a novel grid computing approach that performs many MR calculations in parallel, speeding up the process of structure determination from weeks to hours. This high-throughput approach also allows parameter sweeps to be performed in parallel, improving the chances of MR success.
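The fan-out pattern here, score many candidate placements independently and keep the best, is straightforward to sketch. The scoring function below is a hypothetical stand-in (a real trial would run a crystallography package on one search model), and a thread pool stands in for distribution across grid nodes.

```python
from concurrent.futures import ThreadPoolExecutor


def score_candidate(candidate):
    """Stand-in for one molecular-replacement trial: lower score means a
    better fit. Illustrative only; the real cost is hours per trial."""
    rotation, translation = candidate
    return (rotation - 30) ** 2 + (translation - 7) ** 2


def best_solution(candidates, workers=8):
    """Evaluate all candidate placements in parallel and keep the best,
    mirroring the grid approach of running many MR calculations at once."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(score_candidate, candidates))
    return min(zip(scores, candidates))[1]
```

Because trials never communicate, the wall-clock speedup is bounded only by the number of available workers, which is what collapses weeks into hours.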


ieee international conference on escience | 2008

Grid Interoperability: An Experiment in Bridging Grid Islands

Blair Bethwaite; David Abramson; Ashley M. Buckle

In the past decade Grid computing has matured considerably. A number of groups have built, operated, and expanded large testbed and production Grids. These Grids have inevitably been designed to meet the needs of a limited set of initial stakeholders, resulting in varying and sometimes ad-hoc specifications. As the use of e-Science becomes more common, this inconsistency is increasingly problematic for the growing set of applications requiring more resources than a single Grid can offer, as spanning these Grid islands is far from trivial. Thus, Grid interoperability is attracting much interest as researchers try to build bridges between separate Grids. Recently we ran a case study that tested interoperation between several Grids, during which we recorded and classified the issues that arose. In this paper we provide empirical evidence supporting existing interoperability efforts, and identify current and potential barriers to Grid interoperability.


international conference on big data | 2014

Clustering Experiments on Big Transaction Data for Market Segmentation

Ashishkumar Singh; Grace W. Rumantir; Annie South; Blair Bethwaite

This paper addresses the Volume dimension of Big Data. It presents preliminary work on finding segments of retailers from a large amount of Electronic Funds Transfer at Point Of Sale (EFTPOS) transaction data. To the best of our knowledge, this is the first time work on a Big EFTPOS Data problem has been reported. A data reduction technique using RFM (Recency, Frequency, Monetary) analysis as applied to a large data set is presented. Ways to optimise the clustering techniques used to segment the big data set, through data partitioning and parallelization, are explained. Preliminary analysis of the retailer segments output by the clustering experiments demonstrates that drilling down further into the segments to find more insights into their business behaviours is warranted.
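The RFM reduction step described above collapses raw transactions into one small row per retailer, which is what makes clustering tractable at volume. A minimal sketch, assuming a simple (retailer, date, amount) transaction shape that is illustrative rather than the paper's actual schema:

```python
from collections import defaultdict
from datetime import date


def rfm_table(transactions, today):
    """Reduce raw EFTPOS-style transactions to one (recency, frequency,
    monetary) row per retailer: days since last sale, number of sales,
    and total sales value."""
    last = {}
    count = defaultdict(int)
    total = defaultdict(float)
    for retailer, day, amount in transactions:
        last[retailer] = max(last.get(retailer, day), day)
        count[retailer] += 1
        total[retailer] += amount
    return {r: ((today - last[r]).days, count[r], total[r]) for r in last}
```

The output is a fixed-width table whose size depends on the number of retailers, not the number of transactions, so a clustering algorithm only ever sees the reduced data.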


international conference on parallel processing | 2011

Integrating Scientific Workflows and Large Tiled Display Walls: Bridging the Visualization Divide

Hoang Nguyen; David Abramson; Blair Bethwaite; Minh Ngoc Dinh; Colin Enticott; Slavisa Garic; A. B. M. Russel; Stephen Firth; Ian Steward Harper; Martin Lackmann; Mary E. Vail; Stefan Schek

Modern in-silico science (or e-Science) is a complex process, often involving multiple steps conducted across different computing environments. Scientific workflow tools help scientists automate, manage and execute these steps, providing a robust and repeatable research environment. Increasingly, workflows generate data sets that require scientific visualization, using a range of display devices such as local workstations, immersive 3D caves and large display walls. Traditionally, this display step is handled outside the workflow, and output files are manually copied to a suitable visualization engine for display. This inhibits the scientific discovery process by disconnecting the workflow that generated the data from the display and interpretation processes. In this paper we present a solution that links scientific workflows with a variety of display devices, including large tiled display walls. We demonstrate the feasibility of the system with a prototype implementation that leverages the Kepler workflow engine and the SAGE display software. We illustrate the use of the system with a case study in workflow-driven microscopy.

Collaboration


Dive into Blair Bethwaite's collaborations.

Top Co-Authors


David Abramson

University of Queensland
