David G. Cameron
University of Glasgow
Publication
Featured research published by David G. Cameron.
ieee international conference on high performance computing data and analytics | 2003
William H. Bell; David G. Cameron; A. Paul Millar; Luigi Capozza; Kurt Stockinger; Floriano Zini
Computational grids process large, computationally intensive problems on small data sets. In contrast, data grids process large computational problems that in turn require evaluating, mining and producing large amounts of data. Replication, creating geographically disparate identical copies of data, is regarded as one of the major optimization techniques for reducing data access costs. In this paper, several replication algorithms are discussed. These algorithms were studied using the Grid simulator OptorSim. OptorSim provides a modular framework within which optimization strategies can be studied under different Grid configurations. The goal is to explore the stability and transient behaviour of selected optimization techniques. We detail the design and implementation of OptorSim and analyze various replication algorithms based on different Grid workloads.
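The contribution here is the modular framework itself, into which different replication strategies can be plugged and compared. As a rough illustration of that plug-in structure (not the actual OptorSim API; the class and attribute names below, such as ReplicationStrategy and site.access_times, are hypothetical), a strategy only has to answer two questions: whether to replicate a file that has just been accessed, and which local file to evict when storage is full.

```python
# Hedged sketch of a pluggable replication strategy, in the spirit of the
# modular framework described in the abstract. Names are illustrative only,
# not the real OptorSim (Java) interfaces.
from abc import ABC, abstractmethod


class ReplicationStrategy(ABC):
    """Decides how a site manages replicas of the files it accesses."""

    @abstractmethod
    def should_replicate(self, site, filename) -> bool:
        """Replicate the file just accessed to local storage?"""
        ...

    @abstractmethod
    def choose_victim(self, site) -> str:
        """Pick a local file to delete when the storage element is full."""
        ...


class AlwaysReplicateLRU(ReplicationStrategy):
    """Baseline strategy: always replicate, evicting the least recently used file."""

    def should_replicate(self, site, filename) -> bool:
        return True

    def choose_victim(self, site) -> str:
        # Assumes site.access_times maps filename -> last access timestamp.
        return min(site.access_times, key=site.access_times.get)
```

A simulator run then drives a chosen strategy with a Grid workload and records the resulting access costs, which is how the stability and transient behaviour of different techniques can be compared.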
cluster computing and the grid | 2003
William H. Bell; David G. Cameron; R. Carvajal-Schiaffino; A. P. Millar; Kurt Stockinger; Floriano Zini
Optimising the use of Grid resources is critical for users to effectively exploit a Data Grid. Data replication is considered a major technique for reducing data access cost to Grid jobs. This paper evaluates a novel replication strategy, based on an economic model, that optimises both the selection of replicas for running jobs and the dynamic creation of replicas in Grid sites. In our model, optimisation agents are located on Grid sites and use an auction protocol for selecting the optimal replica of a data file and a prediction function to make informed decisions about local data replication. We evaluate our replication strategy with OptorSim, a Data Grid simulator developed by the authors. The experiments show that our proposed strategy results in a notable improvement over traditional replication strategies in a Grid environment.
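The abstract describes two decisions made by the optimisation agents: which existing replica to fetch, settled by an auction among sites, and whether to create a local replica, guided by a prediction function over recent accesses. A minimal sketch of those two decisions follows; the function names and the deliberately crude frequency-based prediction are assumptions for illustration, not the authors' actual implementation.

```python
# Illustrative sketch of the economic model's two decisions: replica selection
# by auction, and local replication guided by a prediction function.

def select_replica(bids):
    """bids: {site_name: estimated cost to deliver the file}; cheapest offer wins."""
    return min(bids, key=bids.get)


def predicted_value(access_history, filename, window=100):
    """Crude prediction function: expected near-future demand for a file,
    estimated from its frequency in the most recent `window` accesses."""
    recent = access_history[-window:]
    return recent.count(filename)


def should_replicate_locally(access_history, new_file, stored_files):
    """Replicate only if the new file is predicted to be worth more than the
    least valuable file currently stored (which it would evict)."""
    victim = min(stored_files, key=lambda f: predicted_value(access_history, f))
    return predicted_value(access_history, new_file) > predicted_value(access_history, victim)
```

The point of the model is that replication becomes a local, value-driven decision: a file is only copied in if its predicted worth exceeds that of the cheapest file it would displace.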
grid computing | 2002
William H. Bell; David G. Cameron; Luigi Capozza; A. Paul Millar; Kurt Stockinger; Floriano Zini
Computational Grids normally deal with large computationally intensive problems on small data sets. In contrast, Data Grids mostly deal with large computational problems that in turn require evaluating and mining large amounts of data. Replication is regarded as one of the major optimisation techniques for providing fast data access. Within this paper, several replication algorithms are studied. This is achieved using the Grid simulator OptorSim. OptorSim provides a modular framework within which optimisation strategies can be studied under different Grid configurations. The goal is to explore the stability and transient behaviour of selected optimisation techniques.
latin american web congress | 2003
David G. Cameron; R. Carvajal-Schiaffino; A.P. Millar; Caitriana Nicholson; Kurt Stockinger; Floriano Zini
Grid computing is fast emerging as the solution to the problems posed by the massive computational and data handling requirements of many current international scientific projects. Simulation of the grid environment is important to evaluate the impact of potential data handling strategies before they are deployed on the grid. We look at the effects of various job scheduling and data replication strategies and compare them in a variety of grid scenarios, evaluating several performance metrics. We use the grid simulator OptorSim, and base our simulations on a world-wide grid testbed for data intensive high energy physics experiments. Our results show that the choice of scheduling and data replication strategies can have a large effect on both job throughput and the overall consumption of grid resources.
Journal of Grid Computing | 2004
David G. Cameron; A. P. Millar; C. Nicholson; R. Carvajal-Schiaffino; Kurt Stockinger; Floriano Zini
Many current international scientific projects are based on large scale applications that are both computationally complex and require the management of large amounts of distributed data. Grid computing is fast emerging as the solution to the problems posed by these applications. To evaluate the impact of resource optimisation algorithms, simulation of the Grid environment can be used to achieve important performance results before any algorithms are deployed on the Grid. In this paper, we study the effects of various job scheduling and data replication strategies and compare them in a variety of Grid scenarios using several performance metrics. We use the Grid simulator OptorSim, and base our simulations on a world-wide Grid testbed for data intensive high energy physics experiments. Our results show that scheduling algorithms which take into account both the file access cost of jobs and the workload of computing resources are the most effective at optimising computing and storage resources as well as improving the job throughput. The results also show that, in most cases, the economy-based replication strategies which we have developed improve the Grid performance under changing network loads.
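The scheduling result singled out in this abstract is that the best algorithms combine two terms: the cost of getting a job's input files to a candidate site and that site's current workload. Below is a hedged sketch of such a combined-cost scheduler; the function names, the additive cost model and the use of queue length as the workload measure are simplifying assumptions, not the paper's exact formulation.

```python
# Sketch of a scheduler that weighs both file access cost and site workload
# when placing a job, in the spirit of the algorithms compared in the paper.

def access_cost(site, job_files, replica_locations, transfer_cost):
    """Sum, over the job's input files, of the cheapest transfer cost to `site`.

    replica_locations: {filename: [sites holding a replica]}
    transfer_cost(src, dst): estimated cost of moving one file from src to dst.
    """
    total = 0.0
    for f in job_files:
        total += min(transfer_cost(src, site) for src in replica_locations[f])
    return total


def schedule_job(job_files, sites, replica_locations, transfer_cost, queue_length):
    """Pick the site minimising (file access cost + queued workload)."""
    def combined_cost(site):
        return (access_cost(site, job_files, replica_locations, transfer_cost)
                + queue_length(site))
    return min(sites, key=combined_cost)
```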
Archive | 2005
C. Nicholson; R. Carvajal-Schiaffino; Kurt Stockinger; Paul Millar; Floriano Zini; David G. Cameron
In large-scale Grids, the replication of files to different sites is an important data management mechanism which can reduce access latencies and give improved usage of resources such as network bandwidth, storage and computing power. In the search for an optimal data replication strategy, the Grid simulator OptorSim was developed as part of the European DataGrid project. Simulations of various HEP Grid scenarios have been undertaken using different job scheduling and file replication algorithms, with the experimental emphasis being on physics analysis use-cases. Previously, the CMS Data Challenge 2002 testbed and UK GridPP testbed were among those simulated; recently, our focus has been on the LCG testbed. A novel economy-based strategy has been investigated as well as more traditional methods, with the economic models showing distinct advantages for heavily loaded grids.
Journal of Grid Computing | 2004
David G. Cameron; James Casey; Leanne Guy; Peter Z. Kunszt; Sophie Lemaitre; Gavin McCance; Heinz Stockinger; Kurt Stockinger; Giuseppe Andronico; William H. Bell; Itzhak Ben-Akiva; Diana Bosio; Radovan Chytracek; Andrea Domenici; Flavia Donno; Wolfgang Hoschek; Erwin Laure; Levi Lúcio; A. Paul Millar; Livio Salconi; Ben Segal; Mika Silander
Within the European DataGrid project, Work Package 2 has designed and implemented a set of integrated replica management services for use by data intensive scientific applications. These services, based on the web services model, enable movement and replication of data at high speed from one geographical site to another, management of distributed replicated data, optimization of access to data, and the provision of a metadata management tool. In this paper we describe the architecture and implementation of these services and evaluate their performance under demanding Grid conditions.
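At the core of a replica management service of this kind is a catalogue that maps a logical file name (LFN) to the physical file names (PFNs) of its replicas; data movement, replication and access optimisation are built on top of that mapping. The sketch below is only an illustration of the idea: the class and the example URLs are invented, and the actual EDG Work Package 2 services expose this functionality as web services rather than an in-process object.

```python
# Minimal sketch of the core replica-management abstraction: a catalogue
# mapping logical file names to the physical locations of their replicas.

class ReplicaCatalogue:
    def __init__(self):
        self._replicas = {}  # LFN -> set of PFNs

    def register(self, lfn, pfn):
        """Record a new physical replica of a logical file."""
        self._replicas.setdefault(lfn, set()).add(pfn)

    def unregister(self, lfn, pfn):
        """Forget a physical replica, e.g. after deletion from a storage element."""
        self._replicas.get(lfn, set()).discard(pfn)

    def list_replicas(self, lfn):
        """Return all known physical locations of the logical file."""
        return sorted(self._replicas.get(lfn, set()))


# Usage example with invented identifiers:
catalogue = ReplicaCatalogue()
catalogue.register("lfn:higgs-candidates.root",
                   "gsiftp://se.example.org/data/higgs-candidates.root")
print(catalogue.list_replicas("lfn:higgs-candidates.root"))
```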
Open Engineering | 2017
Javier Barranco; Y. Cai; David G. Cameron; Matthew Crouch; Riccardo De Maria; Laurence Field; M. Giovannozzi; Pascal Dominik Hermes; Nils Høimyr; Dobrin Kaltchev; Nikos Karastathis; Cinzia Luzzi; Ewen Hamish Maclean; E McIntosh; Alessio Mereghetti; James Molson; Y. Nosochkov; Ivan D. Reid; Lenny Rivkin; Ben Segal; Kyrre Sjobak; Peter Skands; Claudia Tambasco; Frederik Van der Veken; Igor Zacharov
The LHC@Home BOINC project has provided computing capacity for numerical simulations to researchers at CERN since 2004, and has since 2011 been expanded with a wider range of applications. The traditional CERN accelerator physics simulation code SixTrack enjoys continuing volunteer support, and thanks to virtualisation a number of applications from the LHC experiment collaborations and particle theory groups have joined the consolidated LHC@Home BOINC project. This paper addresses the challenges related to traditional and virtualised applications in the BOINC environment, and how volunteer computing has been integrated into the overall computing strategy of the laboratory through the consolidated LHC@Home service. Thanks to the computing power provided by volunteers joining LHC@Home, numerous accelerator beam physics studies have been carried out, yielding an improved understanding of charged particle dynamics in the CERN Large Hadron Collider (LHC) and its future upgrades. The main results are highlighted in this paper.
Archive | 2002
William H. Bell; David G. Cameron; Luigi Capozza; Paul Millar; Kurt Stockinger; Floriano Zini
Archive | 2003
David G. Cameron; Rubén Carvajal-Schiaffino; A. Paul Millar; Caitriana Nicholson; Kurt Stockinger; Floriano Zini