
Publication


Featured research published by Patrick Armstrong.


Journal of Physics: Conference Series | 2011

A batch system for HEP applications on a distributed IaaS cloud

Ian Gable; A Agarwal; M Anderson; Patrick Armstrong; K Fransham; D Harris; C Leavett-Brown; M Paterson; D Penfold-Brown; Randall Sobie; M Vliet; Andre Charbonneau; Roger Impey; Wayne Podaima

The emergence of academic and commercial Infrastructure-as-a-Service (IaaS) clouds is opening access to new resources for the HEP community. In this paper we describe a system we have developed for creating a single dynamic batch environment spanning multiple IaaS clouds of different types (e.g. Nimbus, OpenNebula, Amazon EC2). A HEP user interacting with the system submits a job description file with a pointer to their VM image. VM images can either be created by users directly or provided to them. We have created a new software component called Cloud Scheduler that detects waiting jobs and boots the required user VM on any one of the available cloud resources. As the user VMs appear, they are attached to the job queues of a central Condor job scheduler, which then submits the jobs to the VMs. The number of VMs available to the user is expanded and contracted dynamically depending on the number of user jobs. We present the motivation and design of the system, with particular emphasis on Cloud Scheduler, and show that the system can exploit academic and commercial cloud sites in a transparent fashion.
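To make the "job description file with a pointer to their VM image" concrete, here is a minimal sketch, in Python, of composing a Condor-style submit description. The custom attribute names (+VMLoc, +VMCPUCores) and the repository URL are assumptions for illustration, not necessarily the exact attributes the system expects.

```python
# Hypothetical example only: the exact custom attributes Cloud Scheduler
# reads from the job description are an assumption here.
submit_description = """\
universe    = vanilla
executable  = analysis.sh
arguments   = run42.cfg
+VMLoc      = "http://repo.example.org/images/hep-worker.img"
+VMCPUCores = 1
queue
"""

with open("job.sub", "w") as f:
    f.write(submit_description)

print("wrote job.sub; a user would now run: condor_submit job.sub")
```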


Journal of Physics: Conference Series | 2010

Research computing in a distributed cloud environment

K Fransham; A Agarwal; Patrick Armstrong; A Bishop; Andre Charbonneau; Ronald J. Desmarais; N Hill; Ian Gable; S Gaudet; S Goliath; Roger Impey; Colin Leavett-Brown; J Ouellete; M Paterson; Chris Pritchet; D Penfold-Brown; Wayne Podaima; D Schade; Randall Sobie

The recent increase in availability of Infrastructure-as-a-Service (IaaS) computing clouds provides a new way for researchers to run complex scientific applications. However, using cloud resources for a large number of research jobs requires significant effort and expertise, and running jobs on many different clouds is more difficult still. To make it easy for researchers to deploy scientific applications across many cloud resources, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. In response to a user's job submission to a batch system, Cloud Scheduler manages the distribution and deployment of user-customized virtual machines across multiple clouds. We describe the motivation for and implementation of a distributed cloud, spread across both commercial and dedicated private sites, that uses Cloud Scheduler, and present some early results of scientific data analysis using the system.
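A minimal sketch of the scheduling pass described above, assuming each cloud exposes a common boot-a-VM operation. All names here (Cloud, schedule_pass) are hypothetical illustrations, not the actual Cloud Scheduler internals.

```python
# Illustrative only: models the loop of detecting waiting jobs and
# booting the user's VM image on any cloud with free capacity.
from dataclasses import dataclass, field

@dataclass
class Cloud:
    name: str                 # e.g. "Nimbus", "OpenNebula", "EC2"
    slots: int                # free VM slots on this cloud
    running: list = field(default_factory=list)

    def boot_vm(self, image_url: str) -> bool:
        """Boot one VM from the given image if a slot is free."""
        if self.slots > 0:
            self.slots -= 1
            self.running.append(image_url)
            return True
        return False

def schedule_pass(waiting_jobs, clouds):
    """One pass: boot a VM for each waiting job on the first cloud with
    capacity. Booted VMs would then join the central Condor pool and
    the job scheduler would dispatch jobs to them."""
    for job in waiting_jobs:
        for cloud in clouds:
            if cloud.boot_vm(job["vm_image"]):
                print(f"job {job['id']}: booted VM on {cloud.name}")
                break
        else:
            print(f"job {job['id']}: no capacity, job stays queued")

jobs = [{"id": 1, "vm_image": "http://repo.example.org/atlas.img"},
        {"id": 2, "vm_image": "http://repo.example.org/babar.img"}]
schedule_pass(jobs, [Cloud("Nimbus", 1), Cloud("EC2", 0)])
```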


arXiv: Distributed, Parallel, and Cluster Computing | 2014

Dynamic web cache publishing for IaaS clouds using Shoal

Ian Gable; Michael Chester; Patrick Armstrong; F. Berghaus; Andre Charbonneau; Colin Leavett-Brown; Michael Paterson; Robert Prior; Randall Sobie; Ryan Taylor

We have developed a highly scalable application, called Shoal, for tracking and utilizing a distributed set of HTTP web caches. Squid servers advertise their existence to the Shoal server via AMQP messaging by running the Shoal Agent. The Shoal server provides a simple REST interface that allows clients to determine their closest Squid cache. Our goal is to dynamically instantiate Squid caches on IaaS clouds in response to client demand; Shoal provides VMs on IaaS clouds with the location of the nearest dynamically instantiated Squid cache. In this paper, we describe the design and performance of Shoal.
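As a rough sketch of the client side, a worker VM might query the Shoal server's REST interface and point its HTTP proxy at the result. The endpoint path and the JSON response shape below are assumptions for illustration, not Shoal's documented API.

```python
# Hypothetical client of the Shoal REST interface; endpoint and
# response fields are assumptions.
import json
import urllib.request

SHOAL_URL = "http://shoal.example.org/nearest"   # placeholder endpoint

def nearest_squid_proxy():
    """Ask the Shoal server for the Squid caches closest to this client
    and return the best one as an http proxy URL."""
    with urllib.request.urlopen(SHOAL_URL) as resp:
        caches = json.load(resp)
    best = caches["0"]   # assumed shape: rank -> cache record
    return f"http://{best['hostname']}:{best['squid_port']}"

if __name__ == "__main__":
    print("export http_proxy=" + nearest_squid_proxy())
```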


Proceedings of SPIE | 2010

CANFAR: the Canadian Advanced Network for Astronomical Research

Severin J. Gaudet; Norman R. Hill; Patrick Armstrong; Nick Ball; Jeff Burke; Brian Chapel; Ed Chapin; Adrian Damian; Pat Dowler; Ian Gable; Sharon Goliath; Isabella Ghiurea; Sébastien Fabbro; Stephen Gwyn; Dustin Jenkins; J. J. Kavelaars; Brian Major; John Ouellette; M Paterson; Michael T. Peddle; Duncan Penfold-Brown; Chris Pritchet; David Schade; Randall Sobie; David Woods; Alinga Yeung; Yuehai Zhang

The Canadian Advanced Network For Astronomical Research (CANFAR) is a two-and-a-half-year project delivering a network-enabled platform for the access, processing, storage, analysis, and distribution of very large astronomical datasets. The CANFAR infrastructure is being implemented as an International Virtual Observatory Alliance (IVOA) compliant web service infrastructure. A challenging feature of the project is to channel all survey data through Canadian research cyberinfrastructure. Behind the portal service, the internal architecture makes use of high-speed networking, cloud computing, cloud storage, meta-scheduling, provisioning and virtualisation. This paper describes the high-level architecture and the current state of the project.


arXiv: Distributed, Parallel, and Cluster Computing | 2012

Data intensive high energy physics analysis in a distributed cloud

Andre Charbonneau; A Agarwal; M Anderson; Patrick Armstrong; K Fransham; Ian Gable; D Harris; Roger Impey; Colin Leavett-Brown; Michael Paterson; Wayne Podaima; Randall Sobie; M Vliet

We show that distributed Infrastructure-as-a-Service (IaaS) compute clouds can be effectively used for the analysis of high energy physics data. We have designed a distributed cloud system that works with any application using large input data sets requiring a high throughput computing environment. The system uses IaaS-enabled science and commercial clusters in Canada and the United States. We describe the process by which a user prepares an analysis virtual machine (VM) and submits batch jobs to a central scheduler. The system boots the user-specific VM on one of the IaaS clouds, runs the jobs and returns the output to the user. The user application accesses a central database for calibration data during execution. The input data is likewise kept in a central location and streamed by the running application. The system can easily run one hundred simultaneous jobs efficiently and should scale to many hundreds, and possibly thousands, of user jobs.
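The streaming access pattern described above might look roughly like the following sketch: the job reads the remote file in chunks as it processes them rather than staging the whole dataset to the VM. The data URL and chunked framing are illustrative assumptions, not the experiment's actual I/O layer.

```python
# Illustrative streaming read of centrally located data; a real HEP job
# would decode event records rather than just counting bytes.
import urllib.request

DATA_URL = "http://data.example.org/run42/events.dat"   # central store

def stream_chunks(url, chunk_bytes=1 << 20):
    """Yield the remote file in 1 MB chunks instead of staging it."""
    with urllib.request.urlopen(url) as resp:
        while True:
            chunk = resp.read(chunk_bytes)
            if not chunk:
                break
            yield chunk

total = 0
for chunk in stream_chunks(DATA_URL):
    total += len(chunk)   # analysis of the decoded events would go here
print(f"streamed {total} bytes")
```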


Journal of Physics: Conference Series | 2008

BaBar MC production on the Canadian grid using a web services approach

A Agarwal; Patrick Armstrong; Ronald J. Desmarais; Ian Gable; S Popov; Simon Ramage; S Schaffer; C Sobie; Randall Sobie; T Sulivan; Daniel C. Vanderster; Gabriel Mateescu; Wayne Podaima; Andre Charbonneau; Roger Impey; M Viswanathan; Darcy Quesnel

This paper highlights the approach used to design and implement a web-services-based BaBar Monte Carlo (MC) production grid using Globus Toolkit version 4. The grid integrates the resources of two clusters at the University of Victoria, using the ClassAd mechanism provided by the Condor-G metascheduler. Each cluster uses the Portable Batch System (PBS) as its local resource management system (LRMS). Resource brokering is provided by the Condor matchmaking process, whereby the job and resource attributes are expressed as ClassAds. The important features of the grid are the automatic registration of resource ClassAds with the central registry, the extraction of ClassAds from the registry to the metascheduler for matchmaking, and the incorporation of input/output file staging. Web-based monitoring is employed to track the status of grid resources and jobs for efficient operation of the grid. The performance of this new grid for BaBar jobs is found to be consistent with that of the existing Canadian computational grid (GridX1), which is based on Globus Toolkit version 2.
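The matchmaking idea is easiest to see in miniature. The following toy sketch mimics symmetric ClassAd matching with plain Python dicts and predicates; the real ClassAd language, its expression syntax, and the actual cluster attributes all differ.

```python
# Toy model of symmetric ClassAd matchmaking: a job ad and a resource
# ad match when each side's Requirements hold against the other's
# attributes. Names and attributes are illustrative only.
job_ad = {
    "Owner": "babar",
    "RequestMemoryMB": 2048,
    "Requirements": lambda res: res["LRMS"] == "PBS"
                                and res["FreeMemoryMB"] >= 2048,
}

resource_ads = [
    {"Name": "uvic-cluster-1", "LRMS": "PBS", "FreeMemoryMB": 4096,
     "Requirements": lambda job: job["Owner"] in ("babar", "atlas")},
    {"Name": "uvic-cluster-2", "LRMS": "SGE", "FreeMemoryMB": 8192,
     "Requirements": lambda job: True},
]

def matches(job, res):
    """Both sides' Requirements must evaluate true against the other ad."""
    return job["Requirements"](res) and res["Requirements"](job)

for res in resource_ads:
    if matches(job_ad, res):
        print("match:", res["Name"])   # -> uvic-cluster-1
```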


high performance computing systems and applications | 2007

The GridX1 computational Grid: from a set of service-specific protocols to a service-oriented approach

Gabriel Mateescu; Wayne Podaima; Andre Charbonneau; Roger Impey; Meera Viswanathan; A Agarwal; Patrick Armstrong; Ronald J. Desmarais; Ian Gable; Sergey Popov; Simon Ramage; Randall Sobie; Daniel C. Vanderster; Darcy Quesnel

GridX1 is a computational grid designed and built to link resources at a number of research institutions across Canada. Building upon the experience of designing, deploying and operating the first generation of GridX1, we have designed a second-generation, Web-services-based computational grid. The second generation of GridX1 leverages the Web Services Resource Framework, implemented by the Globus Toolkit version 4. The value added by GridX1 includes metascheduling, file staging, a resource registry and resource monitoring.


international conference on geoinformatics | 2009

Geospatial data grid with computational capability

Hao Chen; David G. Goodenough; Aimin Guan; Andre Charbonneau; Roger Impey; Wayne Podaima; Randall Sobie; A Agarwal; Belaid Moa; Ian Gable; Patrick Armstrong; Ronald J. Desmarais

Grid-computing technology is used to solve large or complex computational problems on distributed computational resources belonging to multiple organizations, regardless of their geographical locations. In this paper, we introduce an integration of the SAFORAH geospatial data grid, provided by the Canadian Forest Service, with the computational grids offered by the University of Victoria and the National Research Council Canada, using Globus Toolkit 4 (GT-4) as a common grid middleware. New GT-4 services, such as an Integration Service (IS), a Metascheduler Service (MS) and a Registry Service (RS), were designed and implemented. The MS is a broker used to schedule jobs on many distinct computational clusters at the project partner sites. The RS is used to track the availability of resources at these sites. The IS allows the data grid and computational grids to work together, so that users are able not only to access geospatial data directly from the SAFORAH data grid but also to create virtual image products on the fly using the computational grids. The new system is a prime example of sharing data and computational resources and of collaboration between different organizations and research communities. We envisage that our results will be valuable to other projects that wish to combine large-scale data management systems with computational resources.
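As a rough illustration of the Registry Service and Metascheduler Service roles described above, the following sketch models registration and brokering with plain Python. The real services are GT-4 web services; everything here, including the slot-counting model, is a hypothetical stand-in.

```python
# Stand-ins for the RS and MS roles: clusters advertise availability,
# and the metascheduler brokers each job to a site with capacity.
registry = {}   # Registry Service (RS) stand-in: site -> free slots

def register(site, free_slots):
    """A cluster advertises its availability to the registry."""
    registry[site] = free_slots

def metaschedule(job_name, needed_slots=1):
    """Metascheduler Service (MS) stand-in: broker the job to the
    first registered site with enough free capacity."""
    for site, free in registry.items():
        if free >= needed_slots:
            registry[site] -= needed_slots
            return f"{job_name} -> {site}"
    return f"{job_name} -> queued (no site has capacity)"

register("uvic-cluster", 10)
register("nrc-cluster", 0)
print(metaschedule("virtual-image-product"))   # -> uvic-cluster
```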


ieee international conference on high performance computing data and analytics | 2009

Service-oriented grid computing for SAFORAH

A Agarwal; Patrick Armstrong; Andre Charbonneau; Hao Chen; Ronald J. Desmarais; Ian Gable; David G. Goodenough; Aimin Guan; Roger Impey; Belaid Moa; Wayne Podaima; Randall Sobie

The SAFORAH project (System of Agents for Forest Observation Research with Advanced Hierarchies) was created to coordinate and streamline the archiving and sharing of large geospatial data sets between various research groups within the Canadian Forest Service, the University of Victoria, and various other academic and government partners. Recently, it has become apparent that the availability of image processing services would improve the utility of the SAFORAH system. We describe a project to integrate SAFORAH with a computational grid using the Globus middleware. We outline a modular design that allows us to incorporate new components and enhances the long-term sustainability of the project, and we report the status of the project, showing how it adds a new capability to SAFORAH and gives researchers a powerful tool for environmental and forestry research.


arXiv: Distributed, Parallel, and Cluster Computing | 2010

Cloud Scheduler: a resource manager for distributed compute clouds

Patrick Armstrong; A Agarwal; A. Bishop; Andre Charbonneau; Ronald J. Desmarais; K. Fransham; Norman R. Hill; Ian Gable; Severin J. Gaudet; Sharon Goliath; Roger Impey; Colin Leavett-Brown; J. Ouellete; Michael Paterson; Chris Pritchet; D. Penfold-Brown; Wayne Podaima; David Schade; Randall Sobie

Collaboration


Dive into Patrick Armstrong's collaborations.

Top Co-Authors

Ian Gable (University of Victoria)
A Agarwal (University of Victoria)
Roger Impey (National Research Council)
Wayne Podaima (National Research Council)
K Fransham (University of Victoria)