Publication


Featured research published by Andre Charbonneau.


Journal of Physics: Conference Series | 2008

Deploying HEP applications using Xen and Globus Virtual Workspaces

A Agarwal; Ronald J. Desmarais; Ian Gable; D Grundy; D Penfold-Brown; R Seuster; Daniel C. Vanderster; Andre Charbonneau; R Enge; Randall Sobie

The deployment of HEP applications in heterogeneous grid environments can be challenging because many of the applications are dependent on specific OS versions and have a large number of complex software dependencies. Virtual machine monitors such as Xen could be used to package HEP applications, complete with their execution environments, to run on resources that do not meet their operating system requirements. Our previous work has shown HEP applications running within Xen suffer little or no performance penalty as a result of virtualization. However, a practical strategy is required for remotely deploying, booting, and controlling virtual machines on a remote cluster. One tool that promises to overcome the deployment hurdles using standard grid technology is the Globus Virtual Workspaces project. We describe strategies for the deployment of Xen virtual machines using Globus Virtual Workspace middleware that simplify the deployment of HEP applications.
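
The deployment cycle described above (stage an image, boot it remotely, wait for the VM to come up) can be summarized in a short sketch. A minimal Python illustration follows; the function names, image URL, and cluster host are hypothetical placeholders, not the actual Globus Virtual Workspaces client API.

import time

# Hypothetical sketch of the remote deploy/boot/control cycle; none of
# these names come from the Globus Virtual Workspaces client itself.
def deploy_workspace(image_url, cluster, memory_mb):
    """Stage a Xen VM image to a remote cluster and request deployment.
    In the real middleware this is a GSI-authenticated web service call."""
    print(f"staging {image_url} on {cluster} with {memory_mb} MB")
    return "workspace-0001"  # opaque handle to the deployed workspace

def wait_until_running(handle, timeout_s=600):
    """Poll the workspace state until the VM has booted or we time out."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        state = "Running"  # a real client would query the workspace service
        if state == "Running":
            return True
        time.sleep(10)
    return False

handle = deploy_workspace(
    "http://repo.example.org/images/slc4-hep.img",  # hypothetical HEP image
    "cluster.example.org",
    memory_mb=2048,
)
if wait_until_running(handle):
    print("execution environment is up; HEP jobs can now be dispatched")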


scientific cloud computing | 2013

HTC scientific computing in a distributed cloud environment

Randall Sobie; A Agarwal; Ian Gable; Colin Leavett-Brown; Michael Paterson; Ryan Taylor; Andre Charbonneau; Roger Impey; Wayne Podaima

This paper describes the use of a distributed cloud computing system for high-throughput computing (HTC) scientific applications. The distributed cloud computing system is composed of a number of separate Infrastructure-as-a-Service (IaaS) clouds that are utilized in a unified infrastructure. The distributed cloud has been in production-quality operation for two years, completing approximately 500,000 jobs; a typical workload has 500 simultaneous embarrassingly parallel jobs that run for approximately 12 hours each. We review the design and implementation of the system, which is based on pre-existing components and a number of custom components. We discuss the operation of the system and describe our plans for expansion to more sites and increased computing capacity.
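
As a back-of-envelope consistency check of the quoted workload figures (the utilization estimate below is ours, not a number from the paper):

slots = 500                     # typical number of simultaneous jobs
job_hours = 12                  # typical job length
jobs_per_slot_per_day = 24 / job_hours
days = 2 * 365                  # two years of production operation
capacity = slots * jobs_per_slot_per_day * days
print(f"theoretical capacity: {capacity:,.0f} jobs")  # 730,000
# ~500,000 completed jobs against this ceiling suggests roughly 70%
# sustained utilization, plausible for a shared production system.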


Journal of Physics: Conference Series | 2011

A batch system for HEP applications on a distributed IaaS cloud

Ian Gable; A Agarwal; M Anderson; Patrick Armstrong; K Fransham; D Harris; C Leavett-Brown; M Paterson; D Penfold-Brown; Randall Sobie; M Vliet; Andre Charbonneau; Roger Impey; Wayne Podaima

The emergence of academic and commercial Infrastructure-as-a-Service (IaaS) clouds is opening access to new resources for the HEP community. In this paper we describe a system we have developed for creating a single dynamic batch environment spanning multiple IaaS clouds of different types (e.g. Nimbus, OpenNebula, Amazon EC2). A HEP user interacting with the system submits a job description file with a pointer to their VM image. VM images can either be created by users directly or provided to the users. We have created a new software component called Cloud Scheduler that detects waiting jobs and boots the required user VM on any one of the available cloud resources. As the user VMs appear, they are attached to the job queues of a central Condor job scheduler, which then submits the jobs to the VMs. The number of VMs available to the user is expanded and contracted dynamically depending on the number of user jobs. We present the motivation and design of the system, with particular emphasis on Cloud Scheduler, and show that the system provides the ability to exploit academic and commercial cloud sites in a transparent fashion.
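
The job description file mentioned above pairs an ordinary batch job with a pointer to the user's VM image. A minimal sketch follows; the +VM* attribute names and the image URL are assumptions modeled on the paper's description, not necessarily Cloud Scheduler's exact vocabulary.

import textwrap

# Hypothetical Condor submit description: a vanilla-universe job plus
# custom attributes telling the scheduler which VM image the job needs.
submit = textwrap.dedent("""\
    Universe   = vanilla
    Executable = analysis.sh
    Output     = analysis.out
    Error      = analysis.err
    Log        = analysis.log
    +VMType    = "hep-analysis"
    +VMLoc     = "http://repo.example.org/images/hep-analysis.img"
    Queue
""")
with open("analysis.sub", "w") as f:
    f.write(submit)
# On a real submit host one would now run: condor_submit analysis.sub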


Journal of Physics: Conference Series | 2010

Research computing in a distributed cloud environment

K Fransham; A Agarwal; Patrick Armstrong; A Bishop; Andre Charbonneau; Ronald J. Desmarais; N Hill; Ian Gable; S Gaudet; S Goliath; Roger Impey; Colin Leavett-Brown; J Ouellete; M Paterson; Chris Pritchet; D Penfold-Brown; Wayne Podaima; D Schade; Randall Sobie

The recent increase in availability of Infrastructure-as-a-Service (IaaS) computing clouds provides a new way for researchers to run complex scientific applications. However, using cloud resources for a large number of research jobs requires significant effort and expertise. Furthermore, running jobs on many different clouds presents even more difficulty. In order to make it easy for researchers to deploy scientific applications across many cloud resources, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. In response to a user's job submission to a batch system, the Cloud Scheduler manages the distribution and deployment of user-customized virtual machines across multiple clouds. We describe the motivation for and implementation of a distributed cloud using the Cloud Scheduler that is spread across both commercial and dedicated private sites, and present some early results of scientific data analysis using the system.
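
The core of such a VM resource manager is a reconcile loop: compare queued demand against booted supply per VM type, then boot or retire VMs to match. A minimal sketch, with hypothetical helper functions standing in for the batch-queue and IaaS API queries:

def idle_jobs_by_vm_type():
    """Snapshot of queued jobs per required VM type (placeholder data)."""
    return {"hep-analysis": 40}

def running_vms_by_vm_type():
    """Snapshot of booted VMs per type across all clouds (placeholder data)."""
    return {"hep-analysis": 25}

def boot_vm(vm_type):
    print(f"booting one {vm_type} VM on the least-loaded cloud")

def retire_vm(vm_type):
    print(f"retiring one idle {vm_type} VM")

def reconcile():
    idle = idle_jobs_by_vm_type()
    running = running_vms_by_vm_type()
    for vm_type in set(idle) | set(running):
        demand, supply = idle.get(vm_type, 0), running.get(vm_type, 0)
        if demand > supply:
            boot_vm(vm_type)        # expand toward demand
        elif demand == 0 and supply > 0:
            retire_vm(vm_type)      # contract when the queue drains

reconcile()  # a real manager would call this periodically in a loop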


high performance computing systems and applications | 2008

SpectroGrid: Providing Simple Secure Remote Access to Scientific Instruments

Andre Charbonneau; Victor Terskikh

With the availability of high-performance networks and the growing number of online scientific instruments, remote instrumentation has attracted considerable attention lately, promising better instrument utilization, easier collaboration between distant organizations, and a reduction in travel-related costs and overhead. At NRC we needed a simple and secure method for researchers to remotely access nuclear magnetic resonance (NMR) instruments located at the National Ultrahigh-Field NMR Facility for Solids for data acquisition and visualization purposes. This paper discusses the design and implementation of SpectroGrid, a simple remote instrumentation solution based on open-source technologies. VNC (virtual network computing) is used as the remote control implementation, and security is provided by the grid security infrastructure (GSI) and secure shell (SSH). We also discuss the cost-saving potential of SpectroGrid for the Canadian research community. SpectroGrid is currently being used by Canadian researchers to remotely access NMR instruments located at NRC in Ottawa.
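
One standard way to realize the VNC-over-SSH part of this design is to forward the instrument console's VNC port through an SSH tunnel and point a local viewer at it. A sketch follows; the host name is hypothetical, and the GSI layer used by SpectroGrid is not reproduced here.

import subprocess

# Forward local port 5901 to the VNC server on the facility gateway.
tunnel = subprocess.Popen([
    "ssh", "-N",                       # no remote command, tunnel only
    "-L", "5901:localhost:5901",       # local 5901 -> remote VNC display :1
    "user@nmr-gateway.example.org",    # hypothetical facility host
])
# In another terminal: vncviewer localhost:5901
# tunnel.terminate() closes the session when finished.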


arXiv: Distributed, Parallel, and Cluster Computing | 2014

Dynamic web cache publishing for IaaS clouds using Shoal

Ian Gable; Michael Chester; Patrick Armstrong; F. Berghaus; Andre Charbonneau; Colin Leavett-Brown; Michael Paterson; Robert Prior; Randall Sobie; Ryan Taylor

We have developed a highly scalable application, called Shoal, for tracking and utilizing a distributed set of HTTP web caches. Squid servers advertise their existence to the Shoal server via AMQP messaging by running the Shoal Agent. The Shoal server provides a simple REST interface that allows clients to determine their closest Squid cache. Our goal is to dynamically instantiate Squid caches on IaaS clouds in response to client demand. Shoal provides the VMs on IaaS clouds with the location of the nearest dynamically instantiated Squid cache. In this paper, we describe the design and performance of Shoal.
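
The two interactions described above (agents advertising over AMQP, clients querying a REST endpoint) can be sketched with the pika and requests libraries. The exchange name, message fields, and REST path below are assumptions, not Shoal's actual wire format.

import json
import pika
import requests

# Agent side: advertise this Squid server's existence to the Shoal server.
conn = pika.BlockingConnection(pika.ConnectionParameters("shoal.example.org"))
channel = conn.channel()
channel.exchange_declare(exchange="shoal", exchange_type="topic")
channel.basic_publish(
    exchange="shoal",                   # hypothetical exchange name
    routing_key="squid.heartbeat",
    body=json.dumps({"hostname": "squid1.example.org", "load": 0.2}),
)
conn.close()

# Client side: ask the REST interface for the closest Squid cache.
resp = requests.get("http://shoal.example.org/nearest")  # hypothetical path
print(resp.json())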


arXiv: Distributed, Parallel, and Cluster Computing | 2012

Data intensive high energy physics analysis in a distributed cloud

Andre Charbonneau; A Agarwal; M Anderson; Patrick Armstrong; K Fransham; Ian Gable; D Harris; Roger Impey; Colin Leavett-Brown; Michael Paterson; Wayne Podaima; Randall Sobie; M Vliet

We show that distributed Infrastructure-as-a-Service (IaaS) compute clouds can be effectively used for the analysis of high energy physics data. We have designed a distributed cloud system that works with any application using large input data sets requiring a high throughput computing environment. The system uses IaaS-enabled science and commercial clusters in Canada and the United States. We describe the process in which a user prepares an analysis virtual machine (VM) and submits batch jobs to a central scheduler. The system boots the user-specific VM on one of the IaaS clouds, runs the jobs and returns the output to the user. The user application accesses a central database for calibration data during the execution of the application. Similarly, the data is located in a central location and streamed by the running application. The system can easily run one hundred simultaneous jobs in an efficient manner and should scale to many hundreds and possibly thousands of user jobs.
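
The "data in a central location, streamed by the running application" pattern amounts to chunked remote reads, so a job never needs the full data set on local disk. A minimal sketch with the requests library; the URL is hypothetical.

import requests

url = "http://data.example.org/hep/run12345.data"  # hypothetical central store
total = 0
with requests.get(url, stream=True) as resp:
    resp.raise_for_status()
    for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MiB per read
        total += len(chunk)   # the analysis code would consume chunk here
print(f"streamed {total} bytes without staging the file locally")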


canadian conference on electrical and computer engineering | 2003

Remote services of spectroscopy instruments using grid computing

Mohamed Ahmed; Andre Charbonneau; R. Haria; Roger Impey; Gabriel Mateescu; D. Quesnel

Grid computing supports sharing of distributed resources across boundaries and authorization domains, and can help scientists by providing secure, transparent and easy access to computing resources. We have designed and developed grid-based services for accessing, visualizing, and reliably manipulating remote spectrometry instruments. We harness grid-computing technologies to allow scientists to concentrate on performing experiments and data analysis, without having to become experts in the software technologies enabling these activities.


Journal of Physics: Conference Series | 2008

BaBar MC production on the Canadian grid using a web services approach

A Agarwal; Patrick Armstrong; Ronald J. Desmarais; Ian Gable; S Popov; Simon Ramage; S Schaffer; C Sobie; Randall Sobie; T Sullivan; Daniel C. Vanderster; Gabriel Mateescu; Wayne Podaima; Andre Charbonneau; Roger Impey; M Viswanathan; Darcy Quesnel

The present paper highlights the approach used to design and implement a web-services-based BaBar Monte Carlo (MC) production grid using Globus Toolkit version 4. The grid integrates the resources of two clusters at the University of Victoria, using the ClassAd mechanism provided by the Condor-G metascheduler. Each cluster uses the Portable Batch System (PBS) as its local resource management system (LRMS). Resource brokering is provided by the Condor matchmaking process, whereby the job and resource attributes are expressed as ClassAds. The important features of the grid are the automatic registration of resource ClassAds with the central registry, the extraction of ClassAds from the registry to the metascheduler for matchmaking, and the incorporation of input/output file staging. Web-based monitoring is employed to track the status of grid resources and jobs for efficient operation of the grid. The performance of this new grid for BaBar jobs is found to be consistent with that of the existing Canadian computational grid (GridX1), which is based on Globus Toolkit version 2.
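
The ClassAd matchmaking at the heart of this grid pairs job requirements with resource attributes. The toy Python stand-in below illustrates the idea only; it is not Condor's actual ClassAd evaluator, and all names in it are invented.

# Toy matchmaker: a job ad carries a requirements predicate, resource ads
# carry attributes, and matchmaking selects the compatible resources.
job_ad = {
    "Owner": "babar-mc",
    "Requirements": lambda r: r["OpSys"] == "LINUX" and r["Memory"] >= 1024,
}
resource_ads = [
    {"Name": "uvic-cluster-1", "OpSys": "LINUX", "Memory": 2048},
    {"Name": "uvic-cluster-2", "OpSys": "LINUX", "Memory": 512},
]
matches = [r["Name"] for r in resource_ads if job_ad["Requirements"](r)]
print(matches)  # ['uvic-cluster-1']: the metascheduler routes the job here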


high performance computing systems and applications | 2007

The GridX1 computational Grid: from a set of service-specific protocols to a service-oriented approach

Gabriel Mateescu; Wayne Podaima; Andre Charbonneau; Roger Impey; Meera Viswanathan; A Agarwal; Patrick Armstrong; Ronald J. Desmarais; Ian Gable; Sergey Popov; Simon Ramage; Randall Sobie; Daniel C. Vanderster; Darcy Quesnel

GridX1 is a computational grid designed and built to link resources at a number of research institutions across Canada. Building upon the experience of designing, deploying and operating the first generation of GridX1, we have designed a second-generation, Web-services-based computational grid. The second generation of GridX1 leverages the Web Services Resource Framework, implemented by the Globus Toolkit version 4. The value added by GridX1 includes metascheduling, file staging, resource registry and resource monitoring.

Collaboration


Dive into Andre Charbonneau's collaborations.

Top Co-Authors

Ian Gable
University of Victoria

A Agarwal
University of Victoria

Roger Impey
National Research Council

Wayne Podaima
National Research Council