Publications


Featured research published by Colin Leavett-Brown.


Scientific Cloud Computing | 2013

HTC scientific computing in a distributed cloud environment

Randall Sobie; A Agarwal; Ian Gable; Colin Leavett-Brown; Michael Paterson; Ryan Taylor; Andre Charbonneau; Roger Impey; Wayne Podaima

This paper describes the use of a distributed cloud computing system for high-throughput computing (HTC) scientific applications. The distributed cloud computing system is composed of a number of separate Infrastructure-as-a-Service (IaaS) clouds that are utilized in a unified infrastructure. The distributed cloud has been in production-quality operation for two years with approximately 500,000 completed jobs; a typical workload has 500 simultaneous embarrassingly parallel jobs that run for approximately 12 hours. We review the design and implementation of the system, which is based on pre-existing components and a number of custom components. We discuss the operation of the system, and describe our plans for the expansion to more sites and increased computing capacity.


Journal of Physics: Conference Series | 2010

Research computing in a distributed cloud environment

K Fransham; A Agarwal; Patrick Armstrong; A Bishop; Andre Charbonneau; Ronald J. Desmarais; N Hill; Ian Gable; S Gaudet; S Goliath; Roger Impey; Colin Leavett-Brown; J Ouellete; M Paterson; Chris Pritchet; D Penfold-Brown; Wayne Podaima; D Schade; Randall Sobie

The recent increase in availability of Infrastructure-as-a-Service (IaaS) computing clouds provides a new way for researchers to run complex scientific applications. However, using cloud resources for a large number of research jobs requires significant effort and expertise. Furthermore, running jobs on many different clouds presents even more difficulty. In order to make it easy for researchers to deploy scientific applications across many cloud resources, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. In response to a user's job submission to a batch system, the Cloud Scheduler manages the distribution and deployment of user-customized virtual machines across multiple clouds. We describe the motivation for and implementation of a distributed cloud using the Cloud Scheduler that is spread across both commercial and dedicated private sites, and present some early results of scientific data analysis using the system.
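
To make the submission step concrete, the sketch below shows how a user-customized VM might be requested alongside an ordinary batch job. The "+VMImage" attribute, the file names, and the use of HTCondor's condor_submit are assumptions made for illustration; the paper does not specify these details.

    import subprocess
    import tempfile

    # Hypothetical submit description: the "+VMImage" attribute is an
    # illustrative stand-in for however Cloud Scheduler is told which
    # user-customized VM image to boot; it is not taken from the paper.
    SUBMIT_LINES = [
        'executable = analysis.sh',
        'arguments  = run_$(Process).cfg',
        'output     = job_$(Process).out',
        'error      = job_$(Process).err',
        'log        = jobs.log',
        '+VMImage   = "my-analysis-vm"',
        'queue 100',
    ]

    def submit_jobs() -> None:
        """Write the description to a temporary file and hand it to the batch system."""
        with tempfile.NamedTemporaryFile("w", suffix=".sub", delete=False) as f:
            f.write("\n".join(SUBMIT_LINES) + "\n")
            path = f.name
        # condor_submit is the standard HTCondor submission command; the batch
        # system in the paper is assumed here to be HTCondor-compatible.
        subprocess.run(["condor_submit", path], check=True)

    if __name__ == "__main__":
        submit_jobs()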


arXiv: Distributed, Parallel, and Cluster Computing | 2014

Dynamic web cache publishing for IaaS clouds using Shoal

Ian Gable; Michael Chester; Patrick Armstrong; F. Berghaus; Andre Charbonneau; Colin Leavett-Brown; Michael Paterson; Robert Prior; Randall Sobie; Ryan Taylor

We have developed a highly scalable application, called Shoal, for tracking and utilizing a distributed set of HTTP web caches. Squid servers advertise their existence to the Shoal server via AMQP messaging by running Shoal Agent. The Shoal server provides a simple REST interface that allows clients to determine their closest Squid cache. Our goal is to dynamically instantiate Squid caches on IaaS clouds in response to client demand. Shoal provides the VMs on IaaS clouds with the location of the nearest dynamically instantiated Squid cache. In this paper, we describe the design and performance of Shoal.
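
As a rough client-side illustration, the sketch below queries a Shoal server's REST interface for its nearest Squid caches and points HTTP traffic at the first one. The host name, endpoint path, and JSON layout are assumptions for the example rather than the documented Shoal API.

    import json
    import os
    import urllib.request

    # Hypothetical Shoal server endpoint; the deployed URL and the exact JSON
    # returned by the REST interface may differ.
    SHOAL_NEAREST_URL = "http://shoal.example.org/nearest"

    def nearest_squid() -> str:
        """Ask the Shoal server for the closest Squid caches and return a proxy URL."""
        with urllib.request.urlopen(SHOAL_NEAREST_URL, timeout=10) as resp:
            caches = json.load(resp)   # assumed: a list of cache records, closest first
        best = caches[0]
        return "http://{}:{}".format(best["hostname"], best.get("port", 3128))

    if __name__ == "__main__":
        proxy = nearest_squid()
        # Point subsequent HTTP traffic (e.g. software or conditions downloads) at the cache.
        os.environ["http_proxy"] = proxy
        print("Using Squid cache at", proxy)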


arXiv: Distributed, Parallel, and Cluster Computing | 2012

Data intensive high energy physics analysis in a distributed cloud

Andre Charbonneau; A Agarwal; M Anderson; Patrick Armstrong; K Fransham; Ian Gable; D Harris; Roger Impey; Colin Leavett-Brown; Michael Paterson; Wayne Podaima; Randall Sobie; M Vliet

We show that distributed Infrastructure-as-a-Service (IaaS) compute clouds can be effectively used for the analysis of high energy physics data. We have designed a distributed cloud system that works with any application using large input data sets requiring a high throughput computing environment. The system uses IaaS-enabled science and commercial clusters in Canada and the United States. We describe the process in which a user prepares an analysis virtual machine (VM) and submits batch jobs to a central scheduler. The system boots the user-specific VM on one of the IaaS clouds, runs the jobs and returns the output to the user. The user application accesses a central database for calibration data during the execution of the application. Similarly, the data is located in a central location and streamed by the running application. The system can easily run one hundred simultaneous jobs in an efficient manner and should scale to many hundreds and possibly thousands of user jobs.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2012

Efficient LHC Data Distribution across 100Gbps Networks

Harvey Newman; Artur Barczyk; Azher Mughal; Sandor Rozsa; Ramiro Voicu; I. Legrand; Steven Lo; Dorian Kcira; Randall Sobie; Ian Gable; Colin Leavett-Brown; Yvan Savard; Thomas Tam; Marilyn Hay; Shawn Patrick McKee; Roy Hocket; Ben Meekhof; Sergio Timoteo

During Supercomputing 2012 (SC12), an international team of high energy physicists, computer scientists, and network engineers led by the California Institute of Technology (Caltech), the University of Victoria, and the University of Michigan, together with Brookhaven National Lab, Vanderbilt and other partners, smashed their previous records for data transfers using the latest generation of wide area network circuits. With three 100 gigabit/sec (100 Gbps) wide area network circuits [1] set up by the SCinet, Internet2, CENIC, CANARIE and BCnet, Starlight and US LHCNet network teams, and servers at each of the sites with 40 gigabit Ethernet (40GE) interfaces, the team reached a record transfer rate of 339 Gbps between Caltech, the University of Victoria Computing Center in British Columbia, the University of Michigan, and the Salt Palace Convention Center in Utah. This nearly doubled last year's overall record, and eclipsed the record for a bidirectional transfer on a single link with a data flow of 187 Gbps between Victoria and Salt Lake.


arXiv: Distributed, Parallel, and Cluster Computing | 2018

Federating Distributed Storage For Clouds In ATLAS

F. Berghaus; M. Lassnig; Kevin Casteels; Fabrizio Furano; Alessandro Di Girolamo; Colin Leavett-Brown; Michael Paterson; Ryan Taylor; Rolf Seuster; Randall Sobie; C. Serfon; Reda Tafirout; Marcus Ebert; Fernando Fernandez Galindo

Input data for applications that run in cloud computing centres can be stored at distant repositories, often with multiple copies of the popular data stored at many sites. Locating and retrieving the remote data can be challenging, and we believe that federating the storage can address this problem. A federation would locate the closest copy of the data on the basis of GeoIP information. Currently we are using the dynamic data federation Dynafed, a software solution developed by CERN IT. Dynafed supports several industry standards for connection protocols, such as Amazon's S3 and Microsoft's Azure, as well as WebDAV and HTTP. Dynafed functions as an abstraction layer under which protocol-dependent authentication details are hidden from the user, requiring the user to provide only an X.509 certificate. We have set up an instance of Dynafed and integrated it into the ATLAS data distribution management system. We report on the challenges faced during the installation and integration. We have tested ATLAS analysis jobs submitted by the PanDA production system and report on our first experiences with its operation.
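
The abstraction layer can be pictured with a short client-side sketch: a single HTTP GET against the federation with an X.509 client certificate, letting the server redirect the request to whichever replica it judges closest. The endpoint, certificate paths, and file name are placeholders, and the third-party requests library is used only for illustration.

    import requests  # third-party HTTP library, used here for illustration

    # Placeholder federation endpoint and grid credentials; substitute a real
    # Dynafed URL and a valid X.509 proxy or certificate/key pair.
    FEDERATION_URL = "https://dynafed.example.org/fed/atlas/user/data.root"
    CERT = ("/tmp/x509up_u1000", "/tmp/x509up_u1000")  # (certificate, key)

    def fetch(url: str, dest: str) -> None:
        """Download a file through the federation, following its redirect to the chosen replica."""
        with requests.get(url, cert=CERT, stream=True, allow_redirects=True, timeout=60) as resp:
            resp.raise_for_status()
            with open(dest, "wb") as out:
                for chunk in resp.iter_content(chunk_size=1 << 20):
                    out.write(chunk)

    if __name__ == "__main__":
        fetch(FEDERATION_URL, "data.root")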


Journal of Physics: Conference Series | 2017

Enabling Research Network Connectivity to Clouds with Virtual Router Technology

R Seuster; K Casteels; Colin Leavett-Brown; M Paterson; Randall Sobie

The use of opportunistic cloud resources by HEP experiments has significantly increased over the past few years. Clouds that are owned or managed by the HEP community are connected to the LHCONE network or the research network with global access to HEP computing resources. Private clouds, such as those supported by non-HEP research funds, are generally connected to the international research network; however, commercial clouds are either not connected to the research network or only connect to research sites within their national boundaries. Since research network connectivity is a requirement for HEP applications, we need to find a solution that provides a high-speed connection. We are studying a solution with a virtual router that will address the use case where a commercial cloud has research network connectivity in a limited region. In this situation, we host a virtual router at our HEP site and require that all traffic from the commercial site transit through the virtual router. Although this may increase the network path and also the load on the HEP site, it is a workable solution that would enable the use of the remote cloud for low I/O applications. We are exploring some simple open-source solutions. In this paper, we present the results of our studies and how they will benefit our use of private and public clouds for HEP computing.
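
As a simplified picture of the routing change involved, the sketch below installs routes so that a cloud VM sends research-network traffic via the virtual router, and enables forwarding and NAT on the router itself. The addresses and prefixes are invented for the example, and a production virtual router would typically run a proper routing daemon rather than ad hoc commands.

    import subprocess

    # Invented addresses for the sketch: the virtual router hosted at the HEP
    # site, and example research-network prefixes that should transit it.
    VIRTUAL_ROUTER = "10.0.0.1"
    RESEARCH_PREFIXES = ["198.51.100.0/24", "203.0.113.0/24"]

    def run(cmd: list[str]) -> None:
        """Run a system command, echoing it first (requires root privileges)."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def configure_vm_routes() -> None:
        """On a commercial-cloud VM: route research-network traffic via the virtual router."""
        for prefix in RESEARCH_PREFIXES:
            run(["ip", "route", "add", prefix, "via", VIRTUAL_ROUTER])

    def configure_router() -> None:
        """On the virtual router at the HEP site: forward and NAT the transiting traffic."""
        run(["sysctl", "-w", "net.ipv4.ip_forward=1"])
        run(["iptables", "-t", "nat", "-A", "POSTROUTING", "-o", "eth0", "-j", "MASQUERADE"])

    if __name__ == "__main__":
        configure_vm_routes()  # run configure_router() on the router side instead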


Scientific Cloud Computing | 2014

HEP computing in a context-aware cloud environment

F. Berghaus; Ronald J. Desmarais; Ian Gable; Colin Leavett-Brown; Michael Paterson; Ryan Taylor; Andre Charbonneau; Randall Sobie

This paper describes the use of a distributed cloud computing system for high energy physics (HEP) applications. The system is composed of IaaS clouds integrated into a unified infrastructure that has been in production for over two years. It continues to expand in scale and sites, encompassing more than twenty clouds on three continents. We are prototyping a new context-aware architecture that enables the virtual machines to make connections to both software and data repositories based on geolocation information. The new design will significantly enhance the ability of the system to scale to higher workloads and run data-intensive applications. We review the operation of the production system and describe our work towards a context-aware cloud system.
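
A toy version of the context-aware decision, assuming the VM's coordinates are already known (in production they would come from GeoIP information): pick the repository with the smallest great-circle distance. The repository names and coordinates below are made up for the illustration.

    from math import asin, cos, radians, sin, sqrt

    # Invented repository locations (latitude, longitude in degrees).
    REPOSITORIES = {
        "victoria": (48.46, -123.31),
        "cern":     (46.23, 6.05),
        "chicago":  (41.88, -87.63),
    }

    def haversine_km(a, b) -> float:
        """Great-circle distance between two (lat, lon) points in kilometres."""
        lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
        h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371.0 * asin(sqrt(h))

    def closest_repository(vm_location) -> str:
        """Return the name of the repository nearest to the VM's (lat, lon)."""
        return min(REPOSITORIES, key=lambda name: haversine_km(vm_location, REPOSITORIES[name]))

    if __name__ == "__main__":
        print(closest_repository((47.61, -122.33)))  # e.g. a VM booted near Seattle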


Journal of Physics: Conference Series | 2012

Disk-to-Disk network transfers at 100 Gb/s

Artur Barczyk; Ian Gable; Marilyn Hay; Colin Leavett-Brown; I. Legrand; Kim Lewall; Shawn Patrick McKee; Donald McWilliam; Azher Mughal; Harvey B Newman; Sandor Rozsa; Yvan Savard; Randall Sobie; Thomas Tam; Ramiro Voicu

A 100 Gbps network was established between the California Institute of Technology conference booth at the Supercomputing 2011 conference in Seattle, Washington and the computing center at the University of Victoria in Canada. A circuit was established over the BCNET, CANARIE and Supercomputing (SCinet) networks using dedicated equipment. The small set of servers at the endpoints used a combination of 10GE and 40GE technologies, and SSD drives for data storage. The network and server configurations are discussed. We show that the system was able to achieve disk-to-disk transfer rates of 60 Gbps and memory-to-memory rates in excess of 180 Gbps across the WAN. We discuss the transfer tools, disk configurations, and monitoring tools used in the demonstration.


Journal of Physics: Conference Series | 2010

dCache with tape storage for High Energy Physics applications

A Agarwal; R Enge; K Fransham; E Kolb; Colin Leavett-Brown; D Leske; K Lewall; H Reitsma; E Rempel; Randall Sobie

An interface between dCache and the local Tivoli Storage Manager (TSM) tape storage facility has been developed at the University of Victoria (UVic) for High Energy Physics (HEP) applications. The interface is responsible for transferring data from disk pools to tape and retrieving data from tape to disk pools. It also checks the consistency between the PNFS filename space and the TSM database. The dCache system, consisting of a single admin node with two pool nodes, is configured to have two read pools and one write pool. The pools are attached to the TSM storage, which has a capacity of about 100 TB. This system is being used in production at UVic as part of a Tier A site for BaBar Tau analysis. An independent dCache system is also in production for the storage element (SE) of the ATLAS experiment as part of the Canadian Tier-2 sites. This system does not currently employ a tape storage facility; however, one can be added in the future.
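
To make the disk-to-tape interface concrete, here is a minimal sketch of the kind of copy script such a system could invoke, assuming the TSM command-line client (dsmc) with its archive and retrieve actions; the calling convention and paths are illustrative and not the interface developed at UVic.

    import subprocess
    import sys

    def to_tape(path: str) -> None:
        """Copy a file from a dCache disk pool to TSM-managed tape."""
        subprocess.run(["dsmc", "archive", path], check=True)

    def from_tape(path: str, dest: str) -> None:
        """Restore a file from tape back to a disk-pool location."""
        subprocess.run(["dsmc", "retrieve", path, dest], check=True)

    if __name__ == "__main__":
        # Illustrative calling convention: put <file> | get <file> <dest>
        if len(sys.argv) < 3:
            sys.exit("usage: hsm.py put <file> | get <file> <dest>")
        action = sys.argv[1]
        if action == "put":
            to_tape(sys.argv[2])
        elif action == "get":
            from_tape(sys.argv[2], sys.argv[3])
        else:
            sys.exit("usage: hsm.py put <file> | get <file> <dest>")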

Collaboration


Dive into Colin Leavett-Brown's collaborations.

Top Co-Authors

Ian Gable, University of Victoria
A Agarwal, University of Victoria
Ryan Taylor, University of Victoria
Roger Impey, National Research Council
F. Berghaus, University of Victoria
K Fransham, University of Victoria