Publication


Featured research published by Randall Sobie.


Future Generation Computer Systems | 2009

Resource allocation on computational grids using a utility model and the knapsack problem

Daniel Vanderster; Nikitas J. Dimopoulos; Rafael Parra-Hernandez; Randall Sobie

This work introduces a utility model (UM) for resource allocation on computational grids and formulates the allocation problem as a variant of the 0-1 multichoice multidimensional knapsack problem. The notion of task-option utility is introduced, and it is used to effect allocation policies. We present a variety of allocation policies, which are expressed as functions of metrics that are both intrinsic and external to the task and resources. An external user-defined credit-value metric is shown to allow users to intervene in the allocation of urgent or low-priority tasks. The strategies are evaluated in simulation against random workloads as well as those drawn from real systems. We measure the sensitivity of the UM-derived schedules to variations in the allocation policies and their corresponding utility functions. The UM allocation strategy is shown to allocate resources optimally, in a manner congruent with the chosen policies.
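
The allocation scheme is easy to illustrate. The following Python sketch is illustrative only: the task names, resource demands, utilities, and capacities are invented, and an exhaustive search over a tiny instance stands in for the paper's solution strategies. It selects at most one option per task to maximize total utility subject to multidimensional capacity constraints:

from itertools import product

# Each task offers several options; each option consumes (cpu, memory)
# and yields a utility. All numbers here are invented for illustration.
tasks = {
    "t1": [((2, 1), 5.0), ((4, 2), 8.0)],
    "t2": [((1, 2), 4.0), ((3, 3), 7.0)],
    "t3": [((2, 2), 6.0)],
}
capacity = (5, 4)  # total (cpu, memory) available

def solve(tasks, capacity):
    """Exhaustively pick at most one option per task, maximizing utility."""
    names = list(tasks)
    skip = ((0,) * len(capacity), 0.0)  # "not allocated" option
    option_lists = [tasks[n] + [skip] for n in names]
    best_util, best_choice = 0.0, {}
    for combo in product(*option_lists):
        usage = tuple(sum(opt[0][d] for opt in combo)
                      for d in range(len(capacity)))
        if all(u <= c for u, c in zip(usage, capacity)):
            util = sum(opt[1] for opt in combo)
            if util > best_util:
                best_util = util
                best_choice = {n: opt for n, opt in zip(names, combo)
                               if opt[1] > 0}
    return best_util, best_choice

print(solve(tasks, capacity))

Real grid workloads are far too large for exhaustive search, which is why the paper casts the problem as a 0-1 multichoice multidimensional knapsack and applies dedicated solution strategies.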


Future Generation Computer Systems | 2007

GridX1: A Canadian computational grid

A. Agarwal; Mohamed Ahmed; A. Berman; B. L. Caron; A. Charbonneau; D. Deatrich; R. Desmarais; A. Dimopoulos; I. Gable; L. S. Groer; R. Haria; Roger Impey; L. Klektau; C. Lindsay; Gabriel Mateescu; Q. Matthews; A. Norton; W. Podaima; Darcy Quesnel; Rob Simmonds; Randall Sobie; B. St Arnaud; C. Usher; D. C. Vanderster; M. Vetterli; R. Walker; M. Yuen

The present paper discusses the design and application of GridX1, a computational grid project which uses shared resources at several Canadian research institutions. The infrastructure of GridX1 is built using off-the-shelf Globus Toolkit 2 middleware, a MyProxy credential server, and a resource broker based on Condor-G to manage the distributed computing environment. The broker-based job scheduling and management functionality is exposed as a Globus GRAM job service. Resource brokering is based on the Condor matchmaking mechanism, whereby job and resource attributes are expressed as ClassAds, with the attributes Requirements and Rank used to define, respectively, the constraints and preferences that the matched entity must meet. Various strategies for ranking resources are presented, including an Estimated-Waiting-Time (EWT) algorithm, a throttled load-balancing strategy, and a novel external ranking strategy based on data location. One of the unique features is a mechanism which transparently presents the GridX1 resources as a single compute element to the LHC Computing Grid (LCG), based at the CERN laboratory in Geneva. This interface was used during the ATLAS Data Challenge 2 to federate the Canadian resources into the LCG without the overhead of maintaining separate LCG sites. Further, the BaBar particle physics simulation has been adapted to execute on GridX1, resulting in simplified management of the production. The combination of the throttled EWT and load-balancing strategies with external data ranking was found to be very effective in improving efficiency and reducing the job failure rate.
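
The matchmaking step can be sketched compactly. In the Python fragment below (attribute names and numbers are invented for illustration; GridX1 itself expresses these as Condor ClassAds), a job's Requirements expression filters resources and its Rank expression prefers the shortest estimated waiting time:

# Illustrative sketch of ClassAd-style matchmaking with an
# Estimated-Waiting-Time (EWT) rank, in the spirit of the GridX1 broker.
resources = [
    {"name": "uvic-cluster", "os": "LINUX",   "free_cpus": 12, "ewt_s": 300.0},
    {"name": "nrc-cluster",  "os": "LINUX",   "free_cpus": 2,  "ewt_s": 60.0},
    {"name": "win-farm",     "os": "WINDOWS", "free_cpus": 50, "ewt_s": 10.0},
]

job = {
    # Requirements: hard constraints the resource must satisfy.
    "requirements": lambda r: r["os"] == "LINUX" and r["free_cpus"] >= 1,
    # Rank: prefer the resource with the shortest estimated waiting time.
    "rank": lambda r: -r["ewt_s"],
}

def match(job, resources):
    candidates = [r for r in resources if job["requirements"](r)]
    return max(candidates, key=job["rank"], default=None)

print(match(job, resources)["name"])  # -> "nrc-cluster"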


Nuclear Instruments & Methods in Physics Research Section A-accelerators Spectrometers Detectors and Associated Equipment | 1993

The data acquisition system of the OPAL detector at LEP

John Baines; F. Beck; H. Burckhart; D. G. Charlton; R. Cranfield; G. Crone; P. A. Elcombe; P. Farthouat; C. Fukunaga; N. I. Geddes; C. N. P. Gee; F.X. Gentit; W. Gorn; J. C. Hart; J. C. Hill; S. J. Hillier; B. Holl; R. E. Hughes-Jones; R. Humbert; M. Jimack; R. W. L. Jones; C. Kleinwort; F. Lamarche; P. Le Du; D. Lellouch; Lorne Levinson; A. Martin; J. P. Martin; F. Meijers; R. P. Middleton

This report describes the 1991 implementation of the data acquisition system of the OPAL detector at LEP, including the additional services and infrastructure necessary for its correct and reliable operation. The various tasks in this “on-line” environment are distributed amongst many VME subsystems, workstations and minicomputers which communicate over general-purpose local area networks and special-purpose buses. The tasks include data acquisition, control, monitoring, calibration and event reconstruction. The modularity of both hardware and software facilitates the upgrading of the system to meet new requirements.


Nuclear Instruments & Methods in Physics Research Section A-accelerators Spectrometers Detectors and Associated Equipment | 1987

A uranium scintillator calorimeter with plastic-fibre readout

Michael Albrow; G. Arnison; J. Bunn; D. Clarke; C. Cochet; P. Colas; D. Dallman; J.P. De Brion; B. Denby; E. Eisenhandler; J. Garvey; G. Grayer; D. Hill; M. Krammer; E. Locci; C. Pigot; D. Robinson; I. Siotis; Randall Sobie; F. Szoncso; P. Verrecchia; Tejinder Virdee; Horst D. Wahl; A. Wildish; C.-E. Wulz

We have developed a method for reading out scintillator plates in a compact calorimeter using embedded wavelength-shifting fibres coupled to photomultipliers. A test calorimeter using this technique, with uranium plates as the passive medium, was placed in test beams of 1 to 80 GeV. Results on resolution, uniformity, and electron-pion discrimination are presented, as well as a discussion of compensation (the near-equality of electron and hadron responses).
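
For context (this is the standard calorimeter parametrization, not a result quoted in the abstract): test-beam energy resolutions of this kind are conventionally fitted as

\frac{\sigma(E)}{E} \;=\; \frac{a}{\sqrt{E}} \,\oplus\, b,

where a is the stochastic term, b the constant term, and \oplus denotes addition in quadrature. Compensation, the near-equality of electron and hadron responses, is what keeps the constant term small for hadronic showers.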


Grid Computing | 2006

Metascheduling Multiple Resource Types using the MMKP

Daniel C. Vanderster; Nikitas J. Dimopoulos; Randall Sobie

Grid computing involves the transparent sharing of computational resources of many types by users across large geographic distances. The altruistic nature of many current grid resource contributions does not encourage efficient usage of resources. As grid projects mature, increased resource demands coupled with increased economic interests will introduce a requirement for a metascheduler that improves resource utilization, allows administrators to define allocation policies, and provides an overall quality of service to the grid users. In this work we present one such metascheduling framework, based on the multichoice multidimensional knapsack problem (MMKP). This strategy maximizes overall grid utility by selecting desirable options for each task subject to constraints on multiple resource types. We present the framework for the MMKP metascheduler and discuss a selection of allocation policies and their associated utility functions. The MMKP metascheduler and allocation policies are demonstrated using a grid of processor, storage, and network resources. In particular, a data transfer time metric is incorporated into the utility function in order to prefer task options with the lowest data transfer times. The resulting schedules are shown to be consistent with the defined policies.
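
As one concrete (and invented) example of how a data-transfer-time metric can enter a utility function, the Python sketch below scales a task option's base value down linearly as its estimated transfer time approaches a cutoff, so options close to their data are preferred:

def option_utility(base_value, transfer_time_s, max_transfer_s=3600.0):
    """Scale a task option's value down as its estimated data transfer
    time approaches a cutoff; options past the cutoff get zero utility.
    All constants are illustrative, not taken from the paper."""
    if transfer_time_s >= max_transfer_s:
        return 0.0
    return base_value * (1.0 - transfer_time_s / max_transfer_s)

# An option close to its data is preferred over a distant one.
print(option_utility(10.0, 120.0))   # ~9.67
print(option_utility(10.0, 3000.0))  # ~1.67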


Physics Letters B | 1980

The 6−, T = 1 resonance in 28Si via high-resolution inelastic electron scattering

S. Yen; Randall Sobie; H. Zarek; B.O. Pich; T.E. Drake; C.F. Williamson; S. Kowalski; C.P. Sargent

High-resolution (e,e′) was used to measure the form factor of the 6−, T = 1 resonance in 28Si. The results disagree with previous experimental results and with theoretical calculations. The role of meson-exchange currents in producing the observed quenching of magnetic strength, and the relevance of (e,e′) to other reactions, are briefly discussed.


Physics Letters B | 1982

A high-resolution measurement of the photofission spectrum of 232Th near threshold

J.W. Knowles; W.F. Mills; R.N. King; B.O. Pich; S. Yen; Randall Sobie; L. Watt; T.E. Drake; L.S. Cardman; R.L. Gulbranson

The Chalk River bremsstrahlung monochromator at the University of Illinois Microtron Laboratory has been used with a multiwire fission chamber to measure the photofission cross section of 232Th between 4.95 and 6.76 MeV with a resolution of 12–14 keV. This cross section, which on average increases exponentially with photon energy, shows three plateaus: 5.20 to 5.80 MeV, 5.90 to 6.15 MeV, and above 6.30 MeV. Structure is observed on each plateau. In particular, there appear to be well-resolved peaks, separated by ≈ 100 keV, at 5.50 and 5.60 MeV on the lowest plateau, and other prominent peaks are observed at 5.92, 6.03 and 6.11 MeV on the middle plateau. The closely spaced structure in each region might be interpreted as evidence for the excitation of vibrational states in shallow outer wells of a multiwell potential.


Journal of Physics: Conference Series | 2008

Deploying HEP applications using Xen and Globus Virtual Workspaces

A Agarwal; Ronald J. Desmarais; Ian Gable; D Grundy; D P-Brown; R Seuster; Daniel C. Vanderster; Andre Charbonneau; R Enge; Randall Sobie

The deployment of HEP applications in heterogeneous grid environments can be challenging because many of the applications are dependent on specific OS versions and have a large number of complex software dependencies. Virtual machine monitors such as Xen could be used to package HEP applications, complete with their execution environments, to run on resources that do not meet their operating system requirements. Our previous work has shown that HEP applications running within Xen suffer little or no performance penalty as a result of virtualization. However, a practical strategy is required for deploying, booting, and controlling virtual machines on a remote cluster. One tool that promises to overcome these deployment hurdles using standard grid technology is the Globus Virtual Workspaces project. We describe strategies for the deployment of Xen virtual machines using Globus Virtual Workspace middleware that simplify the deployment of HEP applications.
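
The deployment cycle can be outlined as follows. Every name in this Python sketch is a hypothetical placeholder (it is not the Globus Virtual Workspaces API); it only shows the stage-deploy-run-destroy sequence the paper describes:

def stage_image(site, image):
    # Placeholder: in practice the VM image would be transferred to the
    # site with a grid file-transfer tool such as GridFTP.
    print(f"staging image {image} to {site}")

def request_workspace(site, image, cpus, mem_mb):
    # Placeholder for a workspace-deployment request to the site's
    # workspace factory service; returns a handle to the booted VM.
    print(f"requesting {cpus}-cpu/{mem_mb} MB workspace on {site}")
    return {"site": site, "image": image, "state": "Running"}

def run_application(workspace, command):
    print(f"running '{command}' in the workspace on {workspace['site']}")

def destroy_workspace(workspace):
    print(f"destroying the workspace on {workspace['site']}")

site, image = "cluster.example.org", "sl4-hep-app.img"  # hypothetical names
stage_image(site, image)
ws = request_workspace(site, image, cpus=1, mem_mb=2048)
run_application(ws, "./run_simulation.sh")
destroy_workspace(ws)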


Nuclear Instruments & Methods in Physics Research Section A-accelerators Spectrometers Detectors and Associated Equipment | 2000

Propagation of errors for matrix inversion

M. Lefebvre; R.K. Keeler; Randall Sobie; J. White

A formula is given for the propagation of errors during matrix inversion. Explicit calculations for a 2×2 matrix, using both the formula and a Monte Carlo method, are compared. A prescription is given to determine when a matrix with uncertain elements is sufficiently nonsingular for the calculation of the covariances of the inverted matrix elements to be reliable.
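
The first-order result behind such a formula follows from differentiating AB = I for B = A^{-1} (a standard derivation consistent with the abstract; the paper's exact notation may differ):

\frac{\partial B_{ij}}{\partial A_{kl}} = -\,B_{ik}\,B_{lj}, \qquad B \equiv A^{-1},

so that, for independent uncertainties \sigma_{A_{kl}} on the elements of A,

\sigma^{2}_{B_{ij}} = \sum_{k,l} \left( B_{ik}\,B_{lj} \right)^{2} \sigma^{2}_{A_{kl}}.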


Scientific Cloud Computing | 2013

HTC scientific computing in a distributed cloud environment

Randall Sobie; A Agarwal; Ian Gable; Colin Leavett-Brown; Michael Paterson; Ryan Taylor; Andre Charbonneau; Roger Impey; Wayne Podaima

This paper describes the use of a distributed cloud computing system for high-throughput computing (HTC) scientific applications. The distributed cloud computing system is composed of a number of separate Infrastructure-as-a-Service (IaaS) clouds that are utilized in a unified infrastructure. The distributed cloud has been in production-quality operation for two years, with approximately 500,000 completed jobs; a typical workload has 500 simultaneous embarrassingly parallel jobs that run for approximately 12 hours. We review the design and implementation of the system, which is based on pre-existing components and a number of custom components. We discuss the operation of the system and describe our plans for expansion to more sites and increased computing capacity.
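
A toy illustration of the unified-infrastructure idea (cloud names and slot counts invented, and the real system's scheduling is more involved): embarrassingly parallel jobs are dispatched to whichever IaaS cloud currently has the most free VM slots, so the separate clouds behave as a single system:

clouds = {"cloud-a": 200, "cloud-b": 150, "cloud-c": 150}  # free VM slots

def dispatch(n_jobs, clouds):
    """Place each job on the cloud with the most free slots."""
    placement = {name: 0 for name in clouds}
    free = dict(clouds)
    for _ in range(n_jobs):
        target = max(free, key=free.get)
        if free[target] == 0:
            break  # all clouds full; remaining jobs wait in the queue
        placement[target] += 1
        free[target] -= 1
    return placement

print(dispatch(500, clouds))  # {'cloud-a': 200, 'cloud-b': 150, 'cloud-c': 150}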

Collaboration


Dive into Randall Sobie's collaboration.

Top Co-Authors

Ian Gable
University of Victoria

A Agarwal
University of Victoria

Roger Impey
National Research Council

Ryan Taylor
University of Victoria