Reda Tafirout
TRIUMF
Publications
Featured research published by Reda Tafirout.
New Journal of Physics | 2004
Pietro Antonioli; Richard Tresch Fienberg; F. Fleurot; Y. Fukuda; W. Fulgione; A. Habig; Jaret Heise; A.B. McDonald; C. Mills; T. Namba; Leif J Robinson; K. Scholberg; Michael Schwendener; Roger W. Sinnott; Blake Stacey; Y. Suzuki; Reda Tafirout; C. Vigorito; B. Viren; C.J. Virtue; A. Zichichi
This paper provides a technical description of the SuperNova Early Warning System (SNEWS), an international network of experiments with the goal of providing an early warning of a galactic supernova.
high performance computing systems and applications | 2008
Denice Deatrich; Simon Liu; Chris Payne; Reda Tafirout; Rodney Walker; Andrew Wong; M. C. Vetterli
The ATLAS experiment at the Large Hadron Collider (LHC), located in Geneva, will collect 3 to 4 petabytes (PB, 10^15 bytes) of data for each year of its operation once fully commissioned. Secondary data sets resulting from event reconstruction, reprocessing and calibration will add a further 2.5 PB for each year of data taking. Simulated data sets also require significant resources, nearing 1 PB per year. The data will be distributed worldwide to ten Tier-1 computing centres within the Worldwide LHC Computing Grid (WLCG), which will operate around the clock. One of these centres is hosted at TRIUMF, Canada's National Laboratory for Particle and Nuclear Physics, located in Vancouver, BC. By 2010, the storage capacity at TRIUMF will consist of about 3 PB of disk storage and 2 PB of tape storage. At present, the installed disk capacity is 750 terabytes (TB, 10^12 bytes) and the tape capacity is 560 TB, both using state-of-the-art technology. dCache (www.dcache.org) is used to manage the entire storage system and to provide a common file namespace; it is a highly scalable and configurable solution. In this paper we describe and review the storage infrastructure and configuration currently in place at the TRIUMF Tier-1 centre, for both disk and tape, as well as the management software and tools that have been developed.
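As a rough illustration of the "common file namespace" idea mentioned in the abstract, the following Python sketch models a catalogue that maps logical file paths to replicas on disk pools and tape volumes. All class names, pool labels and the replica-selection rule are invented for the example and do not reflect dCache's actual API or internals.

```python
# Illustrative sketch only: a toy "common namespace" over disk and tape pools,
# in the spirit of what dCache provides at the Tier-1. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Replica:
    medium: str      # "disk" or "tape"
    location: str    # pool or tape-volume identifier

@dataclass
class Namespace:
    entries: dict = field(default_factory=dict)  # logical path -> list of replicas

    def register(self, path: str, medium: str, location: str) -> None:
        self.entries.setdefault(path, []).append(Replica(medium, location))

    def locate(self, path: str) -> Replica:
        # Prefer a disk replica; otherwise fall back to tape (which would
        # trigger a stage-in request in a real HSM-backed system).
        replicas = self.entries[path]
        disk = [r for r in replicas if r.medium == "disk"]
        return disk[0] if disk else replicas[0]

ns = Namespace()
ns.register("/atlas/raw/run1234/file001.root", "disk", "pool-07")
ns.register("/atlas/raw/run1234/file001.root", "tape", "VOL00042")
print(ns.locate("/atlas/raw/run1234/file001.root"))  # disk replica wins
```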
Journal of Physics: Conference Series | 2010
Denice Deatrich; Simon Liu; Reda Tafirout
We describe in this paper the design and implementation of Tapeguy, a high-performance, non-proprietary Hierarchical Storage Management (HSM) system which is interfaced to dCache for efficient tertiary storage operations. The system has been successfully implemented at the Canadian Tier-1 Centre at TRIUMF. The ATLAS experiment will collect a large amount of data (approximately 3.5 petabytes each year). An efficient HSM system will play a crucial role in the success of the ATLAS Computing Model, which is driven by intensive large-scale data analysis activities performed continuously on the Worldwide LHC Computing Grid infrastructure. Tapeguy is Perl-based; it controls and manages data and tape libraries. Its architecture is scalable and includes Dataset Writing control, a Read-back Queuing mechanism and tape-drive I/O load balancing, as well as on-demand allocation of resources. A central MySQL database records metadata for every file and transaction (for auditing and performance evaluation), as well as an inventory of library elements. Tapeguy Dataset Writing was implemented to group files which are close in time and of similar type. Optional dataset path control dynamically allocates tape families and assigns tapes to them. Tape flushing is based on various strategies: time, threshold, or external callback mechanisms. Tapeguy Read-back Queuing reorders all read requests using an elevator algorithm, avoiding unnecessary tape loading and unloading. The implementation of priorities will guarantee file delivery to all clients in a timely manner.
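The read-back reordering described above can be illustrated with a short sketch. The following Python fragment groups pending read requests by tape volume and sweeps each volume in position order, which is the general idea behind an elevator-style scheduler; the request fields and data layout are assumptions for the example and are not Tapeguy's actual Perl implementation.

```python
# Minimal sketch of elevator-style read-back reordering, assuming each request
# records the tape volume it lives on and its position on that tape.
from collections import defaultdict
from typing import NamedTuple

class ReadRequest(NamedTuple):
    filename: str
    tape: str       # tape volume label
    position: int   # file position (block offset) on the tape

def elevator_order(requests: list[ReadRequest]) -> list[ReadRequest]:
    """Group requests by tape, then serve each tape once, sweeping forward in
    position order, so a volume is mounted at most once per pass."""
    by_tape: dict[str, list[ReadRequest]] = defaultdict(list)
    for req in requests:
        by_tape[req.tape].append(req)
    ordered: list[ReadRequest] = []
    for tape in sorted(by_tape):  # visit tapes in a fixed order
        ordered.extend(sorted(by_tape[tape], key=lambda r: r.position))
    return ordered

pending = [
    ReadRequest("evt_0007.root", "VOL002", 310),
    ReadRequest("evt_0001.root", "VOL001", 120),
    ReadRequest("evt_0003.root", "VOL002", 40),
    ReadRequest("evt_0002.root", "VOL001", 15),
]
for req in elevator_order(pending):
    print(req.tape, req.position, req.filename)
```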
high performance computing systems and applications | 2008
Chris Payne; Denice Deatrich; Simon Liu; Steve McDonald; Reda Tafirout; Rodney Walker; Andrew Wong; M. C. Vetterli
Networking at an ATLAS Tier-1 (T1) facility is a demanding aspect which is vital to the overall performance and efficiency of the facility. External connectivity of the facility to other tiers of the LHC Optical Private Network (LHCOPN) is largely via dedicated lightpaths, as required to meet Memorandum of Understanding (MOU) commitments. Our primary dedicated link to CERN has an independent, though smaller-capacity, dedicated backup link for redundancy. Dedicated lightpaths to the Canadian Tier-2 facilities and to the international partner Tier-1 facilities fail over to national and international research networks in the event of failure. The distance from TRIUMF to CERN, and even from TRIUMF to some of its Tier-2 facilities in Canada, is thousands of kilometres. Transferring hundreds of terabytes of data per year over such distances and complex networks requires both dedicated bandwidth and network resiliency. Failure scenarios, including both fail-over and fail-back, must be handled efficiently and, for the most part, automatically. Although modern network routing protocols handle this well, monitoring processes become key to the management of the infrastructure as the complexity of our Tier-1 site connectivity grows. Internal networking efficiency is also vital to the ATLAS computing model. Large data sets are moved from on-site storage to local scratch disks before analysis, and proper network scaling is vital for efficient use of compute nodes. Although 10 Gigabit network infrastructure (routers) is well established, server 10 Gigabit network components and drivers are not as mature as their 1 Gigabit counterparts, and resource issues have been observed during extreme load testing. In this paper, internal facility networking as well as external connectivity issues will be discussed.
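As a minimal illustration of the monitoring idea discussed in the abstract, the sketch below probes a primary and a backup path and reports which one should carry traffic, preferring the primary once it is healthy again (fail-back). The host names, ports and probe endpoints are hypothetical; in practice fail-over is handled by routing protocols, with monitoring layered on top as the paper describes.

```python
# Illustrative monitoring sketch only: check reachability of a primary and a
# backup path and report the preferred one. Endpoints below are placeholders.
import socket

PATHS = {
    "primary-lightpath": ("probe.primary.example.org", 443),
    "backup-lightpath":  ("probe.backup.example.org", 443),
}

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def preferred_path() -> str:
    status = {name: reachable(host, port) for name, (host, port) in PATHS.items()}
    if status["primary-lightpath"]:
        return "primary-lightpath"   # fail-back: primary wins whenever it is healthy
    if status["backup-lightpath"]:
        return "backup-lightpath"    # fail-over: only the backup is up
    return "no path available"

if __name__ == "__main__":
    print("preferred path:", preferred_path())
```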
Journal of Physics: Conference Series | 2008
I Gable; M Bedinelli; S Butterworth; B. Caron; R Chambers; B Fitzgerald; L.S. Groer; R Hatem; V Kupchinsky; P Maddalena; P Marshall; S McDonald; P Mercure; D McWilliam; C Payne; D Pobric; S. H. Robertson; M Rochefort; M Siegert; R. J. Sobie; Reda Tafirout; T Tam; Brigitte Vachon; A. Warburton; G Wu
The ATLAS-Canada computing model consists of a WLCG Tier-1 computing centre located at the TRIUMF Laboratory in Vancouver, Canada, and two distributed Tier-2 computing centres at eastern and western Canadian universities. The TRIUMF Tier-1 is connected to the CERN Tier-0 via a 10G dedicated circuit provided by CANARIE. The Canadian institutions hosting Tier-2 facilities are connected to TRIUMF via 1G lightpaths, and routing between Tier-2s occurs through TRIUMF. This paper discusses the architecture of the ATLAS-Canada network, the challenges of building the network, and future plans.
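The topology described above (Tier-2 to Tier-2 traffic routed through the Tier-1 at TRIUMF) can be captured in a toy model; the site labels for the Tier-2s below are placeholders rather than the actual institutions.

```python
# Toy model of the ATLAS-Canada topology: the Tier-1 at TRIUMF connects to the
# CERN Tier-0, each Tier-2 connects to TRIUMF over a lightpath, and Tier-2 to
# Tier-2 traffic hairpins through TRIUMF. Tier-2 names are placeholders.
LINKS = {
    ("CERN-T0", "TRIUMF-T1"): "10G dedicated circuit",
    ("TRIUMF-T1", "T2-East"): "1G lightpath",
    ("TRIUMF-T1", "T2-West"): "1G lightpath",
}

def route(src: str, dst: str) -> list[str]:
    """Direct link if one exists; otherwise route through the Tier-1 at TRIUMF."""
    if (src, dst) in LINKS or (dst, src) in LINKS:
        return [src, dst]
    return [src, "TRIUMF-T1", dst]

print(route("T2-East", "T2-West"))    # ['T2-East', 'TRIUMF-T1', 'T2-West']
print(route("CERN-T0", "TRIUMF-T1"))  # ['CERN-T0', 'TRIUMF-T1']
```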
high performance computing systems and applications | 2008
Denice Deatrich; Simon Liu; Christian Payne; Reda Tafirout; Rodney Walker; Andrew Wong
Proceedings of the CERN Workshop on LEP 2 | 1996
Gian Francesco Giudice; W. Majerotto; A. Trombini; D. Dominici; S. Asai; F. Franke; Werner Porod; A. Deandrea; R. Rückl; C. Vander Velde; Stefan Pokorski; S. Katsanevas; Xerxes Tata; G. Montagna; Ferruccio Feruglio; Marta Felcini; C.E.M. Wagner; H. Eberl; Ray B. Munroe; Marcela Carena; A. Bartl; Gautam Bhattacharyya; C. Dionisi; W. De Boer; A. Masiero; G. Burkart; H. Fraas; S. Shevchenko; S. Lola; Jean-Francois Grivaz