Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Rodney Walker is active.

Publication


Featured research published by Rodney Walker.


International Conference on e-Science | 2007

Large-Scale ATLAS Simulated Production on EGEE

X. Espinal; D. Barberis; Kors Bos; S. Campana; L. Goossens; J. Kennedy; Guido Negri; S. Padhi; L. Perini; G. Poulard; David Rebatto; S. Resconi; A. de Salvo; Rodney Walker

In preparation for first data at the LHC, a series of Data Challenges of increasing scale and complexity has been performed. Large quantities of simulated data have been produced on three different Grids, integrated into the ATLAS production system. During 2006, the emphasis moved towards providing stable, continuous production, as is required in the immediate run-up to first data and thereafter. Here we discuss the experience of production on EGEE resources, using submission based on the gLite WMS, CondorG and a system using Condor glide-ins. The overall walltime efficiency of around 90% is largely independent of the submission method, and the dominant source of wasted CPU is data-handling issues. The efficiency of grid job submission is significantly worse than this, and the glide-in method benefits greatly from factorising this out.
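
The walltime efficiency quoted above is essentially the fraction of the wall-clock time consumed on worker nodes that went into useful payloads. A minimal sketch of how such a metric could be computed from per-job accounting records is shown below; the JobRecord fields and the toy numbers are illustrative assumptions, not the actual ATLAS production system schema.

```python
from dataclasses import dataclass

@dataclass
class JobRecord:
    """Hypothetical per-job accounting record (not the real ATLAS schema)."""
    walltime_s: float   # wall-clock time the job occupied on its worker node
    succeeded: bool     # whether the payload finished usefully

def walltime_efficiency(jobs):
    """Fraction of the total consumed walltime spent in successful jobs."""
    total = sum(j.walltime_s for j in jobs)
    useful = sum(j.walltime_s for j in jobs if j.succeeded)
    return useful / total if total else 0.0

# Toy example: nine successful jobs and one failed job of equal length -> 90%.
jobs = [JobRecord(3600, True) for _ in range(9)] + [JobRecord(3600, False)]
print(f"walltime efficiency: {walltime_efficiency(jobs):.0%}")
```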


21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015), Parts 1-9 | 2015

Bringing ATLAS production to HPC resources - A use case with the Hydra supercomputer of the Max Planck Society

J A Kennedy; S Kluth; L Mazzaferro; Rodney Walker

The possible usage of HPC resources by ATLAS is now becoming viable due to the changing nature of these systems, and it is also very attractive due to the need for increasing amounts of simulated data. In recent years the architecture of HPC systems has evolved, moving away from specialized monolithic systems to a more generic Linux-type platform. This change means that the deployment of non-HPC-specific codes has become much easier. The timing of this evolution perfectly suits the needs of ATLAS and opens a new window of opportunity. The ATLAS experiment at CERN will begin a period of high-luminosity data taking in 2015. This high-luminosity phase will be accompanied by a need for increasing amounts of simulated data, which is expected to exceed the capabilities of the current Grid infrastructure. ATLAS aims to address this need by opportunistically accessing resources such as cloud and HPC systems. This paper presents the results of a pilot project undertaken by ATLAS and the MPP/RZG to provide access to the HYDRA supercomputer facility. Hydra, the supercomputer of the Max Planck Society, is a Linux-based machine with over 80000 cores and 4000 physical nodes, located at the RZG near Munich. This paper describes the work undertaken to integrate Hydra into the ATLAS production system using the NorduGrid ARC-CE and other standard Grid components. The customization of these components and the strategies for HPC usage are discussed, as well as possibilities for future directions.
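
In this kind of setup the ARC-CE acts as the gateway between the Grid workflow and the HPC batch system, with jobs described in xRSL and handed over by the standard ARC client. The snippet below is only a rough sketch of that interface under assumed values: the script name, resource numbers and CE hostname are placeholders, not the actual ATLAS/Hydra configuration.

```python
import subprocess
from pathlib import Path

# Minimal xRSL job description with placeholder values (not the real ATLAS setup).
# executable, jobName, count, wallTime, memory, stdout and stderr are standard
# xRSL attributes; the concrete numbers here are purely illustrative.
job_xrsl = """&
 (executable = "run_simulation.sh")
 (jobName = "atlas_sim_demo")
 (count = 16)
 (wallTime = "720 minutes")
 (memory = 2000)
 (stdout = "sim.out")
 (stderr = "sim.err")
"""

Path("job.xrsl").write_text(job_xrsl)

# Submit through the ARC client to a (hypothetical) ARC-CE front end.
subprocess.run(["arcsub", "-c", "arc-ce.example.org", "job.xrsl"], check=True)
```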


Journal of Physics: Conference Series | 2011

ATLAS Muon Calibration Framework

G. Carlino; Alessandro De Salvo; Andrea Di Simone; Alessandra Doria; Manoj Kumar Jha; Luca Mazzaferro; Rodney Walker

Automated calibration of the ATLAS detector subsystems (such as the MDT and RPC chambers) is performed at remote sites, called Remote Calibration Centers. The calibration data for the assigned part of the detector are processed at these centers and the results are sent back to CERN for general use in reconstruction and analysis. In this work we present recent developments in the data discovery mechanism and the integration of Ganga as a backend, which allows for the specification, submission, bookkeeping and post-processing of calibration tasks on a wide set of available heterogeneous resources at the remote centers.
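
Ganga exposes a Python API in which a job is an object whose application and backend are configured before submission, which is what makes it suitable as a common front end to heterogeneous resources. The fragment below is only a sketch of that pattern, intended to be run inside a Ganga session; the executable, arguments and backend choice are assumptions, not the actual muon calibration framework configuration.

```python
# Indicative Ganga-style job definition (run inside a Ganga session, where
# these names are available). Executable, arguments and backend are
# placeholders, not the real calibration task setup.
from ganga import Job, Executable, LCG

j = Job(name="mdt_calibration_demo")
j.application = Executable(exe="calibrate.sh", args=["--chamber", "BML1A01"])
j.backend = LCG()   # submit to a Grid site; a local backend would also work
j.submit()          # Ganga then tracks the status and retrieves the output
```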


Journal of Physics: Conference Series | 2010

ATLAS computing operations within the GridKa Cloud

J Kennedy; C. Serfon; G. Duckeck; Rodney Walker; Andrzej Olszewski; S.K. Nderitu

The organisation and operations model of the ATLAS T1-T2 federation/Cloud associated with the GridKa T1 in Karlsruhe is described. Attention is paid to Cloud-level services and the experience gained during the last years of operation. The ATLAS GridKa Cloud is large and diverse, spanning 5 countries and 2 ROCs, and currently comprises 13 core sites. A well-defined and tested operations model in such a Cloud is of the utmost importance. We have defined the core Cloud services required by the ATLAS experiment and ensured that they are performed in a managed and sustainable manner. Services such as Distributed Data Management (involving data replication, deletion and consistency checks), Monte Carlo production, software installation and data reprocessing are described in greater detail. In addition to providing these central services, we have undertaken several Cloud-level stress tests and developed monitoring tools to aid with Cloud diagnostics. Furthermore, we have defined good channels of communication between ATLAS, the T1 and the T2s, and have proactive contributions from the T2 manpower. A brief introduction to the GridKa Cloud is provided, followed by a more detailed discussion of the operations model and ATLAS services within the Cloud.
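
One of the routine data-management checks mentioned above is the comparison of what the catalogue believes a site holds against what is actually on its storage. The toy sketch below illustrates that idea only; the function name, inputs and file names are invented stand-ins for catalogue and storage dumps, not the ATLAS tooling itself.

```python
def find_inconsistencies(catalogue_files, storage_files):
    """Compare a catalogue dump with a storage dump for one site.

    Returns files registered in the catalogue but missing from storage
    ("lost" files) and files on storage with no catalogue entry ("dark" data).
    Both inputs are plain iterables of logical file names.
    """
    cat = set(catalogue_files)
    sto = set(storage_files)
    return {"lost": sorted(cat - sto), "dark": sorted(sto - cat)}

# Toy example with made-up file names.
catalogue = ["mc08/evgen/file1.root", "mc08/evgen/file2.root"]
storage   = ["mc08/evgen/file2.root", "mc08/evgen/orphan.root"]
print(find_inconsistencies(catalogue, storage))
```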


Journal of Physics: Conference Series | 2017

How to keep the Grid full and working with ATLAS production and physics jobs

A. Pacheco Pages; F H Barreiro Megino; D. Cameron; F. Fassi; Andrej Filipcic; A. Di Girolamo; S. González de la Hoz; I Glushkov; T. Maeno; Rodney Walker; W Yang

The ATLAS production system provides the infrastructure to process the millions of events collected during LHC Run 1 and the first two years of Run 2 using grid, cloud and high-performance computing resources. In this contribution we address the strategies and improvements that have been implemented in the production system to achieve optimal performance and the highest efficiency of the available resources from an operational perspective, with a focus on recent developments.
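
Keeping the Grid full amounts to continuously matching the backlog of defined work to whatever job slots each resource currently has free. The fragment below sketches that idea at a toy level; the site names, slot counts and task queue are invented, and the real brokering in the ATLAS production system is considerably more elaborate.

```python
from collections import deque

# Invented numbers: free job slots currently reported per site.
free_slots = {"SITE_A": 120, "SITE_B": 40, "HPC_X": 300}

# Invented backlog of tasks, each needing some number of single-core jobs.
backlog = deque([("simul_task_1", 200), ("pile_task_2", 150), ("reco_task_3", 80)])

assignments = []
for site, slots in free_slots.items():
    while slots > 0 and backlog:
        task, remaining = backlog[0]
        n = min(slots, remaining)          # fill this site as far as possible
        assignments.append((task, site, n))
        slots -= n
        if remaining == n:
            backlog.popleft()              # task fully dispatched
        else:
            backlog[0] = (task, remaining - n)

for task, site, n in assignments:
    print(f"{n:4d} jobs of {task} -> {site}")
```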


Journal of Physics: Conference Series | 2012

Experience of using the Chirp distributed file system in ATLAS

Rodney Walker; P. Nilsson

Chirp is a user-level file system specifically designed for the wide area network, and developed by the University of Notre Dame CCL group. We describe the design features making it particularly suited to the Grid environment, and to ATLAS use cases. The deployment and usage within ATLAS distributed computing are discussed, together with scaling tests and evaluation for the various use cases.
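
Chirp is distributed as part of the Notre Dame cctools suite; typical use exports a directory via a Chirp server and then moves files with the client tools. The calls below are a hedged sketch of that workflow with hypothetical host and file names; the actual ATLAS deployment details differ.

```python
import subprocess

HOST = "chirp.example.org:9094"   # hypothetical Chirp server (9094 is the default port)

# Upload a local file to the Chirp server, then fetch it back.
# chirp_put and chirp_get are client tools shipped with the cctools suite.
subprocess.run(["chirp_put", "results.root", HOST, "/user/walker/results.root"], check=True)
subprocess.run(["chirp_get", HOST, "/user/walker/results.root", "copy.root"], check=True)
```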


Journal of Physics: Conference Series | 2017

Memory handling in the ATLAS submission system from job definition to sites limits

A. Forti; Rodney Walker; T. Maeno; Peter Love; N. Rauschmayr; Andrej Filipcic; A. Di Girolamo


Archive | 2012

D0 Run II

A. Zylberstejn; V. Zutshi; D. Wood; A. Zabi; S. Trincaz-Duvoid; B. Hoeneisen; A. Yurkewicz; G. Sajot; B. Zhou; T. Zhao; J. Strandberg; H. Zheng; Y. Yen; D. Zhang; M. Zielinski; E. G. Zverev; G.B. Quinn; F. Villeneuve-Seguier; A. Yurkewic; I. Ripp-Baudot; J. Zhu; D. Whiteson; R. Ströhmer; S. Uvarov; R. K. Shivpuri; A. Sznajder; K. Smolek; M. Wegner; K. Yip; A. Sopczak

Collaboration


Dive into Rodney Walker's collaborations.

Top Co-Authors


T. Maeno

Brookhaven National Laboratory


A. de Salvo

Istituto Nazionale di Fisica Nucleare


David Rebatto

Istituto Nazionale di Fisica Nucleare


G. Carlino

University of Naples Federico II


S. Resconi

Istituto Nazionale di Fisica Nucleare


M. Wegner

RWTH Aachen University
