Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Anthony Tiradani is active.

Publication


Featured research published by Anthony Tiradani.


21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015) | 2015

The Diverse use of Clouds by CMS

Anastasios Andronis; Daniela Bauer; Olivier Chaze; David Colling; M. Dobson; Simon Fayer; M. Girone; Claudio Grandi; Adam Huffman; Dirk Hufnagel; Farrukh Aftab Khan; Andrew Lahiff; Alison McCrae; Duncan Rand; Massimo Sgaravatto; Anthony Tiradani; Xiaomei Zhang

The resources CMS is using are increasingly being offered as clouds. In Run 2 of the LHC the majority of CMS CERN resources, both in Meyrin and at the Wigner Computing Centre, will be presented as cloud resources on which CMS will have to build its own infrastructure. This infrastructure will need to run all of the CMS workflows, including Tier 0, production and user analysis. In addition, the CMS High Level Trigger will provide a compute resource comparable in scale to the total offered by the CMS Tier 1 sites, when it is not running as part of the trigger system. During these periods a cloud infrastructure will be overlaid on this resource, making it accessible for general CMS use. Finally, CMS is starting to utilise cloud resources being offered by individual institutes and is gaining experience to facilitate the use of opportunistically available cloud resources. We present a snapshot of this infrastructure and its operation at the time of the CHEP2015 conference.
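The cloud overlay described above amounts to booting virtual machines on the resource and having them join the CMS submission infrastructure when they start. As a minimal sketch only, assuming an OpenStack-style private cloud (the abstract does not name the cloud middleware) and hypothetical image, flavor, and network names, booting such a worker VM with the openstacksdk Python bindings might look like:

```python
# Minimal sketch: boot a worker VM on an OpenStack-style private cloud and let
# its boot-time contextualisation join it to the experiment's batch system.
# The cloud name, image, flavor and network below are hypothetical examples.
import openstack

conn = openstack.connect(cloud="cms-private-cloud")  # credentials from clouds.yaml

image = conn.compute.find_image("cms-worker-image")   # hypothetical image name
flavor = conn.compute.find_flavor("m1.xlarge")        # hypothetical flavor
network = conn.network.find_network("cms-internal")   # hypothetical network

server = conn.compute.create_server(
    name="cms-cloud-worker-001",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the VM is ACTIVE; a pilot startup script baked into the image
# would then register the node with the experiment's workload management system.
server = conn.compute.wait_for_server(server)
print(server.status)
```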


Computing and Software for Big Science | 2017

HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

Burt Holzman; M. Girone; Dirk Hufnagel; Dave Dykstra; Hyunwoo Kim; Steve Timm; Oliver Gutsche; S. Fuess; L. A. T. Bauerdick; Anthony Tiradani; Panagiotis Spentzouris; N. Magini; Brian Bockelman; Eric Wayne Vaandering; Robert Kennedy; D. Mason; G. Garzoglio; I. Fisk

Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. In addition, we discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.
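As an illustrative sketch only, not the HEPCloud facility's actual provisioning code, acquiring Amazon EC2 capacity of the kind described above could be driven through the boto3 Python SDK; the region, AMI, instance type, and key pair below are hypothetical placeholders:

```python
# Illustrative sketch: query EC2 spot prices and launch worker instances with
# boto3. Region, AMI, instance type and key name are hypothetical placeholders;
# the real HEPCloud facility drives provisioning through its own decision layer.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Look at recent spot prices to judge the cost of a candidate instance type.
prices = ec2.describe_spot_price_history(
    InstanceTypes=["m4.xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    MaxResults=5,
)
for entry in prices["SpotPriceHistory"]:
    print(entry["AvailabilityZone"], entry["SpotPrice"])

# Launch a small batch of on-demand workers from a pre-built worker image.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical worker AMI
    InstanceType="m4.xlarge",
    MinCount=1,
    MaxCount=10,
    KeyName="hepcloud-worker",         # hypothetical key pair
)
print([i["InstanceId"] for i in response["Instances"]])
```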


Journal of Physics: Conference Series | 2017

Stability and scalability of the CMS Global Pool: Pushing HTCondor and glideinWMS to new limits

J Balcas; K Larson; F Aftab Khan; M Mascheroni; J Letts; A Perez-Calero Yzquierdo; Anthony Tiradani; K Hurtado Anampa; Dirk Hufnagel; J Marra da Silva; D. R. Mason; B Bockelman

The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 and Tier-2 grid sites pledged to CMS exceed 100,000 CPU cores, while another 50,000 to 100,000 CPU cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. These resources are becoming more diverse in their accessibility and configuration over time. Furthermore, the challenge of stably running at higher and higher scales while introducing new modes of operation such as multi-core pilots, as well as the chaotic nature of physics analysis workflows, places huge strains on the submission infrastructure. This paper details some of the most important challenges to scalability and stability that the CMS Global Pool has faced since the beginning of LHC Run II and how they were overcome.

1. The CMS Global Pool

The CMS Global Pool is a single HTCondor [1] pool covering all Grid computing processing resources pledged to CMS, plus significant Cloud and opportunistic resources. Resource provisioning is performed by a glideinWMS [2] frontend, which contacts several glideinWMS factories in order to submit pilot jobs to sites. Payload jobs are then matched to pilots by the HTCondor Negotiator, which runs as part of the Central Manager of the pool, as can be seen in Figure 1. The other main element of the HTCondor Central Manager is the Collector, which maintains information about the various HTCondor pool daemons described below. The main components of this Global Pool include a glideinWMS frontend and factories, the HTCondor Central Manager and Condor Connection Broker (CCB), deployed in 24-core, 48 GB (RAM) virtual machines (VMs) running on hypervisors with 10 Gbps Ethernet connectivity. Such a setup is deployed at CERN, with an analogous infrastructure for High Availability (HA).
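To make the roles of the Collector and the execute-slot (pilot) ads concrete, the sketch below uses the HTCondor Python bindings to count the cores a pool currently advertises; the collector hostname is a hypothetical placeholder and the snippet is illustrative monitoring, not part of the paper:

```python
# Minimal sketch: ask a pool's Collector how many CPU cores its execute slots
# currently advertise. The collector address is a hypothetical placeholder.
import htcondor

collector = htcondor.Collector("collector.example.cern.ch")

# Each glidein pilot advertises one or more startd (execute slot) ads to the
# Collector; the Negotiator later matches payload jobs against these ads.
slot_ads = collector.query(
    htcondor.AdTypes.Startd,
    projection=["Name", "Cpus", "State"],
)

total_cores = sum(int(ad.get("Cpus", 0)) for ad in slot_ads)
idle_cores = sum(
    int(ad.get("Cpus", 0)) for ad in slot_ads if ad.get("State") == "Unclaimed"
)
print(f"{len(slot_ads)} slots, {total_cores} cores total, {idle_cores} cores idle")
```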


Journal of Physics: Conference Series | 2014

Using the CMS High Level Trigger as a Cloud Resource

David Colling; Adam Huffman; Alison McCrae; Andrew Lahiff; Claudio Grandi; Mattia Cinquilli; S. J. Gowdy; Jose Antonio Coarasa; Anthony Tiradani; Wojciech Ozga; Olivier Chaze; Massimo Sgaravatto; Daniela Bauer

The CMS High Level Trigger is a compute farm of more than 10,000 cores. During data taking this resource is heavily used and is an integral part of the experiment's triggering system. However, outside of data-taking periods this resource is largely unused. We describe why CMS wants to use the HLT as a cloud resource (outside of data-taking periods) and how this has been achieved. In doing this we have turned a single-use cluster into an agile resource for CMS production computing. While we are able to use the HLT as a production cloud resource, there is still considerable further work that CMS needs to carry out before this resource can be used with the desired agility. This report, therefore, represents a snapshot of this activity at the time of CHEP 2013.


Journal of Physics: Conference Series | 2017

Connecting restricted, high-availability, or low-latency resources to a seamless Global Pool for CMS

J Balcas; A Mohapatra; A Perez-Calero Yzquierdo; K Larson; D. R. Mason; S Piperov; Verguilov; F A Khan; J Letts; M Mascheroni; Anthony Tiradani; K Hurtado Anampa; Dirk Hufnagel; J Marra da Silva; B Jayatilaka; B Bockelman

The connection of diverse and sometimes non-Grid-enabled resource types to the CMS Global Pool, which is based on HTCondor and glideinWMS, has been a major goal of CMS. These resources range in type from a high-availability, low-latency facility at CERN for urgent calibration studies, called the CAF, to a local user facility at the Fermilab LPC, allocation-based computing resources at NERSC and SDSC, opportunistic resources provided through the Open Science Grid, commercial clouds, and others, as well as access to opportunistic cycles on the CMS High Level Trigger farm. In addition, we have provided the capability to give priority to local users on resources beyond those pledged to WLCG at CMS sites. Many of the solutions employed to bring these diverse resource types into the Global Pool have common elements, while some are very specific to a particular project. This paper details some of the strategies and solutions used to access these resources through the Global Pool in a seamless manner.
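A common thread in these solutions is that pilots advertise attributes describing where they run, and payload jobs are steered to them through ClassAd requirements. As a hedged sketch, using the HTCondor Python bindings with a glideinWMS-style attribute name (GLIDEIN_CMSSite) and a hypothetical site value not taken from the paper, a submission pinned to one resource type might look like:

```python
# Hedged sketch: submit a payload job that will only match pilots advertising a
# particular site attribute. The attribute and site names are illustrative
# glideinWMS-style conventions, not taken from the paper.
import htcondor

submit = htcondor.Submit({
    "executable": "run_payload.sh",      # hypothetical payload wrapper
    "arguments": "analysis.cfg",
    "output": "payload.out",
    "error": "payload.err",
    "log": "payload.log",
    # Only match execute slots (pilots) that advertise this site name.
    "requirements": 'GLIDEIN_CMSSite == "T3_US_ExampleLPC"',
    "request_cpus": "1",
    "request_memory": "2000",
})

schedd = htcondor.Schedd()
with schedd.transaction() as txn:
    cluster_id = submit.queue(txn)
print(f"Submitted cluster {cluster_id}")
```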


Journal of Physics: Conference Series | 2014

Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

Parag Mhashilkar; Anthony Tiradani; Burt Holzman; Krista Larson; I. Sfiligoi; Mats Rynge


Cluster Computing and the Grid | 2018

Intelligently-Automated Facilities Expansion with the HEPCloud Decision Engine

Parag Mhashilkar; Mine Altunay; William Dagenhart; S. Fuess; Burt Holzman; Jim Kowalkowski; Dmitry Litvintsev; Qiming Lu; Alexander Moibenko; Marc Paterno; Panagiotis Spentzouris; Steven Timm; Anthony Tiradani


Archive | 2017

Fermilab HEPCloud Facility Decision Engine Design

Anthony Tiradani; Mine Altunay; David Dagenhart; Jim Kowalkowski; Dmitry Litvintsev; Qiming Lu; Parag Mhashilkar; Alexander Moibenko; Marc Paterno; Steven Timm


International Conference on Cloud Computing and Services Science | 2016

The glideinWMS Approach to the Ownership of System Images in the Cloud World

I. Sfiligoi; Anthony Tiradani; Burt Holzman; D C Bradley


Proceedings of Science | 2014

OSG PKI transition: Experiences and lessons learned

Von Welch; Alain Deximo; Soichi Hayashi; Viplav D. Khadke; Rohan Mathure; Robert Quick; Mine Altunay; Chander Sehgal; Anthony Tiradani; Jim Basney

Collaboration


Dive into Anthony Tiradani's collaboration.

Top Co-Authors

D. R. Mason
University of Michigan

I. Sfiligoi
University of California

J Letts
University of California