Riccardo Zappi
Istituto Nazionale di Fisica Nucleare
Publications
Featured research published by Riccardo Zappi.
Journal of Physics: Conference Series | 2008
Flavia Donno; Lana Abadie; Paolo Badino; Jean Philippe Baud; Ezio Corso; Shaun De Witt; Patrick Fuhrmann; Junmin Gu; B. Koblitz; Sophie Lemaitre; Maarten Litmaath; Dimitry Litvintsev; Giuseppe Lo Presti; L. Magnoni; Gavin McCance; Tigran Mkrtchan; Rémi Mollon; Vijaya Natarajan; Timur Perelmutov; D. Petravick; Arie Shoshani; Alex Sim; David Smith; Paolo Tedesco; Riccardo Zappi
Storage Services are crucial components of the Worldwide LHC Computing Grid infrastructure, spanning more than 200 sites and serving computing and storage resources to the High Energy Physics LHC communities. Up to tens of Petabytes of data are collected every year by the four LHC experiments at CERN. To process these large data volumes it is important to establish a protocol and a very efficient interface to the various storage solutions adopted by the WLCG sites. In this work we report on the experience acquired during the definition of the Storage Resource Manager v2.2 protocol. In particular, we focus on the study performed to enhance the interface and make it suitable for use by the WLCG communities. At the moment, five different storage solutions implement the SRM v2.2 interface: BeStMan (LBNL), CASTOR (CERN and RAL), dCache (DESY and FNAL), DPM (CERN), and StoRM (INFN and ICTP). After a detailed internal review of the protocol, various test suites have been written, identifying the most effective set of tests: the S2 test suite from CERN and the SRM-Tester test suite from LBNL. These test suites have helped verify the consistency and coherence of the proposed protocol and validate the existing implementations. We conclude our work by describing the results achieved.
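As a rough illustration of the request cycle that such test suites exercise, the sketch below walks through the asynchronous prepare/poll/release pattern of an SRM v2.2 "get" request. The operation and status-code names follow the SRM v2.2 specification; the endpoint object is a hypothetical in-memory stand-in, not any of the implementations listed above.

```python
import itertools
import time


class FakeSRMEndpoint:
    """Hypothetical in-memory stand-in for an SRM v2.2 endpoint."""

    def __init__(self):
        self._polls = itertools.count()

    def srmPrepareToGet(self, surls):
        # Ask the storage system to stage/pin the requested files.
        return "req-0001"  # request token

    def srmStatusOfGetRequest(self, token):
        # Report "in progress" for the first two polls, then success.
        if next(self._polls) < 2:
            return "SRM_REQUEST_INPROGRESS", []
        return "SRM_SUCCESS", ["file:///storage/data/run001.root"]

    def srmReleaseFiles(self, token):
        # Release the pins once the transfer is done.
        return "SRM_SUCCESS"


def fetch_turls(endpoint, surls, poll_interval=0.1):
    """Drive the asynchronous prepare/poll/release cycle of SRM v2.2."""
    token = endpoint.srmPrepareToGet(surls)
    while True:
        status, turls = endpoint.srmStatusOfGetRequest(token)
        if status == "SRM_SUCCESS":
            return token, turls
        if status not in ("SRM_REQUEST_QUEUED", "SRM_REQUEST_INPROGRESS"):
            raise RuntimeError(f"get request failed with status {status}")
        time.sleep(poll_interval)


if __name__ == "__main__":
    srm = FakeSRMEndpoint()
    token, turls = fetch_turls(srm, ["srm://storm.example.org/data/run001.root"])
    print(turls)
    srm.srmReleaseFiles(token)
```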
Archive | 2008
Jesus Luna; Michail D. Flouris; Manolis Marazakis; Angelos Bilas; Federico Stagni; Alberto Forti; Antonia Ghiselli; Luca Magnoni; Riccardo Zappi
With the widespread deployment of Data Grid installations and rapidly increasing data volumes, storage services are becoming a critical aspect of the Grid infrastructure. Due to the distributed and shared nature of the Grid, security issues related to state-of-the-art data storage services need to be studied thoroughly to identify potential vulnerabilities and attack vectors. In this paper, motivated by a typical use case for Data Grid storage, we apply an extended framework for analyzing and evaluating its security from the point of view of the data and metadata, taking into consideration the security capabilities provided by both the underlying Grid infrastructure and commonly deployed Grid storage systems. For a comprehensive analysis of the latter, we identify three important elements: the players involved, the underlying trust assumptions, and the dependencies on specific security primitives. This analysis leads to the identification of a set of potential security gaps, risks, and even redundant security features found in a typical Data Grid. These results are now the starting point for our ongoing research on policies and mechanisms able to provide a fair balance between security and performance for Data Grid storage services.
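The three elements named above can be made concrete with a small, purely hypothetical data model; the class and field names below are illustrative and do not come from the paper.

```python
from dataclasses import dataclass, field


@dataclass
class Player:
    """One of the parties involved in the storage use case."""
    name: str                                   # e.g. "user", "SRM front-end", "disk pool"
    trusted_by: list = field(default_factory=list)


@dataclass
class StorageService:
    """A storage service described by the three elements of the analysis."""
    name: str
    players: list
    trust_assumptions: list                     # e.g. "front-end trusts pool nodes on the LAN"
    security_primitives: list                   # e.g. "GSI authentication", "VOMS authorization"

    def gaps(self, required):
        # Primitives the use case requires but the service does not provide.
        return [p for p in required if p not in self.security_primitives]


if __name__ == "__main__":
    srm = StorageService(
        name="example SRM endpoint",
        players=[Player("user"), Player("SRM front-end"), Player("disk pool")],
        trust_assumptions=["front-end trusts pool nodes on the LAN"],
        security_primitives=["GSI authentication", "VOMS-based authorization"],
    )
    print(srm.gaps(["GSI authentication", "data-at-rest encryption"]))
```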
IEEE Conference on Mass Storage Systems and Technologies | 2007
Lana Abadie; Paolo Badino; J.-P. Baud; Ezio Corso; M. Crawford; S. De Witt; Flavia Donno; A. Forti; Ákos Frohner; Patrick Fuhrmann; G. Grosdidier; Junmin Gu; Jens Jensen; B. Koblitz; Sophie Lemaitre; Maarten Litmaath; D. Litvinsev; G. Lo Presti; L. Magnoni; T. Mkrtchan; Alexander Moibenko; Rémi Mollon; Vijaya Natarajan; Gene Oleynik; Timur Perelmutov; D. Petravick; Arie Shoshani; Alex Sim; David Smith; M. Sponza
Storage management is one of the most important enabling technologies for large-scale scientific investigations. Having to deal with multiple heterogeneous storage and file systems is one of the major bottlenecks in managing, replicating, and accessing files in distributed environments. Storage resource managers (SRMs), named after their Web services control protocol, provide the technology needed to manage the rapidly growing distributed data volumes, as a result of faster and larger computational facilities. SRMs are grid storage services providing interfaces to storage resources, as well as advanced functionality such as dynamic space allocation and file management on shared storage systems. They call on transport services to bring files into their space transparently and provide effective sharing of files. SRMs are based on a common specification that emerged over time and evolved into an international collaboration. This approach of an open specification that can be used by various institutions to adapt to their own storage systems has proven to be a remarkable success: the challenge has been to provide a consistent homogeneous interface to the grid, while allowing sites to have diverse infrastructures. In particular, supporting optional features while preserving interoperability is one of the main challenges we describe in this paper. We also describe using SRM in a large international high energy physics collaboration, called WLCG, to prepare to handle the large volume of data expected when the Large Hadron Collider (LHC) goes online at CERN. This intense collaboration led to refinements and additional functionality in the SRM specification, and the development of multiple interoperating implementations of SRM for various complex multi-component storage systems.
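The interoperability point about optional features can be sketched as follows: a client probes an optional SRM v2.2 operation (here space reservation) and falls back cleanly when a given implementation reports it as unsupported. The endpoint object is hypothetical; only the status-code names are taken from the SRM v2.2 specification.

```python
def reserve_space_if_possible(endpoint, size_bytes, lifetime_s):
    """Try srmReserveSpace; return a space token, or None if the feature is unsupported."""
    status, token = endpoint.srmReserveSpace(size_bytes, lifetime_s)
    if status == "SRM_NOT_SUPPORTED":
        return None              # interoperate anyway, just without a reserved space
    if status != "SRM_SUCCESS":
        raise RuntimeError(f"srmReserveSpace failed with status {status}")
    return token


class MinimalEndpoint:
    """Stub implementation that advertises no optional features."""

    def srmReserveSpace(self, size_bytes, lifetime_s):
        return "SRM_NOT_SUPPORTED", None


if __name__ == "__main__":
    print(reserve_space_if_possible(MinimalEndpoint(), 10**12, 86400))   # -> None
```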
Archive | 2011
Riccardo Zappi; Elisabetta Ronchieri; Alberto Forti; Antonia Ghiselli
In production data Grids, high-performance disk storage solutions using parallel file systems are becoming increasingly important to provide the reliability and high-speed I/O operations needed by High Energy Physics analysis farms. Today, Storage Area Network solutions are commonly deployed at Large Hadron Collider data centres, and parallel file systems such as GPFS and Lustre provide reliable, high-speed native POSIX I/O operations in a parallel fashion. In this paper, we describe StoRM, a Grid middleware component implementing the standard Storage Resource Manager v2.2 interface. Its architecture fully exploits the potential offered by the underlying cluster file system. Indeed, it enables and encourages the use of the native POSIX file protocol (i.e. "file://"), allowing the managed Storage Element to improve job efficiency in data access. A job running on a worker node can directly access the Storage Element managed by StoRM as if it were a local disk, instead of transferring data from Storage Elements to the local disk.
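The direct-access pattern described here amounts to turning a "file://" transfer URL into an ordinary local path and reading it with POSIX calls, with no intermediate copy. A minimal sketch, assuming a hypothetical GPFS mount point for the example path:

```python
from pathlib import Path
from urllib.parse import urlparse


def open_via_posix(turl: str):
    """Turn a file:// TURL into a local path and open it like any local file."""
    parsed = urlparse(turl)
    if parsed.scheme != "file":
        raise ValueError(f"expected a file:// TURL, got {turl}")
    return open(Path(parsed.path), "rb")


if __name__ == "__main__":
    turl = "file:///gpfs/storage/atlas/run001.root"   # hypothetical path on the cluster file system
    try:
        with open_via_posix(turl) as f:
            header = f.read(16)                        # plain POSIX read, no copy to local disk
    except FileNotFoundError:
        pass                                           # the example path only exists on a real worker node
```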
International Parallel and Distributed Processing Symposium | 2009
Marco Bencivenni; M. Canaparo; F. Capannini; L. Carota; M. Carpene; Alessandro Cavalli; Andrea Ceccanti; M. Cecchi; Daniele Cesini; Andrea Chierici; V. Ciaschini; A. Cristofori; S Dal Pra; Luca dell'Agnello; D De Girolamo; Massimo Donatelli; D. N. Dongiovanni; Enrico Fattibene; T. Ferrari; A Ferraro; Alberto Forti; Antonia Ghiselli; Daniele Gregori; G. Guizzunti; Alessandro Italiano; L. Magnoni; B. Martelli; Mirco Mazzucato; Giuseppe Misurelli; Michele Onofri
The four High Energy Physics (HEP) detectors at the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) are among the most important experiments in which the National Institute of Nuclear Physics (INFN) is actively involved. A Grid infrastructure of the Worldwide LHC Computing Grid (WLCG) has been developed by the HEP community, leveraging broader initiatives (e.g. EGEE in Europe, OSG in North America) as a framework to exchange and maintain data storage and provide computing infrastructure for the entire LHC community. INFN-CNAF in Bologna hosts the Italian Tier-1 site, which represents the largest Italian centre in the WLCG distributed computing infrastructure. In the first part of this paper we describe the building of the Italian Tier-1 to cope with the WLCG computing requirements, focusing on some peculiarities; in the second part we analyze the INFN-CNAF contribution to the development of the grid middleware, stressing in particular the characteristics of the Virtual Organization Membership Service (VOMS), the de facto standard for authorization on a grid, and StoRM, an implementation of the Storage Resource Manager (SRM) specification for POSIX file systems. In particular, StoRM is used at INFN-CNAF in conjunction with the General Parallel File System (GPFS), and we are also testing an integration with Tivoli Storage Manager (TSM) to realize a complete Hierarchical Storage Management (HSM) solution.
IEEE Nuclear Science Symposium | 2008
A. Carbone; Luca dell'Agnello; Antonia Ghiselli; D. Gregori; Luca Magnoni; B. Martelli; Mirco Mazzucato; P.P. Ricci; Elisabetta Ronchieri; V. Sapunenko; V. Vagnoni; D. Vitlacil; Riccardo Zappi
The mass storage challenge for the Large Hadron Collider (LHC) experiments is still a critical issue for the various Tier-1 computing centres and the Tier-0 centre involved in the custody and analysis of the data produced by the experiments. In particular, the requirements for the tape mass storage systems are quite demanding, amounting to several Petabytes of data that should be available for near-line access at any time. Besides the solutions already widely employed by the High Energy Physics community, an interesting new option has recently emerged, based on the interaction between the General Parallel File System (GPFS) and the Tivoli Storage Manager (TSM) by IBM. The new features introduced in GPFS version 3.2 allow GPFS to be interfaced with tape storage managers. We implemented such an interface for TSM and performed various performance studies on a pre-production system. Together with the StoRM SRM interface, developed as a joint collaboration between INFN-CNAF and ICTP-Trieste, this solution can fulfill all the requirements of a Tier-1 WLCG centre. The first StoRM-GPFS-TSM based system has now entered its production phase at CNAF, presently adopted by the LHCb experiment. We describe the implementation of the interface and the prototype test-bed, and we discuss the results of some tests.
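A highly simplified sketch of the interfacing idea, assuming nothing about the actual CNAF setup: the file-system side selects migration candidates (in the real system this is done by GPFS policies) and hands them to a callback that drives the tape back-end (TSM in the real system). The threshold and the callback are illustrative.

```python
import os
import time

MIGRATION_AGE_S = 7 * 24 * 3600    # example threshold: migrate files not read for a week


def migration_candidates(root: str):
    """Yield files whose last access time exceeds the (hypothetical) threshold."""
    now = time.time()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if now - os.stat(path).st_atime > MIGRATION_AGE_S:
                    yield path
            except OSError:
                continue            # file vanished between listing and stat


def migrate(paths, send_to_tape):
    """Hand each candidate to the tape-side callback."""
    for path in paths:
        send_to_tape(path)


if __name__ == "__main__":
    migrate(migration_candidates("/tmp"), lambda p: print("would migrate", p))
```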
ieee nuclear science symposium | 2008
Luca Magnoni; Riccardo Zappi; Antonia Ghiselli
Scientific data-intensive applications generate ever-increasing volumes of data that need to be stored, managed, and shared between geographically distributed communities. Data centres are normally able to provide tens of petabytes of storage space through a large variety of heterogeneous storage and file systems. However, storage systems shared by applications need a common data access mechanism that allocates storage space dynamically, manages stored content, and automatically removes unused data to avoid clogging data stores. To accommodate these needs, the concept of Storage Resource Managers (SRMs) was devised in the context of a project that involved High Energy Physics (HEP) and Nuclear Physics (NP). The Storage Resource Manager (SRM) interface specification was defined and evolved into an international collaboration in the context of the Open Grid Forum (OGF). The SRM interface provides the technology needed to share geographically distributed heterogeneous storage resources through an effective and common interface, regardless of the type of back-end system being used. By implementing the SRM interface, grid storage services provide a consistent homogeneous interface to the Grid to manage storage resources, as well as advanced functionality such as dynamic space allocation and file management on shared storage systems. Within the Worldwide LHC Computing Grid project there are more than five interoperating implementations of SRM services, each with its own peculiarities. In this paper, we describe the flexibility of StoRM, an implementation of the Storage Resource Manager interface version 2.2. StoRM is designed to foster the adoption of cluster file systems and, thanks to its marked flexibility, can be used in small data centres with limited human resources to administer yet another grid service, while at the same time being able to grow in terms of managed storage and workload. StoRM can be used to manage any storage resource with any kind of POSIX file system in a transparent way. As a demonstration of StoRM's flexibility, the paper describes how applications scheduled via the Grid can access files on a file system directly via POSIX calls, how StoRM can be deployed in a clustered configuration to address scalability needs, and finally how StoRM can also be used to manage storage classes based on Storage Clouds, such as the Amazon Simple Storage Service (S3).
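The backend flexibility claimed here can be pictured as a thin storage-class interface with interchangeable implementations, one backed by a POSIX file system and one by an object store such as S3. The class and method names below are illustrative, not StoRM's internal API, and the S3 backend is an in-memory stand-in rather than a real client.

```python
from abc import ABC, abstractmethod


class StorageBackend(ABC):
    """Hypothetical storage-class interface."""

    @abstractmethod
    def put(self, logical_name: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, logical_name: str) -> bytes: ...


class PosixBackend(StorageBackend):
    """Backed by a directory on a POSIX (e.g. cluster) file system."""

    def __init__(self, root: str):
        self.root = root

    def put(self, logical_name, data):
        with open(f"{self.root}/{logical_name}", "wb") as f:
            f.write(data)

    def get(self, logical_name):
        with open(f"{self.root}/{logical_name}", "rb") as f:
            return f.read()


class S3Backend(StorageBackend):
    """Maps logical names to object keys; a real version would call an S3 client."""

    def __init__(self, bucket: str):
        self.bucket = bucket
        self._objects = {}          # in-memory stand-in for the remote bucket

    def put(self, logical_name, data):
        self._objects[f"{self.bucket}/{logical_name}"] = data

    def get(self, logical_name):
        return self._objects[f"{self.bucket}/{logical_name}"]


if __name__ == "__main__":
    backend: StorageBackend = S3Backend("storm-storage-class")   # or PosixBackend("/gpfs/area")
    backend.put("run001.root", b"\x00" * 16)
    print(len(backend.get("run001.root")))
```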
Archive | 2011
D. Andreotti; D. Bonacorsi; Alessandro Cavalli; S. Dal Pra; L. dell’Agnello; Alberto Forti; Claudio Grandi; Daniele Gregori; L. Li Gioi; B. Martelli; Andrea Prosperini; Pier Paolo Ricci; Elisabetta Ronchieri; Vladimir Sapunenko; A. Sartirana; Vincenzo Vagnoni; Riccardo Zappi
A brand new Mass Storage System solution called the “Grid-Enabled Mass Storage System” (GEMSS), based on the Storage Resource Manager (StoRM) developed by INFN, on the General Parallel File System by IBM, and on the Tivoli Storage Manager by IBM, has been tested and deployed at the INFN-CNAF Tier-1 Computing Centre in Italy. After a successful stress test phase, the solution is now being used in production for the data custodiality of the CMS experiment at CNAF. All data previously recorded on the CASTOR system have been transferred to GEMSS. As a final validation of the GEMSS system, some of the computing tests done in the context of the WLCG “Scale Test for the Experiment Program” (STEP’09) challenge were repeated in September-October 2009 and compared with the results previously obtained with CASTOR in June 2009. In this paper, the GEMSS system basics, the stress test activity and the deployment phase, as well as the reliability and performance of the system, are reviewed. The experiences in using GEMSS at CNAF in preparing for the first months of data taking of the CMS experiment at the Large Hadron Collider are also presented.
IEEE Nuclear Science Symposium | 2009
Elisabetta Vilucchi; A. Andreazza; Daniela Anzellotti; Dario Barberis; Alessandro Brunengo; S. Campana; G. Carlino; Claudia Ciocca; Mirko Corosu; Maria Curatolo; Luca dell'Agnello; Alessandro De Salvo; Alessandro Di Girolamo; Alessandra Doria; Maria Lorenza Ferrer; Alberto Forti; Alessandro Italiano; Lamberto Luminari; Luca Magnoni; B. Martelli; Agnese Martini; Leonardo Merola; Elisa Musto; L. Perini; Massimo Pistolese; David Rebatto; S. Resconi; Lorenzo Rinaldi; Davide Salomoni; Luca Vaccarossa
In this work we present the activity and performance optimization of the Italian computing centers supporting the ATLAS experiment, which form the so-called Italian Cloud. We describe the activities of the ATLAS Italian Tier-2 Federation inside the ATLAS computing model and present some original Italian contributions. We describe StoRM, a new Storage Resource Manager developed by INFN, used as a replacement for Castor at CNAF, the Italian Tier-1, and under test at the Tier-2 centers. We also show the failover solution for the ATLAS LFC, based on Oracle DataGuard, a load-balancing DNS and LFC daemon reconfiguration, realized between CNAF and the Tier-2 in Roma. Finally, we describe the sharing of resources between analysis and production, recently implemented in the ATLAS Italian Cloud with the Job Priority mechanism.
Journal of Physics: Conference Series | 2011
Elisabetta Ronchieri; Michele Dibenedetto; Riccardo Zappi; Stefano Dal Pra; Cristina Aiftimiei; Sergio Traldi
StoRM is an implementation of the SRM interface version 2.2, used by all Large Hadron Collider (LHC) experiments and by non-LHC experiments as an SRM endpoint at different Tiers of the Worldwide LHC Computing Grid. The complexity of its services and the demands of experiments and users are increasing day by day. The growing needs in terms of service level of the StoRM user communities make it necessary to design and implement a more effective testing procedure to quickly and reliably validate new StoRM candidate releases, both on the code side (for example via unit tests and schema validators) and on the final software product (for example via functionality tests and stress tests). Until now, testing the software service has been a critical quality activity performed in an ad-hoc, informal manner by StoRM developers, testers and users. In this paper, we describe the certification mechanism used by the StoRM team to increase the robustness and reliability of the StoRM services. Various typologies of tests, such as quality, installation, configuration, functionality, stress and performance tests, defined on the basis of a set of use cases gathered through the collaboration among the StoRM team, experiments and users, are illustrated. Each typology of tests can easily be extended or reduced over time. The proposed mechanism is based on a new configurable testsuite, executed by the certification team, which is responsible for validating the release candidate package, as well as bug fix (or patch) packages, on a testbed that covers all possible use cases. For each failure, the package is returned to the developers, and a new package is then submitted for validation.
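One possible way to express such selectable test typologies, sketched with pytest markers: the marker names and the check_release helper below are illustrative, not the actual StoRM testsuite, and the markers would also need to be registered in pytest.ini. Running `pytest -m functionality` or `pytest -m stress` would then select one typology at a time.

```python
import pytest


def check_release(package: str) -> bool:
    """Stand-in for a real check against the candidate release package."""
    return package.endswith(".rpm")


@pytest.mark.installation
def test_candidate_package_installs():
    # Placeholder for installing the release candidate on a clean testbed node.
    assert check_release("storm-backend-server-candidate.rpm")


@pytest.mark.functionality
def test_basic_namespace_operation():
    # Placeholder for a functional check against a configured endpoint.
    assert check_release("storm-frontend-server-candidate.rpm")


@pytest.mark.stress
@pytest.mark.parametrize("n_requests", [10, 100, 1000])
def test_sustains_concurrent_requests(n_requests):
    # Placeholder for a real load-generation step against the testbed.
    assert n_requests > 0
```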