Vincenzo Vagnoni
CERN
Publications
Featured research published by Vincenzo Vagnoni.
International Conference on e-Science | 2007
A. Carbone; Luca dell'Agnello; Alberto Forti; Antonia Ghiselli; E. Lanciotti; Luca Magnoni; Mirco Mazzucato; R. Santinelli; Vladimir Sapunenko; Vincenzo Vagnoni; Riccardo Zappi
High-performance disk-storage solutions based on parallel file systems are becoming increasingly important to meet the large I/O throughput requirements of high-energy physics applications. Storage area networks (SANs) are commonly employed at the Large Hadron Collider data centres, and SAN-oriented parallel file systems such as GPFS and Lustre provide high scalability and availability by aggregating many data volumes served by multiple disk servers into a single POSIX file system hierarchy. Since these file systems do not come with a storage resource manager (SRM) interface, necessary to access and manage the data volumes in a grid environment, a specific project called StoRM has been developed to provide them with the necessary SRM capabilities. In this paper we describe the deployment of a StoRM instance configured to manage a GPFS file system. A software suite was developed to perform functionality and throughput stress tests on StoRM, and we present the results of these tests.
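As an illustration of the kind of throughput stress test described above, the following minimal sketch runs several concurrent writers against a POSIX-mounted parallel file system and reports the aggregate write rate. The mount path, file sizes and worker count are hypothetical; this is not the StoRM test suite used by the authors, only a simplified analogue of the measurement idea.

```python
# Minimal sketch of a parallel write-throughput test against a POSIX mount
# point (e.g. a GPFS file system). Path and sizes are hypothetical.
import os
import time
from multiprocessing import Pool

MOUNT = "/gpfs/testarea"       # hypothetical GPFS mount point
FILE_SIZE_MB = 256             # mebibytes written by each worker
N_WORKERS = 8                  # concurrent writer processes
BLOCK = b"\0" * (1024 * 1024)  # 1 MiB write block

def write_file(idx: int) -> float:
    """Write FILE_SIZE_MB mebibytes and return the elapsed wall time."""
    path = os.path.join(MOUNT, f"stress_{idx}.dat")
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE_MB):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())  # ensure the data actually reaches the file system
    return time.time() - start

if __name__ == "__main__":
    t0 = time.time()
    with Pool(N_WORKERS) as pool:
        times = pool.map(write_file, range(N_WORKERS))
    elapsed = time.time() - t0
    total_mb = FILE_SIZE_MB * N_WORKERS
    print(f"aggregate throughput: {total_mb / elapsed:.1f} MiB/s")
    print(f"slowest worker: {max(times):.1f} s")
```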
IEEE Transactions on Nuclear Science | 2010
Marco Bencivenni; Daniela Bortolotti; A. Carbone; Alessandro Cavalli; Andrea Chierici; Stefano Dal Pra; Donato De Girolamo; Luca dell'Agnello; Massimo Donatelli; Armando Fella; Domenico Galli; Antonia Ghiselli; Daniele Gregori; Alessandro Italiano; Rajeev Kumar; U. Marconi; B. Martelli; Mirco Mazzucato; Michele Onofri; Gianluca Peco; S. Perazzini; Andrea Prosperini; Pier Paolo Ricci; Elisabetta Ronchieri; F Rosso; Davide Salomoni; Vladimir Sapunenko; Vincenzo Vagnoni; Riccardo Veraldi; Maria Cristina Vistoli
In the prospect of employing 10 Gigabit Ethernet as networking technology for online systems and offline data analysis centers of High Energy Physics experiments, we performed a series of measurements on the performance of 10 Gigabit Ethernet, using the network interface cards mounted on the PCI-Express bus of commodity PCs both as transmitters and receivers. In real operating conditions, the achievable maximum transfer rate through a network link is not only limited by the capacity of the link itself, but also by those of the memory and peripheral buses and by the ability of the CPUs and of the Operating System to handle packet processing and interrupts raised by the network interface cards in due time. Besides the TCP and UDP maximum data transfer throughputs, we also measured the CPU loads of the sender/receiver processes and of the interrupt and soft-interrupt handlers as a function of the packet size, using either standard or “jumbo” Ethernet frames. In addition, we also performed the same measurements by simultaneously reading data from Fibre Channel links and forwarding them through a 10 Gigabit Ethernet link, hence emulating the behavior of a disk server in a Storage Area Network exporting data to client machines via 10 Gigabit Ethernet.
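The following is a minimal sketch of a point-to-point TCP throughput measurement in the spirit of the tests described above. Hostname, port and transfer size are hypothetical; the published measurements used dedicated benchmarking tools and additionally recorded CPU, interrupt and soft-interrupt loads, which this sketch does not attempt.

```python
# Minimal sketch of a point-to-point TCP throughput test between two hosts.
# Port and transfer size are hypothetical illustrations.
import socket
import sys
import time

PORT = 5001
CHUNK = 64 * 1024          # application-level write/read size
TOTAL_BYTES = 4 * 1024**3  # 4 GiB per test run

def run_receiver() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            received = 0
            start = time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            elapsed = time.time() - start
            print(f"received {received / 1e9:.2f} GB "
                  f"at {received * 8 / elapsed / 1e9:.2f} Gbit/s")

def run_sender(host: str) -> None:
    payload = b"\0" * CHUNK
    sent = 0
    start = time.time()
    with socket.create_connection((host, PORT)) as s:
        while sent < TOTAL_BYTES:
            s.sendall(payload)
            sent += len(payload)
    elapsed = time.time() - start
    print(f"sent {sent / 1e9:.2f} GB at {sent * 8 / elapsed / 1e9:.2f} Gbit/s")

if __name__ == "__main__":
    # usage: python tcp_bench.py receive   |   python tcp_bench.py send <host>
    if sys.argv[1] == "receive":
        run_receiver()
    else:
        run_sender(sys.argv[2])
```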
IEEE Transactions on Nuclear Science | 2006
A. Barczyk; Daniela Bortolotti; A. Carbone; J.P. Dufey; Domenico Galli; B. Gaidioz; Daniele Gregori; B. Jost; U. Marconi; N. Neufeld; Gianluca Peco; Vincenzo Vagnoni
We report on measurements performed to test the reliability of high-rate data transmission over copper Gigabit Ethernet for the LHCb online system. High reliability of such transmissions will be crucial for the functioning of the software trigger layers of the LHCb experiment at CERN's LHC accelerator. The technological challenge in the system implementation consists of handling the expected high data throughput of event fragments using, to a large extent, commodity equipment. We report on performance evaluations (throughput, error rates and frame-drop rates) of the main components involved in data transmission: the Ethernet cable, the PCI bus and the operating system (the latest kernel versions of Linux). Three different platforms were used.
IEEE-NPSS Real-Time Conference | 2009
Marco Bencivenni; A. Carbone; Armando Fella; Domenico Galli; U. Marconi; Gianluca Peco; S. Perazzini; Vincenzo Vagnoni; S. Zani
In the prospect of employing 10 Gigabit Ethernet as networking technology for online systems and offline data analysis centers of High Energy Physics experiments, we performed a series of measurements with point-to-point data transfers over 10 Gigabit Ethernet links, using the network interface cards mounted on the PCI-Express bus of commodity PCs both as transmitters and receivers. In real operating conditions, the maximum achievable transfer rate through a network link is not only limited by the capacity of the link itself, but also by those of the memory and peripheral buses and by the ability of the CPUs and of the Operating System to handle packet processing and interrupts raised by the network interface cards in due time. Besides the TCP and UDP maximum data transfer throughputs, we also measured the CPU loads of the sender/receiver processes and of the interrupt and soft-interrupt handlers as a function of the packet size, either using standard or “jumbo” Ethernet frames. In addition, we also performed the same measurements by simultaneously reading data from Fibre Channel links and forwarding them through a 10 Gigabit Ethernet link, hence emulating the behavior of a disk server in a Storage Area Network exporting data to client machines via 10 Gigabit Ethernet.
IEEE Transactions on Nuclear Science | 2008
F. Bonifazi; A. Carbone; Domenico Galli; C. Gaspar; Daniele Gregori; U. Marconi; Gianluca Peco; Vincenzo Vagnoni; E. van Herwijnen
The LHCb experiment at CERN will have an online trigger farm composed of up to 2000 PCs. In order to monitor and control each PC and to supervise the overall status of the farm, a farm monitoring and control system (FMC) was developed. The FMC uses the Distributed Information Management (DIM) system as its network communication layer; it is accessible both through a command-line interface and through the Prozessvisualisierungs- und Steuerungssystem (PVSS) graphical interface, and it is interfaced to the finite state machine (FSM) of the LHCb Experiment Control System (ECS) in order to manage anomalous farm conditions. The FMC is an integral part of the ECS, which is in charge of monitoring and controlling all online components; it uses the same tools (DIM, PVSS, FSM, etc.) to guarantee its complete integration and a coherent look and feel throughout the whole control system.
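As a rough illustration, the sketch below gathers the kind of per-node health metrics a farm monitoring agent could publish to a control system. It deliberately does not use the actual DIM or PVSS interfaces of the FMC; the metric names, fields and output format are hypothetical and only indicate the flavour of the monitored quantities.

```python
# Minimal sketch of per-node metrics a farm monitoring agent might collect
# before publishing them to a control system. It does NOT use the real
# DIM/PVSS interfaces of the FMC; field names are hypothetical. Linux only.
import json
import os
import socket
import time

def collect_node_status() -> dict:
    """Collect a small set of health metrics from the local node."""
    load1, load5, load15 = os.getloadavg()
    meminfo = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            meminfo[key] = int(value.split()[0])  # values are reported in kB
    return {
        "host": socket.gethostname(),
        "timestamp": time.time(),
        "load_avg": [load1, load5, load15],
        "mem_free_kb": meminfo.get("MemFree", 0),
        "swap_free_kb": meminfo.get("SwapFree", 0),
    }

if __name__ == "__main__":
    # In a real farm this record would be published over the control-system
    # communication layer; here we simply print one polling cycle.
    print(json.dumps(collect_node_status(), indent=2))
```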
Journal of Physics: Conference Series | 2012
Pier Paolo Ricci; D. Bonacorsi; Alessandro Cavalli; Luca dell'Agnello; Daniele Gregori; Andrea Prosperini; Lorenzo Rinaldi; Vladimir Sapunenko; Vincenzo Vagnoni
The storage system currently used in production at the INFN Tier1 at CNAF is the result of several years of case studies, software development and tests. This solution, called the Grid Enabled Mass Storage System (GEMSS), is based on a custom integration of a fast and reliable parallel file system (the IBM General Parallel File System, GPFS) with a fully integrated tape backend based on the Tivoli Storage Manager (TSM), which provides Hierarchical Storage Management (HSM) capabilities, and with the Grid Storage Resource Manager (StoRM), which provides access to grid users through a standard SRM interface. Since the start of Large Hadron Collider (LHC) operation, all LHC experiments have been using GEMSS at CNAF for both disk data access and long-term archival on tape media. Moreover, during the last year, GEMSS has become the standard solution for all other experiments hosted at CNAF, allowing the definitive consolidation of the data storage layer. Our choice has proved very successful during the last two years of production, with continuous enhancements, accurate monitoring and effective customizations according to end-user requests. In this paper a description of the system is given, addressing recent developments and giving an overview of the administration and monitoring tools. We also discuss the solutions adopted to guarantee the maximum availability of the service and the latest optimization features in the data access process. Finally, we summarize the main results obtained during these last years of activity from the perspective of some of the end users, showing the reliability and the high performance that can be achieved using GEMSS.
Computer Physics Communications | 2001
A. Amorim; Vasco Amaral; U. Marconi; Stefan Steinbeck; António Tomé; Vincenzo Vagnoni; H. Wolters
The database services for the distributed application environment of the HERA-B experiment are presented. Achieving the required 10^6 trigger reduction implies that all reconstruction, including calibration and alignment procedures, must run online, making extensive use of the database systems. The associations from the events to the database objects are carefully introduced, considering efficiency and flexibility. The challenges of managing the slow-control information were addressed by introducing data and update objects used in special processing on dedicated servers. The system integrates the DAQ client/server protocols with customized active database servers and relies on a high-performance database support toolkit. For applications that require complex selection mechanisms, as in the data-quality databases, the relevant data are replicated using a relational database management system.
Journal of Physics: Conference Series | 2015
Pier Paolo Ricci; Alessandro Cavalli; Luca dell'Agnello; Matteo Favaro; Daniele Gregori; Andrea Prosperini; Michele Pezzi; Vladimir Sapunenko; Giovanni Zizzi; Vincenzo Vagnoni
The consolidation of Mass Storage services at the INFN-CNAF Tier1 Storage department over the last five years has resulted in a reliable, high-performance and moderately easy-to-manage facility that provides data access, archive, backup and database services to several different use cases. At present, the GEMSS Mass Storage System, developed and installed at CNAF and based on an integration between the IBM GPFS parallel file system and the Tivoli Storage Manager (TSM) tape management software, is one of the largest hierarchical storage sites in Europe. It provides storage resources for about 12% of LHC data, as well as for data of other non-LHC experiments. Files are accessed using standard SRM Grid services provided by the Storage Resource Manager (StoRM), also developed at CNAF. Data access is also provided through XRootD and HTTP/WebDAV endpoints. Besides these services, an Oracle database facility is in production, characterized by an effective level of parallelism, redundancy and availability. This facility runs databases for storing and accessing relational data objects and for providing database services to the currently active use cases. It takes advantage of several Oracle technologies, such as Real Application Clusters (RAC), Automatic Storage Management (ASM) and the Enterprise Manager centralized management tools, together with other technologies for performance optimization, ease of management and downtime reduction. The aim of the present paper is to illustrate the state of the art of the INFN-CNAF Tier1 Storage department infrastructure and software services, and to give a brief outlook on forthcoming projects. A description of the administrative, monitoring and problem-tracking tools that play a primary role in managing the whole storage framework is also given.
Journal of Physics: Conference Series | 2015
Vladimir Sapunenko; Domenico D'Urso; Luca dell'Agnello; Vincenzo Vagnoni; Matteo Duranti
Data management constitutes one of the major challenges that a geographically distributed e-Infrastructure has to face, especially when remote data access is involved. We discuss an integrated solution which enables transparent and efficient access to online and near-line data through high-latency networks. The solution is based on the joint use of the General Parallel File System (GPFS) and the Tivoli Storage Manager (TSM). Both products, developed by IBM, are well known and extensively used in the HEP computing community. Owing to a new feature introduced in GPFS 3.5, the so-called Active File Management (AFM), it becomes possible to define a single, geographically distributed namespace with automated data flow management between different locations. As a practical example, we present the implementation of AFM-based remote data access between two data centres located in Bologna and Rome, demonstrating the validity of the solution for the use case of the AMS experiment, an astro-particle experiment supported by the INFN CNAF data centre with large disk space requirements (more than 1.5 PB).
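To convey the idea of transparent remote access through a caching fileset such as a GPFS AFM cache, the sketch below times a first (cold) read of a file, which would be fetched over the WAN from the home site, against a second (warm) read served locally. The path is hypothetical, and on a real system the operating-system page cache would also contribute to the warm-read speed-up; this is only an illustration of the access pattern, not of the AFM configuration itself.

```python
# Minimal sketch: cold vs. warm read through a caching fileset (e.g. GPFS AFM).
# The path below is a hypothetical AFM cache mount; results are illustrative.
import time

CACHE_PATH = "/gpfs/afm_cache/ams/run_0001.dat"  # hypothetical cached file

def timed_read(path: str) -> float:
    """Read the whole file sequentially and return the elapsed wall time."""
    start = time.time()
    with open(path, "rb") as f:
        while f.read(8 * 1024 * 1024):  # 8 MiB reads
            pass
    return time.time() - start

if __name__ == "__main__":
    cold = timed_read(CACHE_PATH)  # may trigger a fetch from the remote home site
    warm = timed_read(CACHE_PATH)  # expected to be served from the local cache
    print(f"cold read: {cold:.1f} s, warm read: {warm:.1f} s")
```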
Archive | 2011
D. Andreotti; D. Bonacorsi; Alessandro Cavalli; S. Dal Pra; L. dell’Agnello; Alberto Forti; Claudio Grandi; Daniele Gregori; L. Li Gioi; B. Martelli; Andrea Prosperini; Pier Paolo Ricci; Elisabetta Ronchieri; Vladimir Sapunenko; A. Sartirana; Vincenzo Vagnoni; Riccardo Zappi
A brand new Mass Storage System solution called the “Grid-Enabled Mass Storage System” (GEMSS), based on the Storage Resource Manager (StoRM) developed by INFN, on the General Parallel File System by IBM and on the Tivoli Storage Manager by IBM, has been tested and deployed at the INFN-CNAF Tier-1 Computing Centre in Italy. After a successful stress-test phase, the solution is now used in production for the custodial storage of CMS experiment data at CNAF. All data previously recorded on the CASTOR system have been transferred to GEMSS. As a final validation of the GEMSS system, some of the computing tests done in the context of the WLCG “Scale Test for the Experiment Program” (STEP’09) challenge were repeated in September-October 2009 and compared with the results previously obtained with CASTOR in June 2009. In this paper, the GEMSS system basics, the stress-test activity and the deployment phase, as well as the reliability and performance of the system, are reviewed. The experience gained in using GEMSS at CNAF in preparation for the first months of data taking of the CMS experiment at the Large Hadron Collider is also presented.