Publications


Featured research published by Luca dell'Agnello.


International Conference on e-Science | 2007

Performance Studies of the StoRM Storage Resource Manager

A. Carbone; Luca dell'Agnello; Alberto Forti; Antonia Ghiselli; E. Lanciotti; Luca Magnoni; Mirco Mazzucato; R. Santinelli; Vladimir Sapunenko; Vincenzo Vagnoni; Riccardo Zappi

High performance disk-storage solutions based on parallel file systems are becoming increasingly important to meet the large I/O throughput requirements of high-energy physics applications. Storage area networks (SAN) are commonly employed at the Large Hadron Collider data centres, and SAN-oriented parallel file systems such as GPFS and Lustre provide high scalability and availability by aggregating many data volumes served by multiple disk-servers into a single POSIX file system hierarchy. Since these file systems do not come with a storage resource manager (SRM) interface, necessary to access and manage the data volumes in a grid environment, a specific project called StoRM has been developed to provide them with the necessary SRM capabilities. In this paper we describe the deployment of a StoRM instance configured to manage a GPFS file system. A software suite has been developed to perform stress tests of functionality and throughput on StoRM. We present the results of these tests.
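
As a rough illustration of the kind of throughput stress test described above (this is not the authors' actual test suite), the following Python sketch spawns several concurrent writers against a mounted parallel file system and reports the aggregate write rate; the mount point, file size and writer count are hypothetical.

# Hypothetical throughput stress-test sketch: concurrent writers on a mounted
# parallel file system (e.g. a GPFS mount managed behind an SRM service).
import os
import time
from concurrent.futures import ThreadPoolExecutor

MOUNT = "/storage/gpfs_test"   # hypothetical mount point
FILE_SIZE = 256 * 1024 * 1024  # 256 MiB per file
N_WRITERS = 8
CHUNK = 4 * 1024 * 1024        # 4 MiB write buffer

def write_one(idx: int) -> int:
    """Write FILE_SIZE bytes and return the number of bytes written."""
    path = os.path.join(MOUNT, f"stress_{idx}.dat")
    buf = os.urandom(CHUNK)
    written = 0
    with open(path, "wb") as f:
        while written < FILE_SIZE:
            f.write(buf)
            written += CHUNK
        f.flush()
        os.fsync(f.fileno())   # make sure the data actually reaches the file system
    return written

if __name__ == "__main__":
    start = time.time()
    with ThreadPoolExecutor(max_workers=N_WRITERS) as pool:
        total = sum(pool.map(write_one, range(N_WRITERS)))
    elapsed = time.time() - start
    print(f"wrote {total / 1e6:.0f} MB in {elapsed:.1f} s "
          f"({total / 1e6 / elapsed:.1f} MB/s aggregate)")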


Future Generation Computer Systems | 2005

The INFN-grid testbed

Roberto Alfieri; R. Barbera; P. Belluomo; A. Cavalli; R. Cecchini; A. Chierici; V. Ciaschini; Luca dell'Agnello; F. Donno; E. Ferro; A. Forte; Luciano Gaido; Antonia Ghiselli; A. Gianoli; A. Italiano; S. Lusso; M. Luvisetto; P. Mastroserio; M. Mazzucato; D. Mura; M. Reale; L. Salconi; G. Sava; M. Serra; F. Spataro; F. Taurino; G. Tortone; L. Vaccarossa; M. Verlato; G. Vita Finzi

The Italian INFN-Grid Project is committed to setting up, running and managing an unprecedented nation-wide Grid infrastructure. The implementation and use of this INFN-Grid Testbed are presented and discussed. Particular attention is devoted to the activities relevant to the management of the Testbed that are carried out by INFN within international Grid projects.


IEEE Transactions on Nuclear Science | 2010

Performance of 10 Gigabit Ethernet Using Commodity Hardware

Marco Bencivenni; Daniela Bortolotti; A. Carbone; Alessandro Cavalli; Andrea Chierici; Stefano Dal Pra; Donato De Girolamo; Luca dell'Agnello; Massimo Donatelli; Armando Fella; Domenico Galli; Antonia Ghiselli; Daniele Gregori; Alessandro Italiano; Rajeev Kumar; U. Marconi; B. Martelli; Mirco Mazzucato; Michele Onofri; Gianluca Peco; S. Perazzini; Andrea Prosperini; Pier Paolo Ricci; Elisabetta Ronchieri; F Rosso; Davide Salomoni; Vladimir Sapunenko; Vincenzo Vagnoni; Riccardo Veraldi; Maria Cristina Vistoli

In view of employing 10 Gigabit Ethernet as the networking technology for online systems and offline data analysis centers of High Energy Physics experiments, we performed a series of measurements on the performance of 10 Gigabit Ethernet, using the network interface cards mounted on the PCI-Express bus of commodity PCs both as transmitters and receivers. In real operating conditions, the achievable maximum transfer rate through a network link is limited not only by the capacity of the link itself, but also by that of the memory and peripheral buses and by the ability of the CPUs and of the Operating System to handle packet processing and interrupts raised by the network interface cards in due time. Besides the TCP and UDP maximum data transfer throughputs, we also measured the CPU loads of the sender/receiver processes and of the interrupt and soft-interrupt handlers as a function of the packet size, using either standard or "jumbo" Ethernet frames. In addition, we performed the same measurements while simultaneously reading data from Fibre Channel links and forwarding them through a 10 Gigabit Ethernet link, hence emulating the behavior of a disk server in a Storage Area Network exporting data to client machines via 10 Gigabit Ethernet.
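
As a simplified illustration of this kind of measurement (not the tooling used in the paper), the Python sketch below streams data over a TCP socket and reports the achieved throughput; the endpoint address, port and buffer sizes are hypothetical, and a real test would of course run sender and receiver on separate hosts connected by the 10 Gigabit link.

# Minimal TCP throughput probe, in the spirit of the measurements described
# above (the paper's own benchmark code is not shown here). Run the receiver first.
import socket
import sys
import time

HOST, PORT = "127.0.0.1", 5001   # hypothetical endpoint
BUF = 64 * 1024                  # application-level buffer size
TOTAL = 1 * 1024**3              # send 1 GiB

def receiver() -> None:
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(BUF):   # drain the stream until the sender closes
                pass

def sender() -> None:
    payload = b"\x00" * BUF
    sent = 0
    start = time.time()
    with socket.create_connection((HOST, PORT)) as s:
        while sent < TOTAL:
            s.sendall(payload)
            sent += BUF
    elapsed = time.time() - start
    print(f"{sent * 8 / 1e9 / elapsed:.2f} Gbit/s over {elapsed:.1f} s")

if __name__ == "__main__":
    receiver() if sys.argv[1:] == ["recv"] else sender()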


IEEE Transactions on Nuclear Science | 2008

A Comparison of Data-Access Platforms for the Computing of Large Hadron Collider Experiments

Marco Bencivenni; F. Bonifazi; A. Carbone; Andrea Chierici; A. D'Apice; D. De Girolamo; Luca dell'Agnello; Massimo Donatelli; G. Donvito; Armando Fella; F. Furano; Domenico Galli; Antonia Ghiselli; Alessandro Italiano; G. Lo Re; U. Marconi; B. Martelli; Mirco Mazzucato; Michele Onofri; Pier Paolo Ricci; F Rosso; Davide Salomoni; Vladimir Sapunenko; V. Vagnoni; Riccardo Veraldi; Maria Cristina Vistoli; D. Vitlacil; S. Zani

Performance, reliability and scalability in data access are key issues in the context of the computing Grid and of High Energy Physics data processing and analysis applications, in particular considering the large data size and I/O load that a Large Hadron Collider data centre has to support. In this paper we present the technical details and the results of a large-scale validation and performance measurement employing different data-access platforms, namely CASTOR, dCache, GPFS and Scalla/Xrootd. The tests have been performed at the CNAF Tier-1, the central computing facility of the Italian National Institute for Nuclear Physics (INFN). Our storage back-end was based on Fibre Channel disk-servers organized in a Storage Area Network, with the disk-servers connected to the computing farm via Gigabit LAN. We used 24 disk-servers, 260 TB of raw disk space and 280 worker nodes as computing clients, able to run up to about 1100 concurrent jobs. The aim of the test was to perform sequential and random read/write accesses to the data, as well as more realistic access patterns, in order to evaluate the efficiency, availability, robustness and performance of the various data-access solutions.
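
To make the notion of sequential versus random access patterns concrete, the sketch below times 1 MiB reads at sequential and at random offsets of a large test file; it is a generic illustration under assumed paths and sizes, not the validation suite used in the paper.

# Illustrative sequential-vs-random read benchmark; the test file path and
# sizes are hypothetical, and the file is assumed to already exist and be large.
import os
import random
import time

PATH = "/storage/testfile.dat"   # hypothetical pre-existing large file (>= 512 MiB)
BLOCK = 1024 * 1024              # 1 MiB reads
N_READS = 512

def read_pattern(offsets) -> float:
    """Read BLOCK bytes at each offset and return the achieved MB/s."""
    start = time.time()
    with open(PATH, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return N_READS * BLOCK / 1e6 / (time.time() - start)

if __name__ == "__main__":
    size = os.path.getsize(PATH)
    sequential = [i * BLOCK for i in range(N_READS)]
    rand = [random.randrange(0, size - BLOCK) for _ in range(N_READS)]
    print(f"sequential: {read_pattern(sequential):.1f} MB/s")
    print(f"random:     {read_pattern(rand):.1f} MB/s")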


International Parallel and Distributed Processing Symposium | 2009

INFN-CNAF activity in the TIER-1 and GRID for LHC experiments

Marco Bencivenni; M. Canaparo; F. Capannini; L. Carota; M. Carpene; Alessandro Cavalli; Andrea Ceccanti; M. Cecchi; Daniele Cesini; Andrea Chierici; V. Ciaschini; A. Cristofori; S Dal Pra; Luca dell'Agnello; D De Girolamo; Massimo Donatelli; D. N. Dongiovanni; Enrico Fattibene; T. Ferrari; A Ferraro; Alberto Forti; Antonia Ghiselli; Daniele Gregori; G. Guizzunti; Alessandro Italiano; L. Magnoni; B. Martelli; Mirco Mazzucato; Giuseppe Misurelli; Michele Onofri

The four High Energy Physics (HEP) detectors at the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) are among the most important experiments in which the National Institute for Nuclear Physics (INFN) is actively involved. A Grid infrastructure of the Worldwide LHC Computing Grid (WLCG) has been developed by the HEP community, leveraging broader initiatives (e.g. EGEE in Europe, OSG in North America), as a framework to exchange and maintain data storage and to provide computing infrastructure for the entire LHC community. INFN-CNAF in Bologna hosts the Italian Tier-1 site, the largest Italian centre in the WLCG distributed computing infrastructure. In the first part of this paper we describe the building of the Italian Tier-1 to cope with the WLCG computing requirements, focusing on some of its peculiarities; in the second part we analyze the INFN-CNAF contribution to the development of the grid middleware, stressing in particular the characteristics of the Virtual Organization Membership Service (VOMS), the de facto standard for authorization on a grid, and of StoRM, an implementation of the Storage Resource Manager (SRM) specification for POSIX file systems. In particular, StoRM is used at INFN-CNAF in conjunction with the General Parallel File System (GPFS), and we are also testing an integration with the Tivoli Storage Manager (TSM) to realize a complete Hierarchical Storage Management (HSM) solution.


Journal of Physics: Conference Series | 2012

The Grid Enabled Mass Storage System (GEMSS): the Storage and Data management system used at the INFN Tier1 at CNAF.

Pier Paolo Ricci; D. Bonacorsi; Alessandro Cavalli; Luca dell'Agnello; Daniele Gregori; Andrea Prosperini; Lorenzo Rinaldi; Vladimir Sapunenko; Vincenzo Vagnoni

The storage system currently used in production at the INFN Tier1 at CNAF is the result of several years of case studies, software development and tests. This solution, called the Grid Enabled Mass Storage System (GEMSS), is based on a custom integration of a fast and reliable parallel filesystem (the IBM General Parallel File System, GPFS) with a fully integrated tape backend based on the Tivoli Storage Manager (TSM), which provides Hierarchical Storage Management (HSM) capabilities, and with the Storage Resource Manager (StoRM), which provides access to grid users through a standard SRM interface. Since the start of Large Hadron Collider (LHC) operation, all LHC experiments have been using GEMSS at CNAF for both disk data access and long-term archival on tape media. Moreover, during the last year, GEMSS has become the standard solution for all other experiments hosted at CNAF, allowing the definitive consolidation of the data storage layer. Our choice has proved to be very successful during the last two years of production, with continuous enhancements, accurate monitoring and effective customizations according to end-user requests. In this paper a description of the system is given, addressing recent developments and providing an overview of the administration and monitoring tools. We also discuss the solutions adopted in order to guarantee the maximum availability of the service and the latest optimization features within the data access process. Finally, we summarize the main results obtained during these last years of activity from the perspective of some of the end-users, showing the reliability and the high performance that can be achieved using GEMSS.
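
One simple way to observe the disk/tape behaviour of an HSM system of this kind is to compare a file's logical size with the disk blocks actually allocated to it: a file whose data has been migrated to tape typically keeps its full size while occupying almost no blocks on disk. The sketch below implements this generic heuristic; it is not a GEMSS tool, and the interpretation of st_blocks as a residency indicator is an assumption about the HSM setup.

# Heuristic check of whether a file in an HSM-managed file system is resident
# on disk or has been migrated to tape: compare logical size with allocated
# blocks. Generic sketch, not part of GEMSS itself.
import os
import sys

def looks_migrated(path: str) -> bool:
    st = os.stat(path)
    allocated = st.st_blocks * 512          # st_blocks is in 512-byte units
    # A non-empty file with (almost) no blocks on disk is likely a stub
    # whose data has been moved to the tape back-end.
    return st.st_size > 0 and allocated < st.st_size * 0.01

if __name__ == "__main__":
    for p in sys.argv[1:]:
        state = "migrated (tape)" if looks_migrated(p) else "resident (disk)"
        print(f"{p}: {state}")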


IEEE Nuclear Science Symposium | 2008

A novel approach for mass storage data custodial

A. Carbone; Luca dell'Agnello; Antonia Ghiselli; D. Gregori; Luca Magnoni; B. Martelli; Mirco Mazzucato; P.P. Ricci; Elisabetta Ronchieri; V. Sapunenko; V. Vagnoni; D. Vitlacil; Riccardo Zappi

The mass storage challenge for the Large Hadron Collider (LHC) experiments is still a critical issue for the various Tier-1 computing centres and the Tier-0 centre involved in the custody and analysis of the data produced by the experiments. In particular, the requirements for the tape mass storage systems are quite strong, amounting to several petabytes of data that should be available for near-line access at any time. Besides the solutions already widely employed by the High Energy Physics community, an interesting new option has recently emerged. It is based on the interaction between the General Parallel File System (GPFS) and the Tivoli Storage Manager (TSM) by IBM. The new features introduced in GPFS version 3.2 allow GPFS to be interfaced with tape storage managers. We implemented such an interface for TSM and performed various performance studies on a pre-production system. Together with the StoRM SRM interface, developed jointly by INFN-CNAF and ICTP-Trieste, this solution can fulfill all the requirements of a Tier-1 WLCG centre. The first StoRM-GPFS-TSM based system has now entered its production phase at CNAF, where it is presently used by the LHCb experiment. We describe the implementation of the interface and the prototype test-bed, and we discuss the results of some tests.
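
The policy-driven idea behind such a disk-to-tape interface can be sketched generically: when disk usage crosses a high watermark, select the least recently accessed files until usage would drop below a low watermark, then hand them to an external archiver. The following Python sketch only illustrates this concept; the real GPFS policy engine and the GPFS-TSM interface work differently, and the archiver command shown is hypothetical.

# Generic sketch of threshold-driven migration-candidate selection: when disk
# usage is above HIGH_WATERMARK, pick the least recently accessed files until
# usage would fall below LOW_WATERMARK, then hand them to a (hypothetical)
# external archiver. Purely conceptual; not the GPFS policy engine or TSM.
import os
import shutil
import subprocess

FS_ROOT = "/storage/gpfs_data"                    # hypothetical file system root
HIGH_WATERMARK, LOW_WATERMARK = 0.90, 0.80
ARCHIVER_CMD = "/usr/local/bin/archive-to-tape"   # hypothetical helper command

def usage_fraction(path: str) -> float:
    total, used, _ = shutil.disk_usage(path)
    return used / total

def candidates(root: str):
    """All regular files under root, least recently accessed first."""
    files = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            p = os.path.join(dirpath, name)
            st = os.stat(p)
            files.append((st.st_atime, st.st_size, p))
    return sorted(files)

if __name__ == "__main__":
    if usage_fraction(FS_ROOT) > HIGH_WATERMARK:
        total, _, _ = shutil.disk_usage(FS_ROOT)
        to_free = (usage_fraction(FS_ROOT) - LOW_WATERMARK) * total
        freed, batch = 0, []
        for _, size, path in candidates(FS_ROOT):
            batch.append(path)
            freed += size
            if freed >= to_free:
                break
        subprocess.run([ARCHIVER_CMD, *batch], check=True)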


Journal of Physics: Conference Series | 2015

The INFN-CNAF Tier-1 GEMSS Mass Storage System and database facility activity

Pier Paolo Ricci; Alessandro Cavalli; Luca dell'Agnello; Matteo Favaro; Daniele Gregori; Andrea Prosperini; Michele Pezzi; Vladimir Sapunenko; Giovanni Zizzi; Vincenzo Vagnoni

The consolidation of Mass Storage services at the INFN-CNAF Tier1 Storage department over the last 5 years has resulted in a reliable, high-performance and moderately easy-to-manage facility that provides data access, archive, backup and database services to several different use cases. At present, the GEMSS Mass Storage System, developed and installed at CNAF and based on an integration between the IBM GPFS parallel filesystem and the Tivoli Storage Manager (TSM) tape management software, is one of the largest hierarchical storage sites in Europe. It provides storage resources for about 12% of LHC data, as well as for data of other non-LHC experiments. Files are accessed using standard SRM Grid services provided by the Storage Resource Manager (StoRM), also developed at CNAF. Data access is also provided by XRootD and HTTP/WebDAV endpoints. Besides these services, an Oracle database facility is in production, characterized by an effective level of parallelism, redundancy and availability. This facility runs databases for storing and accessing relational data objects and for providing database services to the currently active use cases. It takes advantage of several Oracle technologies, like Real Application Clusters (RAC), Automatic Storage Management (ASM) and the Enterprise Manager centralized management tools, together with other technologies for performance optimization, ease of management and downtime reduction. The aim of the present paper is to illustrate the state of the art of the INFN-CNAF Tier1 Storage department infrastructures and software services, and to give a brief outlook on forthcoming projects. A description of the administrative, monitoring and problem-tracking tools that play a primary role in managing the whole storage framework is also given.


Journal of Physics: Conference Series | 2015

An integrated solution for remote data access

Vladimir Sapunenko; Domenico D'Urso; Luca dell'Agnello; Vincenzo Vagnoni; Matteo Duranti

Data management constitutes one of the major challenges that a geographically distributed e-Infrastructure has to face, especially when remote data access is involved. We discuss an integrated solution which enables transparent and efficient access to on-line and near-line data through high-latency networks. The solution is based on the joint use of the General Parallel File System (GPFS) and of the Tivoli Storage Manager (TSM). Both products, developed by IBM, are well known and extensively used in the HEP computing community. Owing to a new feature introduced in GPFS 3.5, the so-called Active File Management (AFM), the definition of a single, geographically distributed namespace, characterised by automated data flow management between different locations, becomes possible. As a practical example, we present the implementation of AFM-based remote data access between two data centres located in Bologna and Rome, demonstrating the validity of the solution for the use case of the AMS experiment, an astro-particle experiment supported by the INFN CNAF data centre with large disk space requirements (more than 1.5 PB).
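
To get a feel for cache-based remote access of this kind, one can compare the first (cold) read of a file, which may have to be fetched from the remote site, with a subsequent (warm) read served locally. The sketch below is only an illustration: the path is hypothetical, nothing AFM-specific is used, and on a real system the warm read may also benefit from the operating system page cache.

# Compare a cold first read with a warm re-read of the same file through a
# cache-backed remote mount (e.g. an AFM cache fileset). Illustrative only;
# the path is hypothetical.
import time

PATH = "/storage/afm_cache/ams/run_001.root"   # hypothetical cached file

def timed_read(path: str) -> float:
    start = time.time()
    with open(path, "rb") as f:
        while f.read(8 * 1024 * 1024):   # stream the whole file in 8 MiB chunks
            pass
    return time.time() - start

if __name__ == "__main__":
    cold = timed_read(PATH)   # may trigger a fetch from the remote site
    warm = timed_read(PATH)   # should be served from the local cache
    print(f"cold read: {cold:.2f} s, warm read: {warm:.2f} s")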


Journal of Physics: Conference Series | 2014

Long Term Data Preservation for CDF at INFN-CNAF

S Amerio; L Chiarelli; Luca dell'Agnello; D De Girolamo; Daniele Gregori; M Pezzi; Andrea Prosperini; Pier Paolo Ricci; F Rosso; S. Zani

Long-term preservation of experimental data (both raw and derived formats) is one of the emerging requirements coming from scientific collaborations. Within the High Energy Physics community, the Data Preservation in High Energy Physics (DPHEP) group coordinates this effort. CNAF is not only one of the Tier-1s for the LHC experiments; it is also a computing center providing computing and storage resources to many other HEP and non-HEP scientific collaborations, including the CDF experiment. After the end of data taking in 2011, CDF now faces the challenge of both preserving the large amount of data produced during several years of running and retaining the ability to access and reuse it in the future. CNAF is heavily involved in the CDF Data Preservation activities, in collaboration with the Fermi National Accelerator Laboratory (FNAL) computing sector. At the moment, about 4 PB of data (raw data and analysis-level ntuples) are starting to be copied from FNAL to the CNAF tape library, and the framework to subsequently access the data is being set up. In parallel with the data access system, a data analysis framework is being developed which allows the complete CDF analysis chain to be run in the long-term future, from raw data reprocessing to analysis-level ntuple production. In this contribution we illustrate the technical solutions we put in place to address the issues encountered as we proceeded in this activity.
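
A basic ingredient of any long-term preservation copy is integrity verification of the transferred files. The sketch below recomputes checksums and compares them against a reference list; it is a generic illustration, not the actual CDF/CNAF tooling, and the "checksum path" line format of the reference list is an assumption.

# Verify copied files against a reference checksum list ("<md5>  <path>" per
# line). Generic illustration of preservation-style integrity checks; not the
# actual CDF/CNAF framework.
import hashlib
import sys

def md5sum(path: str, chunk: int = 8 * 1024 * 1024) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify(listfile: str) -> int:
    """Return the number of files whose checksum does not match the list."""
    bad = 0
    with open(listfile) as lf:
        for line in lf:
            expected, path = line.split(maxsplit=1)
            path = path.strip()
            actual = md5sum(path)
            if actual != expected:
                bad += 1
                print(f"MISMATCH {path}: expected {expected}, got {actual}")
    return bad

if __name__ == "__main__":
    sys.exit(1 if verify(sys.argv[1]) else 0)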

Collaboration


Luca dell'Agnello's top co-authors:

B. Martelli (Istituto Nazionale di Fisica Nucleare)
Elisabetta Ronchieri (Istituto Nazionale di Fisica Nucleare)
Antonia Ghiselli (Istituto Nazionale di Fisica Nucleare)
Alessandro Italiano (Istituto Nazionale di Fisica Nucleare)
Andrea Chierici (Istituto Nazionale di Fisica Nucleare)
Mirco Mazzucato (Istituto Nazionale di Fisica Nucleare)