Publications

Featured research published by B. Martelli.


IEEE Transactions on Nuclear Science | 2010

Performance of 10 Gigabit Ethernet Using Commodity Hardware

Marco Bencivenni; Daniela Bortolotti; A. Carbone; Alessandro Cavalli; Andrea Chierici; Stefano Dal Pra; Donato De Girolamo; Luca dell'Agnello; Massimo Donatelli; Armando Fella; Domenico Galli; Antonia Ghiselli; Daniele Gregori; Alessandro Italiano; Rajeev Kumar; U. Marconi; B. Martelli; Mirco Mazzucato; Michele Onofri; Gianluca Peco; S. Perazzini; Andrea Prosperini; Pier Paolo Ricci; Elisabetta Ronchieri; F Rosso; Davide Salomoni; Vladimir Sapunenko; Vincenzo Vagnoni; Riccardo Veraldi; Maria Cristina Vistoli

In the prospect of employing 10 Gigabit Ethernet as networking technology for online systems and offline data analysis centers of High Energy Physics experiments, we performed a series of measurements on the performance of 10 Gigabit Ethernet, using the network interface cards mounted on the PCI-Express bus of commodity PCs both as transmitters and receivers. In real operating conditions, the achievable maximum transfer rate through a network link is not only limited by the capacity of the link itself, but also by that of the memory and peripheral buses and by the ability of the CPUs and of the Operating System to handle packet processing and interrupts raised by the network interface cards in due time. Besides the TCP and UDP maximum data transfer throughputs, we also measured the CPU loads of the sender/receiver processes and of the interrupt and soft-interrupt handlers as a function of the packet size, either using standard or "jumbo" Ethernet frames. In addition, we also performed the same measurements by simultaneously reading data from Fibre Channel links and forwarding them through a 10 Gigabit Ethernet link, hence emulating the behavior of a disk server in a Storage Area Network exporting data to client machines via 10 Gigabit Ethernet.
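The end-to-end throughput measurement described above can be sketched in miniature with plain sockets. This is illustrative only: the sizes, the loopback transport and the helper names are invented for the example and bear no relation to the paper's actual 10 Gigabit Ethernet hardware setup.

```python
import socket
import threading
import time

PAYLOAD = b"x" * 65536          # 64 KiB per send; vary to study message-size effects
TOTAL_BYTES = 64 * 1024 * 1024  # 64 MiB test volume (small, for illustration only)

def receiver(server, results):
    """Accept one connection and drain TOTAL_BYTES from it."""
    conn, _ = server.accept()
    received = 0
    while received < TOTAL_BYTES:
        chunk = conn.recv(1 << 20)
        if not chunk:
            break
        received += len(chunk)
    conn.close()
    results["received"] = received

def run_test():
    """Send TOTAL_BYTES over loopback TCP and return (bytes, seconds)."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    results = {}
    t = threading.Thread(target=receiver, args=(server, results))
    t.start()
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(server.getsockname())
    start = time.monotonic()
    sent = 0
    while sent < TOTAL_BYTES:
        sent += client.send(PAYLOAD)  # send() may be partial; accumulate
    client.close()
    t.join()
    elapsed = time.monotonic() - start
    server.close()
    return results["received"], elapsed

if __name__ == "__main__":
    nbytes, secs = run_test()
    print(f"{nbytes / secs / 1e9:.2f} GB/s over loopback")
```

A real 10GbE study would additionally record per-core CPU load and interrupt rates while varying the MTU (standard vs. jumbo frames), which this loopback sketch cannot capture.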


IEEE Transactions on Nuclear Science | 2008

A Comparison of Data-Access Platforms for the Computing of Large Hadron Collider Experiments

Marco Bencivenni; F. Bonifazi; A. Carbone; Andrea Chierici; A. D'Apice; D. De Girolamo; Luca dell'Agnello; Massimo Donatelli; G. Donvito; Armando Fella; F. Furano; Domenico Galli; Antonia Ghiselli; Alessandro Italiano; G. Lo Re; U. Marconi; B. Martelli; Mirco Mazzucato; Michele Onofri; Pier Paolo Ricci; F Rosso; Davide Salomoni; Vladimir Sapunenko; V. Vagnoni; Riccardo Veraldi; Maria Cristina Vistoli; D. Vitlacil; S. Zani

Performance, reliability and scalability in data-access are key issues in the context of the computing Grid and High Energy Physics data processing and analysis applications, in particular considering the large data size and I/O load that a Large Hadron Collider data centre has to support. In this paper we present the technical details and the results of a large scale validation and performance measurement employing different data-access platforms-namely CASTOR, dCache, GPFS and Scalla/Xrootd. The tests have been performed at the CNAF Tier-1, the central computing facility of the Italian National Institute for Nuclear Research (INFN). Our storage back-end was based on Fibre Channel disk-servers organized in a Storage Area Network, being the disk-servers connected to the computing farm via Gigabit LAN. We used 24 disk-servers, 260 TB of raw-disk space and 280 worker nodes as computing clients, able to run concurrently up to about 1100 jobs. The aim of the test was to perform sequential and random read/write accesses to the data, as well as more realistic access patterns, in order to evaluate efficiency, availability, robustness and performance of the various data-access solutions.
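The sequential-versus-random read patterns mentioned above can be illustrated with a minimal, self-contained sketch. File size, block size and function names are invented for the example; the actual tests ran against multi-hundred-TB storage systems from hundreds of worker nodes.

```python
import os
import random
import tempfile
import time

FILE_SIZE = 8 * 1024 * 1024   # 8 MiB test file (real tests used far larger data sets)
BLOCK = 64 * 1024             # 64 KiB read size

def make_test_file():
    """Create a throwaway file filled with random bytes."""
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(FILE_SIZE))
    return path

def read_sequential(path):
    """Read the file front to back and return elapsed seconds."""
    start = time.monotonic()
    with open(path, "rb") as f:
        while f.read(BLOCK):
            pass
    return time.monotonic() - start

def read_random(path):
    """Read every block once, in shuffled order, and return elapsed seconds."""
    offsets = [i * BLOCK for i in range(FILE_SIZE // BLOCK)]
    random.shuffle(offsets)
    start = time.monotonic()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.monotonic() - start

if __name__ == "__main__":
    path = make_test_file()
    try:
        print(f"sequential: {read_sequential(path):.4f}s  random: {read_random(path):.4f}s")
    finally:
        os.unlink(path)
```

On spinning-disk back-ends the random pattern is typically far slower than the sequential one, which is precisely why both access patterns matter when comparing storage platforms.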


International Parallel and Distributed Processing Symposium | 2009

INFN-CNAF activity in the TIER-1 and GRID for LHC experiments

Marco Bencivenni; M. Canaparo; F. Capannini; L. Carota; M. Carpene; Alessandro Cavalli; Andrea Ceccanti; M. Cecchi; Daniele Cesini; Andrea Chierici; V. Ciaschini; A. Cristofori; S Dal Pra; Luca dell'Agnello; D De Girolamo; Massimo Donatelli; D. N. Dongiovanni; Enrico Fattibene; T. Ferrari; A Ferraro; Alberto Forti; Antonia Ghiselli; Daniele Gregori; G. Guizzunti; Alessandro Italiano; L. Magnoni; B. Martelli; Mirco Mazzucato; Giuseppe Misurelli; Michele Onofri

The four High Energy Physics (HEP) detectors at the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) are among the most important experiments in which the National Institute of Nuclear Physics (INFN) is actively involved. A Grid infrastructure, the Worldwide LHC Computing Grid (WLCG), has been developed by the HEP community leveraging broader initiatives (e.g. EGEE in Europe, OSG in North America) as a framework to exchange and maintain data storage and provide computing infrastructure for the entire LHC community. INFN-CNAF in Bologna hosts the Italian Tier-1 site, the largest Italian centre in the WLCG distributed computing infrastructure. In the first part of this paper we describe the building of the Italian Tier-1 to cope with the WLCG computing requirements, focusing on some peculiarities; in the second part we analyze the INFN-CNAF contribution to the development of the grid middleware, stressing in particular the characteristics of the Virtual Organization Membership Service (VOMS), the de facto standard for authorization on a grid, and StoRM, an implementation of the Storage Resource Manager (SRM) specification for POSIX file systems. In particular, StoRM is used at INFN-CNAF in conjunction with the General Parallel File System (GPFS), and we are also testing an integration with the Tivoli Storage Manager (TSM) to realize a complete Hierarchical Storage Management (HSM) solution.
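To give a flavour of the VOMS-style authorization attributes mentioned above, the following hypothetical sketch parses VOMS FQAN strings (e.g. "/atlas/Role=production") and maps them to local pool accounts. The mapping policy and all names are invented for illustration; this is not the actual VOMS implementation or any site's real configuration.

```python
# Hypothetical mapping from VOMS FQANs (the attribute strings a VOMS server
# attaches to a grid proxy) to local account pools. Policy table is invented.

def parse_fqan(fqan: str):
    """Split a VOMS FQAN into VO, group path and Role."""
    parts = [p for p in fqan.strip("/").split("/") if p]
    role = None
    groups = []
    for p in parts:
        if p.startswith("Role="):
            role = p.split("=", 1)[1]
        elif not p.startswith("Capability="):
            groups.append(p)
    return {"vo": groups[0] if groups else None,
            "groups": groups,
            "role": role}

# Invented site policy: the production role gets a dedicated pool account.
POOL_ACCOUNT = {
    ("atlas", "production"): "atlasprd",
    ("atlas", None): "atlasusr",
}

def map_to_local_account(fqan: str) -> str:
    """Return the local account for an FQAN, with a catch-all fallback."""
    info = parse_fqan(fqan)
    return POOL_ACCOUNT.get((info["vo"], info["role"]), "gridguest")

if __name__ == "__main__":
    print(map_to_local_account("/atlas/Role=production/Capability=NULL"))  # atlasprd
    print(map_to_local_account("/atlas"))                                  # atlasusr
```

In a real grid site this VO/group/role-to-account mapping is handled by dedicated authorization middleware rather than ad hoc code like this.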


IEEE Nuclear Science Symposium | 2008

A novel approach for mass storage data custodial

A. Carbone; Luca dell'Agnello; Antonia Ghiselli; D. Gregori; Luca Magnoni; B. Martelli; Mirco Mazzucato; P.P. Ricci; Elisabetta Ronchieri; V. Sapunenko; V. Vagnoni; D. Vitlacil; Riccardo Zappi

The mass storage challenge for the Large Hadron Collider (LHC) experiments remains a critical issue for the various Tier-1 computing centres and for the Tier-0 centre involved in the custody and analysis of the data produced by the experiments. In particular, the requirements on the tape mass storage systems are quite strong, amounting to several petabytes of data that should be available for near-line access at any time. Besides the solutions already widely employed by the High Energy Physics community, an interesting new option has recently emerged, based on the interaction between the General Parallel File System (GPFS) and the Tivoli Storage Manager (TSM) by IBM. The new features introduced in GPFS version 3.2 make it possible to interface GPFS with tape storage managers. We implemented such an interface for TSM and performed various performance studies on a pre-production system. Together with the StoRM SRM interface, developed as a joint collaboration between INFN-CNAF and ICTP-Trieste, this solution can fulfil all the requirements of a Tier-1 WLCG centre. The first StoRM-GPFS-TSM based system has now entered its production phase at CNAF, presently adopted by the LHCb experiment. We describe the implementation of the interface and the prototype test-bed, and we discuss the results of some tests.
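The policy-driven migration idea behind such a disk-to-tape interface can be sketched as follows. Everything here is illustrative: the age threshold and function names are invented, and the real mechanism is GPFS ILM policy rules invoking site scripts and TSM client commands, not Python code.

```python
# Hedged sketch of policy-driven migration: files matching a (here invented)
# "not accessed recently" rule are handed to a tape back-end stub.

import os
import time

AGE_THRESHOLD = 3600  # migrate files not accessed for an hour (invented policy)

def candidates(root: str):
    """Yield paths that an ILM-style policy would select for migration."""
    now = time.time()
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if now - os.stat(path).st_atime > AGE_THRESHOLD:
                yield path

def migrate(path: str) -> str:
    """Placeholder for handing a file to the tape system; in a real
    deployment a TSM client command would be invoked here."""
    return f"MIGRATED {path}"

def run_policy(root: str):
    """Apply the migration policy to every candidate under root."""
    return [migrate(p) for p in candidates(root)]
```

The essential design point, as in the paper's system, is that the file system's own policy engine selects candidates, so the tape manager never needs to scan the namespace itself.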


Journal of Physics: Conference Series | 2012

The INFN Tier-1

G Bortolotti; Alessandro Cavalli; L Chiarelli; Andrea Chierici; S Dal Pra; Luca dell'Agnello; D De Girolamo; Massimo Donatelli; A Ferraro; Daniele Gregori; Alessandro Italiano; B. Martelli; A Mazza; Michele Onofri; Andrea Prosperini; Pier Paolo Ricci; Elisabetta Ronchieri; F Rosso; Vladimir Sapunenko; Riccardo Veraldi; C Vistoli; S Zani

INFN-CNAF is the central computing facility of INFN: it is the Italian Tier-1 for the experiments at the LHC, but also one of the main Italian computing facilities for several other experiments such as BABAR, CDF, SuperB, Virgo, Argo, AMS, Pamela, MAGIC and Auger. Currently there is an installed CPU capacity of 100,000 HS06, a net disk capacity of 9 PB and an equivalent amount of tape storage (these figures are going to be increased in the first half of 2012 to 125,000 HS06, 12 PB and 18 PB respectively). More than 80,000 computing jobs are executed daily on the farm, managed by LSF, accessing the storage, managed by GPFS, with an aggregate bandwidth of up to several GB/s. The access to the storage system from the farm is direct through the file protocol. The interconnection of the computing resources and the data storage is based on 10 Gbps technology. The disk-servers and the storage systems are connected through a Storage Area Network, allowing complete flexibility and ease of management; dedicated disk-servers are connected, also via the SAN, to the tape library. The INFN Tier-1 is connected to the other centers via 3×10 Gbps links (to be upgraded at the end of 2012), including the LHCOPN and the LHCONE. In this paper we show the main results of our center after two full years of LHC running.


Archive | 2011

First Experiences with CMS Data Storage on the GEMSS System at the INFN-CNAF Tier-1

D. Andreotti; D. Bonacorsi; Alessandro Cavalli; S. Dal Pra; L. dell’Agnello; Alberto Forti; Claudio Grandi; Daniele Gregori; L. Li Gioi; B. Martelli; Andrea Prosperini; Pier Paolo Ricci; Elisabetta Ronchieri; Vladimir Sapunenko; A. Sartirana; Vincenzo Vagnoni; Riccardo Zappi

A brand new Mass Storage System solution called “Grid-Enabled Mass Storage System” (GEMSS), based on the Storage Resource Manager (StoRM) developed by INFN, on the General Parallel File System by IBM and on the Tivoli Storage Manager by IBM, has been tested and deployed at the INFN-CNAF Tier-1 Computing Centre in Italy. After a successful stress test phase, the solution is now being used in production for the custody of the CMS experiment's data at CNAF. All data previously recorded on the CASTOR system have been transferred to GEMSS. As a final validation of the GEMSS system, some of the computing tests done in the context of the WLCG “Scale Test for the Experiment Program” (STEP’09) challenge were repeated in September-October 2009 and compared with the results previously obtained with CASTOR in June 2009. In this paper, the GEMSS system basics, the stress test activity and the deployment phase, as well as the reliability and performance of the system, are reviewed. The experiences in the use of GEMSS at CNAF in preparing for the first months of data taking of the CMS experiment at the Large Hadron Collider are also presented.


Journal of Physics: Conference Series | 2010

A lightweight high availability strategy for Atlas LCG File Catalogs

B. Martelli; Alessandro De Salvo; Daniela Anzellotti; Lorenzo Rinaldi; Alessandro Cavalli; Stefano Dal Pra; Luca dell'Agnello; Daniele Gregori; Andrea Prosperini; Pier Paolo Ricci; Vladimir Sapunenko

The LCG File Catalog (LFC) is a key component of the LHC Computing Grid middleware [1], as it contains the mapping between Logical File Names and Physical File Names on the Grid. The ATLAS computing model foresees multiple local LFCs, one housed in each Tier-1 and in the Tier-0, containing all information about the files stored in the regional cloud. As the local LFC contents are presently not replicated anywhere, this constitutes a dangerous single point of failure for all of the ATLAS regional clouds. In order to solve this problem we propose a novel solution for high availability (HA) of Oracle-based Grid services, obtained by combining an Oracle Data Guard deployment with a series of application-level scripts. This approach has the advantage of being very easy to deploy and maintain, and represents a good candidate solution for Tier-2s, which are usually small centres with limited manpower dedicated to service operations. We also present the results of a wide range of functionality and performance tests run on a test-bed with characteristics similar to those required for production. The test-bed consists of a failover deployment between the Italian LHC Tier-1 (INFN-CNAF) and an ATLAS Tier-2 located at INFN-Roma1. Moreover, we explain how the proposed strategy can be deployed on the present Grid infrastructure, without requiring any change to the middleware and in a way that is totally transparent to end users and applications.
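The application-level part of such a failover scheme boils down to a watchdog loop: probe the primary catalogue and, after repeated failures, point clients at the standby. The sketch below is a minimal illustration under invented names and thresholds; the real deployment relies on Oracle Data Guard plus site-specific scripts, not this code.

```python
# Illustrative failover watchdog. Probe function, switch action and the
# failure threshold are all assumptions made for this example.

FAILURE_THRESHOLD = 3  # consecutive failed probes before switching (invented)

class FailoverWatchdog:
    def __init__(self, probe_primary, switch_to_standby):
        self.probe_primary = probe_primary        # callable: True if primary is healthy
        self.switch_to_standby = switch_to_standby  # callable: redirect clients
        self.failures = 0
        self.on_standby = False

    def tick(self):
        """Run one monitoring cycle; switch once the threshold is reached."""
        if self.on_standby:
            return  # already failed over; a human decides when to fail back
        if self.probe_primary():
            self.failures = 0  # healthy probe resets the counter
        else:
            self.failures += 1
            if self.failures >= FAILURE_THRESHOLD:
                self.switch_to_standby()
                self.on_standby = True

# Example wiring: in a real setup probe_primary might attempt a catalogue
# query, and switch_to_standby might update a DNS alias.
if __name__ == "__main__":
    wd = FailoverWatchdog(probe_primary=lambda: False,
                          switch_to_standby=lambda: print("switched to standby"))
    for _ in range(FAILURE_THRESHOLD):
        wd.tick()
```

Requiring several consecutive failures before switching is what keeps a transient glitch from triggering a costly, hard-to-reverse failover.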


Journal of Physics: Conference Series | 2010

Commissioning of a StoRM based Data Management System for ATLAS at INFN sites

A Brunengo; C Ciocca; M Corosu; M Pistolese; F Prelz; Lorenzo Rinaldi; Elisabetta Ronchieri; Vladimir Sapunenko; A Andreazza; S Barberis; G Carlino; Alessandro Cavalli; S Dal Pra; Luca dell'Agnello; Daniele Gregori; B. Martelli; L Perini; Andrea Prosperini; Pier Paolo Ricci; D. Vitlacil

In the framework of the WLCG, Tier-1s need to manage large volumes of data, ranging at the PB scale. Moreover, they need to be able to transfer data from CERN and between the other centres (both Tier-1s and Tier-2s) with a sustained throughput of the order of hundreds of MB/s over the WAN, while at the same time offering fast and reliable access to the computing farm. In order to cope with these challenging requirements, at the INFN Tier-1 we have adopted a storage model based on StoRM/GPFS/TSM for the D1T0 and D1T1 Storage Classes and on CASTOR for D0T1. In this paper we present the results of the commissioning tests of this system for the ATLAS experiment, reproducing the real production case with a full transfer matrix involving the Tier-0 and all the other participating centres. Notably, the new approach of direct file access from the farm to the data is also covered, showing positive results. GPFS/StoRM has also been successfully deployed, configured and commissioned as the storage solution for an ATLAS INFN Tier-2, specifically the one in Milano. The results are shown and discussed in this paper, together with the ones obtained for the Tier-1.


IEEE Nuclear Science Symposium | 2009

Activities and performance optimization of the Italian computing centers supporting the ATLAS experiment

Elisabetta Vilucchi; A. Andreazza; Daniela Anzellotti; Dario Barberis; Alessandro Brunengo; S. Campana; G. Carlino; Claudia Ciocca; Mirko Corosu; Maria Curatolo; Luca dell'Agnello; Alessandro De Salvo; Alessandro Di Girolamo; Alessandra Doria; Maria Lorenza Ferrer; Alberto Forti; Alessandro Italiano; Lamberto Luminari; Luca Magnoni; B. Martelli; Agnese Martini; Leonardo Merola; Elisa Musto; L. Perini; Massimo Pistolese; David Rebatto; S. Resconi; Lorenzo Rinaldi; Davide Salomoni; Luca Vaccarossa

With this work we present the activities and performance optimization of the Italian computing centers supporting the ATLAS experiment, forming the so-called Italian Cloud. We describe the activities of the ATLAS Italian Tier-2s Federation inside the ATLAS computing model and present some original Italian contributions. We describe StoRM, a new Storage Resource Manager developed by INFN, as a replacement for CASTOR at CNAF (the Italian Tier-1) and under test at the Tier-2 centers. We also show the failover solution for the ATLAS LFC, based on Oracle Data Guard, load-balancing DNS and LFC daemon reconfiguration, realized between CNAF and the Tier-2 in Roma. Finally, we describe the sharing of resources between Analysis and Production, recently implemented in the ATLAS Italian Cloud with the Job Priority mechanism.


Journal of Physics: Conference Series | 2008

Storage management solutions and performance tests at the INFN Tier-1

Marco Bencivenni; A. Carbone; Andrea Chierici; A. D'Apice; Donato De Girolamo; Luca dell'Agnello; Massimo Donatelli; G. Donvito; Armando Fella; A Forti; F. Furano; Domenico Galli; Antonia Ghiselli; Alessandro Italiano; E Lanciotti; G L Re; L Magnoni; U. Marconi; B. Martelli; Mirco Mazzucato; Pier Paolo Ricci; F Rosso; Davide Salomoni; R Santinelli; Vladimir Sapunenko; V. Vagnoni; Riccardo Veraldi; D. Vitlacil; S. Zani; R Zappi

Performance, reliability and scalability in data access are key issues in the context of HEP data processing and analysis applications. In this paper we present the results of a large scale performance measurement performed at the INFN-CNAF Tier-1, employing some storage solutions presently available for HEP computing, namely CASTOR, GPFS, Scalla/Xrootd and dCache. The storage infrastructure was based on Fibre Channel systems organized in a Storage Area Network, providing 260 TB of total disk space, and 24 disk servers connected to the computing farm (280 worker nodes) via Gigabit LAN. We also describe the deployment of a StoRM SRM instance at CNAF, configured to manage a GPFS file system, presenting and discussing its performances.

Collaboration


B. Martelli's most frequent co-authors.

Top Co-Authors

Luca dell'Agnello
Istituto Nazionale di Fisica Nucleare

Elisabetta Ronchieri
Istituto Nazionale di Fisica Nucleare

Alessandro Italiano
Istituto Nazionale di Fisica Nucleare

Andrea Chierici
Istituto Nazionale di Fisica Nucleare

Antonia Ghiselli
Istituto Nazionale di Fisica Nucleare

Mirco Mazzucato
Istituto Nazionale di Fisica Nucleare