A. Carbone
University of Bologna
Publications
Featured research published by A. Carbone.
IEEE-NPSS Real-Time Conference | 2005
A. Barczyk; Daniela Bortolotti; A. Carbone; J.P. Dufey; Domenico Galli; B. Gaidioz
The scope of this paper is to report on the measurements performed to test the reliability of high-rate data transmission over copper Gigabit Ethernet, using commodity hardware, for the LHCb online system. The reliability of the system is crucial for the functioning of the software trigger layers of the LHCb experiment. The technological challenge in the system implementation consists of handling the expected high data throughput of event fragments using, to a large extent, commodity equipment.
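As an illustration of this kind of point-to-point test, the sketch below sends numbered UDP datagrams from one commodity host to another and estimates throughput and datagram loss on the receiving side. It is only a minimal outline, not the measurement suite used in the paper; the port, payload size and datagram count are assumptions.

```python
# Minimal UDP throughput / packet-loss probe (illustrative sketch, not the
# measurement suite used in the paper). Run "recv" on one host and
# "send <receiver-ip>" on the other; port and sizes are assumptions.
import socket, struct, sys, time

PORT = 5001           # arbitrary test port
PAYLOAD = 1400        # bytes per datagram, below the standard 1500-byte MTU
COUNT = 100_000       # datagrams to send

def send(dest_ip):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    start = time.time()
    for seq in range(COUNT):
        # Prefix each datagram with a sequence number so the receiver can
        # detect drops without any back-channel.
        sock.sendto(struct.pack("!I", seq) + b"\0" * (PAYLOAD - 4),
                    (dest_ip, PORT))
    elapsed = time.time() - start
    print(f"sent {COUNT} datagrams, {COUNT * PAYLOAD * 8 / elapsed / 1e6:.1f} Mbit/s")

def recv():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    sock.settimeout(5.0)
    received, last_seq = 0, -1
    try:
        while True:
            data, _ = sock.recvfrom(65535)
            received += 1
            last_seq = struct.unpack("!I", data[:4])[0]
    except socket.timeout:
        pass
    # Estimated drops = highest sequence number seen + 1 minus datagrams received.
    print(f"received {received}, estimated drops {last_seq + 1 - received}")

if __name__ == "__main__":
    send(sys.argv[2]) if sys.argv[1] == "send" else recv()
```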
International Conference on e-Science | 2007
A. Carbone; Luca dell'Agnello; Alberto Forti; Antonia Ghiselli; E. Lanciotti; Luca Magnoni; Mirco Mazzucato; R. Santinelli; Vladimir Sapunenko; Vincenzo Vagnoni; Riccardo Zappi
High-performance disk-storage solutions based on parallel file systems are becoming increasingly important to meet the large I/O throughput required by high-energy physics applications. Storage area networks (SAN) are commonly employed at the Large Hadron Collider data centres, and SAN-oriented parallel file systems such as GPFS and Lustre provide high scalability and availability by aggregating many data volumes served by multiple disk-servers into a single POSIX file system hierarchy. Since these file systems do not come with a storage resource manager (SRM) interface, which is necessary to access and manage the data volumes in a grid environment, a dedicated project called StoRM has been developed to provide them with the necessary SRM capabilities. In this paper we describe the deployment of a StoRM instance configured to manage a GPFS file system. A software suite was developed to perform functionality and throughput stress tests on StoRM, and we present the results of these tests.
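For illustration, the following sketch approximates the throughput part of such a stress test by writing and reading back large files from several concurrent processes on a POSIX-mounted file system. The mount point, file size and degree of concurrency are assumptions, and the SRM-level functionality tests described in the paper are not reproduced.

```python
# Illustrative concurrent write/read throughput probe against a POSIX mount
# (e.g. a GPFS file system); the mount point and sizes are assumptions and
# the SRM-level tests described in the paper are not reproduced here.
import os, time
from multiprocessing import Pool

MOUNT = "/gpfs/test"          # hypothetical mount point
FILE_MB = 1024                # size of each test file in MiB
WORKERS = 8                   # concurrent client processes
CHUNK = 4 * 1024 * 1024       # 4 MiB I/O block size

def write_then_read(worker_id):
    path = os.path.join(MOUNT, f"stress_{worker_id}.dat")
    block = b"\0" * CHUNK
    t0 = time.time()
    with open(path, "wb") as f:
        for _ in range(FILE_MB * 1024 * 1024 // CHUNK):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # make sure data really reaches the disk-servers
    t1 = time.time()
    # Note: reads may be served from the client page cache unless the file
    # is larger than the available memory.
    with open(path, "rb") as f:
        while f.read(CHUNK):
            pass
    t2 = time.time()
    os.remove(path)
    return FILE_MB / (t1 - t0), FILE_MB / (t2 - t1)   # MiB/s write, read

if __name__ == "__main__":
    with Pool(WORKERS) as pool:
        rates = pool.map(write_then_read, range(WORKERS))
    print(f"aggregate write {sum(w for w, _ in rates):.0f} MiB/s, "
          f"read {sum(r for _, r in rates):.0f} MiB/s")
```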
IEEE Transactions on Nuclear Science | 2010
Marco Bencivenni; Daniela Bortolotti; A. Carbone; Alessandro Cavalli; Andrea Chierici; Stefano Dal Pra; Donato De Girolamo; Luca dell'Agnello; Massimo Donatelli; Armando Fella; Domenico Galli; Antonia Ghiselli; Daniele Gregori; Alessandro Italiano; Rajeev Kumar; U. Marconi; B. Martelli; Mirco Mazzucato; Michele Onofri; Gianluca Peco; S. Perazzini; Andrea Prosperini; Pier Paolo Ricci; Elisabetta Ronchieri; F Rosso; Davide Salomoni; Vladimir Sapunenko; Vincenzo Vagnoni; Riccardo Veraldi; Maria Cristina Vistoli
In the prospect of employing 10 Gigabit Ethernet as networking technology for online systems and offline data analysis centers of High Energy Physics experiments, we performed a series of measurements on the performance of 10 Gigabit Ethernet, using the network interface cards mounted on the PCI-Express bus of commodity PCs both as transmitters and receivers. In real operating conditions, the achievable maximum transfer rate through a network link is not only limited by the capacity of the link itself, but also by that of the memory and peripheral buses and by the ability of the CPUs and of the Operating System to handle packet processing and interrupts raised by the network interface cards in due time. Besides the TCP and UDP maximum data transfer throughputs, we also measured the CPU loads of the sender/receiver processes and of the interrupt and soft-interrupt handlers as a function of the packet size, either using standard or "jumbo" Ethernet frames. In addition, we also performed the same measurements by simultaneously reading data from Fibre Channel links and forwarding them through a 10 Gigabit Ethernet link, hence emulating the behavior of a disk server in a Storage Area Network exporting data to client machines via 10 Gigabit Ethernet.
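On Linux, the interrupt and soft-interrupt CPU time mentioned above can be read from /proc/stat; the sketch below samples the system-wide fractions at regular intervals while a transfer runs. It is only a rough stand-in for the per-process and per-handler accounting used in the paper, and the interval and duration are assumptions.

```python
# Sample system-wide CPU time fractions (including hard- and soft-interrupt
# time) from /proc/stat while a network transfer runs in the background.
# Illustrative sketch only; the paper's measurements were per handler and
# per process, which is not reproduced here.
import time

FIELDS = ["user", "nice", "system", "idle", "iowait", "irq", "softirq"]

def read_cpu():
    # First line of /proc/stat: aggregate counters over all CPUs (in jiffies).
    with open("/proc/stat") as f:
        parts = f.readline().split()[1:1 + len(FIELDS)]
    return dict(zip(FIELDS, map(int, parts)))

def sample(interval=1.0, duration=10.0):
    prev = read_cpu()
    for _ in range(int(duration / interval)):
        time.sleep(interval)
        cur = read_cpu()
        delta = {k: cur[k] - prev[k] for k in FIELDS}
        total = sum(delta.values()) or 1
        print("  ".join(f"{k} {100 * v / total:5.1f}%" for k, v in delta.items()))
        prev = cur

if __name__ == "__main__":
    sample()
```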
IEEE Transactions on Nuclear Science | 2008
Marco Bencivenni; F. Bonifazi; A. Carbone; Andrea Chierici; A. D'Apice; D. De Girolamo; Luca dell'Agnello; Massimo Donatelli; G. Donvito; Armando Fella; F. Furano; Domenico Galli; Antonia Ghiselli; Alessandro Italiano; G. Lo Re; U. Marconi; B. Martelli; Mirco Mazzucato; Michele Onofri; Pier Paolo Ricci; F Rosso; Davide Salomoni; Vladimir Sapunenko; V. Vagnoni; Riccardo Veraldi; Maria Cristina Vistoli; D. Vitlacil; S. Zani
Performance, reliability and scalability in data access are key issues in the context of the computing Grid and of High Energy Physics data processing and analysis applications, in particular considering the large data size and I/O load that a Large Hadron Collider data centre has to support. In this paper we present the technical details and the results of a large-scale validation and performance measurement employing different data-access platforms, namely CASTOR, dCache, GPFS and Scalla/Xrootd. The tests have been performed at the CNAF Tier-1, the central computing facility of the Italian National Institute for Nuclear Physics (INFN). Our storage back-end was based on Fibre Channel disk-servers organized in a Storage Area Network, with the disk-servers connected to the computing farm via Gigabit Ethernet LAN. We used 24 disk-servers, 260 TB of raw disk space and 280 worker nodes as computing clients, able to run up to about 1100 concurrent jobs. The aim of the test was to perform sequential and random read/write accesses to the data, as well as more realistic access patterns, in order to evaluate the efficiency, availability, robustness and performance of the various data-access solutions.
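A minimal way to approximate the sequential and random read patterns used in such tests is sketched below; the file path, block size and number of random reads are assumptions, and the actual campaign ran hundreds of concurrent jobs against the storage systems listed above.

```python
# Illustrative sequential vs. random read probe on a single file; path and
# sizes are assumptions, not the configuration used in the CNAF tests.
import os, random, time

PATH = "/storage/testfile.dat"   # hypothetical file on the storage under test
BLOCK = 1024 * 1024              # 1 MiB reads
RANDOM_READS = 1000

def sequential_read():
    t0, read_bytes = time.time(), 0
    with open(PATH, "rb") as f:
        while chunk := f.read(BLOCK):
            read_bytes += len(chunk)
    return read_bytes / (time.time() - t0) / 1e6     # MB/s

def random_read():
    size = os.path.getsize(PATH)
    t0 = time.time()
    with open(PATH, "rb") as f:
        for _ in range(RANDOM_READS):
            # Seek to a random offset and read one block.
            f.seek(random.randrange(0, max(size - BLOCK, 1)))
            f.read(BLOCK)
    return RANDOM_READS * BLOCK / (time.time() - t0) / 1e6

if __name__ == "__main__":
    print(f"sequential {sequential_read():.0f} MB/s, random {random_read():.0f} MB/s")
```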
IEEE Transactions on Nuclear Science | 2006
A. Barczyk; Daniela Bortolotti; A. Carbone; J.P. Dufey; Domenico Galli; B. Gaidioz; Daniele Gregori; B. Jost; U. Marconi; N. Neufeld; Gianluca Peco; Vincenzo Vagnoni
We report on measurements performed to test the reliability of high-rate data transmission over copper Gigabit Ethernet for the LHCb online system. High reliability of such transmissions will be crucial for the functioning of the software trigger layers of the LHCb experiment at CERN's LHC accelerator. The technological challenge in the system implementation consists of handling the expected high data throughput of event fragments using, to a large extent, commodity equipment. We report on performance evaluations (throughput, error rates and frame drops) of the main components involved in data transmission: the Ethernet cable, the PCI bus and the operating system (the latest kernel versions of Linux). Three different platforms have been used.
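Frame drops and errors of the kind counted in these evaluations are exposed by Linux per network interface under /sys/class/net/<iface>/statistics/; the sketch below reads a few of those counters before and after a transfer. The interface name is an assumption.

```python
# Read per-interface packet, drop and error counters before and after a
# transfer (illustrative; the interface name is an assumption).
IFACE = "eth0"
COUNTERS = ["rx_packets", "tx_packets", "rx_dropped", "tx_dropped",
            "rx_errors", "tx_errors"]

def read_counters(iface=IFACE):
    stats = {}
    for name in COUNTERS:
        with open(f"/sys/class/net/{iface}/statistics/{name}") as f:
            stats[name] = int(f.read())
    return stats

if __name__ == "__main__":
    before = read_counters()
    input("run the transfer, then press Enter... ")
    after = read_counters()
    for name in COUNTERS:
        print(f"{name}: {after[name] - before[name]}")
```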
IEEE-NPSS Real-Time Conference | 2009
Marco Bencivenni; A. Carbone; Armando Fella; Domenico Galli; U. Marconi; Gianluca Peco; S. Perazzini; Vincenzo Vagnoni; S. Zani
In the prospect of employing 10 Gigabit Ethernet as networking technology for online systems and offline data analysis centers of High Energy Physics experiments, we performed a series of measurements with point-to-point data transfers over 10 Gigabit Ethernet links, using the network interface cards mounted on the PCI-Express bus of commodity PCs both as transmitters and receivers. In real operating conditions, the maximum achievable transfer rate through a network link is not only limited by the capacity of the link itself, but also by those of the memory and peripheral buses and by the ability of the CPUs and of the Operating System to handle packet processing and interrupts raised by the network interface cards in due time. Besides the TCP and UDP maximum data transfer throughputs, we also measured the CPU loads of the sender/receiver processes and of the interrupt and soft-interrupt handlers as a function of the packet size, either using standard or “jumbo” Ethernet frames. In addition, we also performed the same measurements by simultaneously reading data from Fibre Channel links and forwarding them through a 10 Gigabit Ethernet link, hence emulating the behavior of a disk server in a Storage Area Network exporting data to client machines via 10 Gigabit Ethernet.
IEEE Transactions on Nuclear Science | 2008
F. Bonifazi; A. Carbone; Domenico Galli; C. Gaspar; Daniele Gregori; U. Marconi; Gianluca Peco; Vincenzo Vagnoni; E. van Herwijnen
The LHCb experiment at CERN will have an on-line trigger farm composed of up to 2000 PCs. In order to monitor and control each PC and to supervise the overall status of the farm, a farm monitoring and control system (FMC) was developed. The FMC is based on the distributed information management (DIM) system as its network communication layer; it is accessible both through a command-line interface and through the Prozessvisualisierungs- und Steuerungssystem (PVSS) graphical interface, and it is interfaced to the finite state machine (FSM) of the LHCb experiment control system (ECS) in order to manage anomalous farm conditions. The FMC is an integral part of the ECS, which is in charge of monitoring and controlling all on-line components; it uses the same tools (DIM, PVSS, FSM, etc.) to guarantee its complete integration and a coherent look and feel throughout the whole control system.
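As a conceptual illustration of node-level farm monitoring (not the actual FMC, and without DIM or PVSS), the sketch below has each node periodically publish a few basic health metrics as JSON datagrams to a hypothetical central collector; the collector address, period and metric set are assumptions.

```python
# Conceptual illustration of node-level farm monitoring: each node
# periodically publishes basic health metrics as JSON over UDP to a
# collector. This does NOT use DIM or PVSS; host, port and metric set are
# assumptions meant only to convey the publish/collect idea.
import json, os, socket, time

COLLECTOR = ("fmc-collector.example", 9999)   # hypothetical collector address

def collect_metrics():
    load1, _, _ = os.getloadavg()
    meminfo = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            meminfo[key] = int(value.split()[0])        # value in kB
    return {
        "host": socket.gethostname(),
        "load1": load1,
        "mem_free_kb": meminfo.get("MemFree", 0),
        "timestamp": time.time(),
    }

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        sock.sendto(json.dumps(collect_metrics()).encode(), COLLECTOR)
        time.sleep(10)
```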
European Physical Journal C | 2004
P. Salvini; A. Donzella; D. Galli; P. Cerello; S. Marcello; G. Usai; A. Filippi; A. Maggiora; M.P. Bussa; O.E. Gorchakov; L. Ferrero; A. Bianconi; A. Bertin; S. Vecchi; V. Lucherini; B. Giacobbe; T. Bressani; L. Venturelli; M. Poli; E. Lodi Rizzini; S. De Castro; F. Grimaldi; L. Busso; M. Villa; F. Tosello; A. Zenoni; C. Petrascu; V. Filippini; V.I. Tretyak; M. Capponi
The spin-parity analysis of the data on the $\bar{p} p \to 2\pi^+ 2\pi^-$ annihilation reaction at rest in liquid and in gaseous hydrogen at 3 bar pressure and in flight at
International Conference on Computing in High Energy and Nuclear Physics (CHEP-07) | 2008
R. Nandakumar; S G Jimenez; M. Adinolfi; R. Bernet; J. Blouw; Daniela Bortolotti; A. Carbone; B M'Charek; D. L. Perego; A Pickford; C. Potterat; M S Miguelez; M Bargiotti; N. H. Brook; A. Casajus; G Castellani; Philippe Charpentier; C Cioffi; J. Closier; R. Graciani Diaz; G Kuznetsov; Stuart Paterson; R. Santinelli; A.C. Smith; A Tsaregorotsev
Physics Letters B | 2003
M. Bargiotti; A. Bertin; M. Bruschi; M. Capponi; A. Carbone; S. De Castro; R. Donà; L. Fabbri; P. Faccioli; D. Galli; Benedetto Giacobbe; F. Grimaldi; U. Marconi; I. Massa; M. Piccinini; M. Poli; R. Spighi; V. Vagnoni; S. Vecchi; M. Villa; A. Vitale; A. Zoccoli; A. Bianconi; M.P. Bussa; M. Corradini; A. Donzella; E. Lodi Rizzini; L. Venturelli; C. Cicalò; A. De Falco