Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Alessandro Italiano is active.

Publication


Featured research published by Alessandro Italiano.


Journal of Physics: Conference Series | 2011

WNoDeS, a tool for integrated Grid and Cloud access and computing farm virtualization

Davide Salomoni; Alessandro Italiano; Elisabetta Ronchieri

INFN CNAF is the National Computing Center, located in Bologna, Italy, of the Italian National Institute for Nuclear Physics (INFN). INFN CNAF, also called the INFN Tier-1, provides computing and storage facilities to the International High-Energy Physics community and to several multi-disciplinary experiments. Currently, the INFN Tier-1 supports more than twenty different collaborations; in this context, optimization of the usage of computing resources is essential. This is one of the main drivers behind the development of a software called WNoDeS (Worker Nodes on Demand Service). WNoDeS, developed at INFN CNAF and deployed on the INFN Tier-1 production infrastructure, is a solution to virtualize computing resources and to make them available through local, Grid or Cloud interfaces. It is designed to be fully integrated with a Local Resource Management System; it is therefore inherently scalable and permits full integration with existing scheduling, policing, monitoring, accounting and security workflows. WNoDeS dynamically instantiates Virtual Machines (VMs) on demand, i.e. only when the need arises; these VMs can be tailored and used for purposes like batch job execution, interactive analysis or service instantiation. WNoDeS supports interaction with user requests through traditional batch or Grid jobs and also via the Open Cloud Computing Interface standard, making it possible to allocate compute, storage and network resources on a pay-as-you-go basis. User authentication is supported via several authentication methods, while authorization policies are handled via gLite Argus. WNoDeS is an ambitious solution aimed at virtualizing cluster resources in medium or large scale computing centers, with up to several thousand Virtual Machines up and running at any given time. In this paper, we describe the WNoDeS architecture.
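As a rough illustration of the OCCI access path mentioned above (not WNoDeS-specific code), a compute-resource request against an OCCI endpoint could look like the following Python sketch. The endpoint URL, proxy certificate path and attribute values are hypothetical; the headers follow the generic OCCI HTTP text rendering.

```python
# Minimal sketch of an OCCI "create compute resource" request.
# Endpoint, proxy path and attribute values are placeholders.
import requests

OCCI_ENDPOINT = "https://cloud.example.cnaf.infn.it:8443"  # hypothetical endpoint

headers = {
    "Content-Type": "text/occi",
    # Declare the OCCI Infrastructure 'compute' kind.
    "Category": 'compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"',
    # Requested VM characteristics (illustrative values).
    "X-OCCI-Attribute": 'occi.core.title="analysis-vm", '
                        'occi.compute.cores=4, '
                        'occi.compute.memory=8.0',
}

response = requests.post(
    f"{OCCI_ENDPOINT}/compute/",
    headers=headers,
    cert="/tmp/x509up_u1000",                   # client proxy certificate (placeholder path)
    verify="/etc/grid-security/certificates",   # CA directory (placeholder path)
)
response.raise_for_status()
# The Location header points at the newly created compute resource.
print("Created VM resource:", response.headers.get("Location"))
```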


IEEE Transactions on Nuclear Science | 2010

Performance of 10 Gigabit Ethernet Using Commodity Hardware

Marco Bencivenni; Daniela Bortolotti; A. Carbone; Alessandro Cavalli; Andrea Chierici; Stefano Dal Pra; Donato De Girolamo; Luca dell'Agnello; Massimo Donatelli; Armando Fella; Domenico Galli; Antonia Ghiselli; Daniele Gregori; Alessandro Italiano; Rajeev Kumar; U. Marconi; B. Martelli; Mirco Mazzucato; Michele Onofri; Gianluca Peco; S. Perazzini; Andrea Prosperini; Pier Paolo Ricci; Elisabetta Ronchieri; F Rosso; Davide Salomoni; Vladimir Sapunenko; Vincenzo Vagnoni; Riccardo Veraldi; Maria Cristina Vistoli

In the prospect of employing 10 Gigabit Ethernet as networking technology for online systems and offline data analysis centers of High Energy Physics experiments, we performed a series of measurements on the performance of 10 Gigabit Ethernet, using the network interface cards mounted on the PCI-Express bus of commodity PCs both as transmitters and receivers. In real operating conditions, the achievable maximum transfer rate through a network link is not only limited by the capacity of the link itself, but also by that of the memory and peripheral buses and by the ability of the CPUs and of the Operating System to handle packet processing and interrupts raised by the network interface cards in due time. Besides the TCP and UDP maximum data transfer throughputs, we also measured the CPU loads of the sender/receiver processes and of the interrupt and soft-interrupt handlers as a function of the packet size, either using standard or "jumbo" Ethernet frames. In addition, we also performed the same measurements by simultaneously reading data from Fibre Channel links and forwarding them through a 10 Gigabit Ethernet link, hence emulating the behavior of a disk server in a Storage Area Network exporting data to client machines via 10 Gigabit Ethernet.
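To make the kind of memory-to-memory throughput test described above concrete, here is a minimal TCP sender/receiver sketch in Python. It is an illustration only, not the benchmark code used in the paper; the port, buffer size and duration are arbitrary, and the choice of standard versus jumbo frames is an MTU setting on the NICs and switches rather than something controlled from this application-level code.

```python
# Illustrative memory-to-memory TCP throughput test (not the paper's benchmark code).
# Run "server" on the receiving host and "client <host>" on the sending host.
import socket
import sys
import time

PORT = 5001            # arbitrary port
BUF_SIZE = 64 * 1024   # application buffer size
DURATION = 10          # seconds of sending

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        received = 0
        start = time.time()
        while True:
            data = conn.recv(BUF_SIZE)
            if not data:
                break
            received += len(data)
        elapsed = time.time() - start
        print(f"received {received / elapsed / 1e9 * 8:.2f} Gbit/s")

def client(host):
    payload = b"\x00" * BUF_SIZE
    sent = 0
    with socket.create_connection((host, PORT)) as conn:
        start = time.time()
        while time.time() - start < DURATION:
            conn.sendall(payload)
            sent += len(payload)
        elapsed = time.time() - start
    print(f"sent {sent / elapsed / 1e9 * 8:.2f} Gbit/s")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```

Per-CPU and interrupt-handler loads of the kind reported in the paper would be sampled separately (e.g. from /proc/stat) while such a transfer runs.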


IEEE Transactions on Nuclear Science | 2008

A Comparison of Data-Access Platforms for the Computing of Large Hadron Collider Experiments

Marco Bencivenni; F. Bonifazi; A. Carbone; Andrea Chierici; A. D'Apice; D. De Girolamo; Luca dell'Agnello; Massimo Donatelli; G. Donvito; Armando Fella; F. Furano; Domenico Galli; Antonia Ghiselli; Alessandro Italiano; G. Lo Re; U. Marconi; B. Martelli; Mirco Mazzucato; Michele Onofri; Pier Paolo Ricci; F Rosso; Davide Salomoni; Vladimir Sapunenko; V. Vagnoni; Riccardo Veraldi; Maria Cristina Vistoli; D. Vitlacil; S. Zani

Performance, reliability and scalability in data access are key issues in the context of the computing Grid and of High Energy Physics data processing and analysis applications, in particular considering the large data size and I/O load that a Large Hadron Collider data centre has to support. In this paper we present the technical details and the results of a large-scale validation and performance measurement employing different data-access platforms, namely CASTOR, dCache, GPFS and Scalla/Xrootd. The tests have been performed at the CNAF Tier-1, the central computing facility of the Italian National Institute for Nuclear Physics (INFN). Our storage back-end was based on Fibre Channel disk-servers organized in a Storage Area Network, with the disk-servers connected to the computing farm via Gigabit LAN. We used 24 disk-servers, 260 TB of raw disk space and 280 worker nodes as computing clients, able to run concurrently up to about 1100 jobs. The aim of the test was to perform sequential and random read/write accesses to the data, as well as more realistic access patterns, in order to evaluate efficiency, availability, robustness and performance of the various data-access solutions.
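The sequential and random access patterns exercised in these tests can be illustrated with a short sketch; this is not the actual test suite used in the paper, and the file path, block size and read count are placeholders.

```python
# Illustrative sequential vs. random read patterns against a file on the
# storage system under test. Path, block size and read count are placeholders.
import os
import random
import time

PATH = "/storage/testfile.dat"   # placeholder path on the mounted storage
BLOCK = 1024 * 1024              # 1 MiB per read
N_RANDOM = 1000                  # number of random reads

def sequential_read(path):
    read = 0
    start = time.time()
    with open(path, "rb", buffering=0) as f:
        while chunk := f.read(BLOCK):
            read += len(chunk)
    return read / (time.time() - start) / 1e6  # MB/s

def random_read(path):
    size = os.path.getsize(path)
    read = 0
    start = time.time()
    with open(path, "rb", buffering=0) as f:
        for _ in range(N_RANDOM):
            f.seek(random.randrange(0, max(size - BLOCK, 1)))
            read += len(f.read(BLOCK))
    return read / (time.time() - start) / 1e6  # MB/s

if __name__ == "__main__":
    print(f"sequential: {sequential_read(PATH):.1f} MB/s")
    print(f"random:     {random_read(PATH):.1f} MB/s")
```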


international parallel and distributed processing symposium | 2009

INFN-CNAF activity in the TIER-1 and GRID for LHC experiments

Marco Bencivenni; M. Canaparo; F. Capannini; L. Carota; M. Carpene; Alessandro Cavalli; Andrea Ceccanti; M. Cecchi; Daniele Cesini; Andrea Chierici; V. Ciaschini; A. Cristofori; S Dal Pra; Luca dell'Agnello; D De Girolamo; Massimo Donatelli; D. N. Dongiovanni; Enrico Fattibene; T. Ferrari; A Ferraro; Alberto Forti; Antonia Ghiselli; Daniele Gregori; G. Guizzunti; Alessandro Italiano; L. Magnoni; B. Martelli; Mirco Mazzucato; Giuseppe Misurelli; Michele Onofri

The four High Energy Physics (HEP) detectors at the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) are among the most important experiments in which the National Institute for Nuclear Physics (INFN) is actively involved. A Grid infrastructure of the Worldwide LHC Computing Grid (WLCG) has been developed by the HEP community, leveraging broader initiatives (e.g. EGEE in Europe, OSG in North America), as a framework to exchange and maintain data storage and to provide computing infrastructure for the entire LHC community. INFN-CNAF in Bologna hosts the Italian Tier-1 site, which is the largest Italian centre in the WLCG distributed computing infrastructure. In the first part of this paper we will describe the building of the Italian Tier-1 to cope with the WLCG computing requirements, focusing on some of its peculiarities; in the second part we will analyze the INFN-CNAF contribution to the development of the Grid middleware, stressing in particular the characteristics of the Virtual Organization Membership Service (VOMS), the de facto standard for authorization on a Grid, and StoRM, an implementation of the Storage Resource Manager (SRM) specifications for POSIX file systems. In particular, StoRM is used at INFN-CNAF in conjunction with the General Parallel File System (GPFS), and we are also testing an integration with Tivoli Storage Manager (TSM) to realize a complete Hierarchical Storage Management (HSM) solution.


Journal of Physics: Conference Series | 2010

Deployment of job priority mechanisms in the Italian Cloud of the ATLAS experiment

Alessandra Doria; Alex Barchiesi; S. Campana; G. Carlino; Claudia Ciocca; Alessandro De Salvo; Alessandro Italiano; Elisa Musto; L. Perini; Massimo Pistolese; Lorenzo Rinaldi; Davide Salomoni; Luca Vaccarossa; Elisabetta Vilucchi

An optimized use of the Grid computing resources in the ATLAS experiment requires the enforcement of a mechanism of job priorities and of resource sharing among the different activities inside the ATLAS VO. This mechanism has been implemented through the publication of VOViews in the information system and through fair-share configuration per UNIX group in the batch system. The VOView concept consists of publishing resource information, such as running and waiting jobs, as a function of VO groups and roles. The ATLAS Italian Cloud is composed of the CNAF Tier-1 and the Roma Tier-2, with farms based on the LSF batch system, and the Tier-2s of Frascati, Milano and Napoli, based on PBS/Torque. In this paper we describe how the job priority mechanism has been tested and deployed in the cloud, where the VOMS-based regional group /atlas/it has been created. We show that the VOViews are published and correctly managed by the WMS and that the resources allocated to generic VO users, to users with the production role and to users of the /atlas/it group correspond to the defined shares.
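To illustrate the fair-share idea behind this setup, the sketch below gives a higher dynamic priority to groups that have recently consumed less than their configured share. This is a simplified illustration with made-up share values, not the actual LSF or PBS/Torque scheduling algorithm or the shares used in the Italian Cloud.

```python
# Simplified fair-share illustration: under-served groups are dispatched first.
configured_shares = {          # target fractions per VOMS group/role (made-up values)
    "atlas":            0.50,  # generic VO users
    "atlas/production": 0.35,  # users with the production role
    "atlas/it":         0.15,  # the Italian regional group
}

recent_usage = {               # fraction of recent CPU time actually consumed
    "atlas":            0.60,
    "atlas/production": 0.30,
    "atlas/it":         0.10,
}

def dynamic_priority(group: str) -> float:
    """Higher value means jobs of this group are dispatched first."""
    target = configured_shares[group]
    used = recent_usage.get(group, 0.0)
    # Groups below their target share get priority > 1, groups above it < 1.
    return target / max(used, 1e-6)

for g in configured_shares:
    print(f"{g:18s} priority {dynamic_priority(g):.2f}")
```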


Journal of Physics: Conference Series | 2011

Virtual pools for interactive analysis and software development through an integrated Cloud environment

C Grandi; Alessandro Italiano; Davide Salomoni; A K Calabrese Melcarne

WNoDeS, an acronym for Worker Nodes on Demand Service, is software developed at the CNAF Tier-1, the National Computing Centre of the Italian Institute for Nuclear Physics (INFN) located in Bologna. WNoDeS provides on-demand, integrated access to both Grid and Cloud resources through virtualization technologies. Besides the traditional use of computing resources in batch mode, users need interactive and local access to a number of systems. WNoDeS can dynamically provide such systems by instantiating Virtual Machines tailored to the users' requirements (computing, storage and network resources), either through the Open Cloud Computing Interface API or through a web console. Interactive use is usually limited to activities in user space, i.e. where the machine configuration is not modified. In other instances the activity concerns the development and testing of services and thus implies modification of the system configuration (and, therefore, root access to the resource). The former use case is a simple extension of the WNoDeS approach, where the resource is provided in interactive mode. The latter implies saving the virtual image at the end of each user session so that it can be presented to the user at subsequent requests. This work describes how the LHC experiments at INFN-Bologna are testing and making use of these dynamically created ad-hoc machines via WNoDeS to support flexible, interactive analysis and software development at the INFN Tier-1 Computing Centre.


Journal of Physics: Conference Series | 2008

Enabling a priority-based fair share in the EGEE infrastructure

Daniele Cesini; V. Ciaschini; D. N. Dongiovanni; A Ferraro; Alberto Forti; Antonia Ghiselli; Alessandro Italiano; Davide Salomoni

While starting to use the Grid in production, applications have begun to request the implementation of complex policies regarding the use of resources. Some Virtual Organizations (VOs) want to divide their users into different priority brackets and classify the resources into different classes; others instead do not need advanced setups and are satisfied with considering all users and resources equal. Resource managers have to work to enable these requirements on their site, in addition to the work necessary to implement policies regarding the use of their resources and to ensure compliance with Acceptable Use Policies. These requirements end up prescribing the existence of a security framework that is not only capable of satisfying them, but that is also scalable and flexible enough not to require continuous, unnecessary low-level tweaking of the configuration setup every time the requirements change. Here we will describe in detail the layout used in several Italian sites of the EGEE (Enabling Grids for E-sciencE) infrastructure to deal with these requirements, along with a complete rationale for our choices, with the intent of clarifying what issues an administrator may run into when dealing with priority requirements and what common pitfalls should be avoided at all costs. Beyond the feedback from VO and site administrators on interfaces for policy management, we will report especially on the aspects arising from the mapping of Grid-level policies onto local computing resource authorization mechanisms at Grid sites, and on how they interfere from a management and security point of view.
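As a concrete illustration of mapping Grid-level attributes onto local authorization, the sketch below maps VOMS FQANs (the group/role attributes carried in a user's proxy) to local UNIX groups that a batch system could use for shares and priorities. The FQANs and group names are invented for the example; this is not the actual LCMAPS or site configuration discussed in the paper.

```python
# Simplified sketch of mapping VOMS FQANs to local UNIX groups used for
# batch-system shares. Illustration of the concept only; the entries are invented.
FQAN_TO_LOCAL_GROUP = [
    # (FQAN prefix, local group), ordered from most to least specific.
    ("/atlas/it/Role=production", "atlasitprd"),
    ("/atlas/it",                 "atlasit"),
    ("/atlas/Role=production",    "atlasprd"),
    ("/atlas",                    "atlas"),
]

def map_fqans(fqans: list[str]) -> str:
    """Return the local group for the first (most specific) matching FQAN."""
    for prefix, group in FQAN_TO_LOCAL_GROUP:
        if any(f.startswith(prefix) for f in fqans):
            return group
    raise PermissionError("no authorized mapping for the presented FQANs")

# Example: a user of the Italian regional group holding the production role.
print(map_fqans(["/atlas/it/Role=production", "/atlas/Role=NULL"]))  # -> atlasitprd
```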


Journal of Physics: Conference Series | 2012

The INFN Tier-1

G Bortolotti; Alessandro Cavalli; L Chiarelli; Andrea Chierici; S Dal Pra; Luca dell'Agnello; D De Girolamo; Massimo Donatelli; A Ferraro; Daniele Gregori; Alessandro Italiano; B. Martelli; A Mazza; Michele Onofri; Andrea Prosperini; Pier Paolo Ricci; Elisabetta Ronchieri; F Rosso; Vladimir Sapunenko; Riccardo Veraldi; C Vistoli; S Zani

INFN-CNAF is the central computing facility of INFN: it is the Italian Tier-1 for the experiments at the LHC, but also one of the main Italian computing facilities for several other experiments such as BABAR, CDF, SuperB, Virgo, Argo, AMS, Pamela, MAGIC and Auger. Currently there is an installed CPU capacity of 100,000 HS06, a net disk capacity of 9 PB and an equivalent amount of tape storage (these figures will be increased in the first half of 2012 to 125,000 HS06, 12 PB and 18 PB respectively). More than 80,000 computing jobs are executed daily on the farm, managed by LSF, accessing the storage, managed by GPFS, with an aggregate bandwidth of up to several GB/s. Access to the storage system from the farm is direct, through the file protocol. The interconnection of the computing resources and the data storage is based on 10 Gbps technology. The disk-servers and the storage systems are connected through a Storage Area Network, allowing complete flexibility and ease of management; dedicated disk-servers are connected, also via the SAN, to the tape library. The INFN Tier-1 is connected to the other centers via 3×10 Gbps links (to be upgraded at the end of 2012), including the LHCOPN and the LHCONE. In this paper we show the main results of our center after two full years of LHC running.


ieee nuclear science symposium | 2009

Activities and performance optimization of the Italian computing centers supporting the ATLAS experiment

Elisabetta Vilucchi; A. Andreazza; Daniela Anzellotti; Dario Barberis; Alessandro Brunengo; S. Campana; G. Carlino; Claudia Ciocca; Mirko Corosu; Maria Curatolo; Luca dell'Agnello; Alessandro De Salvo; Alessandro Di Girolamo; Alessandra Doria; Maria Lorenza Ferrer; Alberto Forti; Alessandro Italiano; Lamberto Luminari; Luca Magnoni; B. Martelli; Agnese Martini; Leonardo Merola; Elisa Musto; L. Perini; Massimo Pistolese; David Rebatto; S. Resconi; Lorenzo Rinaldi; Davide Salomoni; Luca Vaccarossa

With this work we present the activities and performance optimization of the Italian computing centers supporting the ATLAS experiment, which form the so-called Italian Cloud. We describe the activities of the ATLAS Italian Tier-2 Federation inside the ATLAS computing model and present some original Italian contributions. We describe StoRM, a new Storage Resource Manager developed by INFN, as a replacement for Castor at CNAF, the Italian Tier-1, and under test at the Tier-2 centers. We also show the failover solution for the ATLAS LFC, based on Oracle Data Guard, load-balancing DNS and LFC daemon reconfiguration, realized between CNAF and the Tier-2 in Roma. Finally, we describe the sharing of resources between Analysis and Production, recently implemented in the ATLAS Italian Cloud with the Job Priority mechanism.


Journal of Physics: Conference Series | 2008

Storage management solutions and performance tests at the INFN Tier-1

Marco Bencivenni; A. Carbone; Andrea Chierici; A. D'Apice; Donato De Girolamo; Luca dell'Agnello; Massimo Donatelli; G. Donvito; Armando Fella; A Forti; F. Furano; Domenico Galli; Antonia Ghiselli; Alessandro Italiano; E Lanciotti; G L Re; L Magnoni; U. Marconi; B. Martelli; Mirco Mazzucato; Pier Paolo Ricci; F Rosso; Davide Salomoni; R Santinelli; Vladimir Sapunenko; V. Vagnoni; Riccardo Veraldi; D. Vitlacil; S. Zani; R Zappi

Performance, reliability and scalability in data access are key issues in the context of HEP data processing and analysis applications. In this paper we present the results of a large-scale performance measurement performed at the INFN-CNAF Tier-1, employing some of the storage solutions presently available for HEP computing, namely CASTOR, GPFS, Scalla/Xrootd and dCache. The storage infrastructure was based on Fibre Channel systems organized in a Storage Area Network, providing 260 TB of total disk space, with 24 disk servers connected to the computing farm (280 worker nodes) via Gigabit LAN. We also describe the deployment of a StoRM SRM instance at CNAF, configured to manage a GPFS file system, and present and discuss its performance.

Collaboration


Top co-authors of Alessandro Italiano.

Davide Salomoni (Istituto Nazionale di Fisica Nucleare)
Andrea Chierici (Istituto Nazionale di Fisica Nucleare)
Luca dell'Agnello (Istituto Nazionale di Fisica Nucleare)
B. Martelli (Istituto Nazionale di Fisica Nucleare)
Antonia Ghiselli (Istituto Nazionale di Fisica Nucleare)
Elisabetta Ronchieri (Istituto Nazionale di Fisica Nucleare)
Mirco Mazzucato (Istituto Nazionale di Fisica Nucleare)
Riccardo Veraldi (Istituto Nazionale di Fisica Nucleare)