Elisabetta Ronchieri
Istituto Nazionale di Fisica Nucleare
Publications
Featured research published by Elisabetta Ronchieri.
Archive | 2004
P. Andreetto; Daniel Kouřil; Valentina Borgia; Aleš Křenek; A. Dorigo; Luděk Matyska; A. Gianelle; Miloš Mulač; M. Mordacchini; Jan Pospíšil; Massimo Sgaravatto; Miroslav Ruda; L. Zangrando; Zdeněk Salvet; S. Andreozzi; Jiří Sitera; Vincenzo Ciaschini; Jiří Škrabal; C. Di Giusto; Michal Voců; Francesco Giacomini; V. Martelli; V. Medici; Massimo Mezzadri; Elisabetta Ronchieri; Francesco Prelz; V. Venturi; D. Rebatto; Giuseppe Avellino; Salvatore Monforte
Resource management and scheduling of distributed, data-driven applications in a Grid environment are challenging problems. Although significant results have been achieved in the past few years, the development and proper deployment of generic, reliable, standard components still present open issues. The domains of interest include workload management, resource discovery, resource matchmaking and brokering, accounting, authorization policies, resource access, reliability and dependability. The evolution towards a service-oriented architecture, supported by emerging standards, is another activity that will demand attention. All these issues are being tackled within the EU-funded EGEE project (Enabling Grids for E-science in Europe), whose primary goals are the provision of robust middleware components and the creation of a reliable and dependable Grid infrastructure to support e-Science applications. In this paper we present the plans and preliminary activities aimed at providing adequate workload and resource management components suitable for deployment in a production-quality Grid.
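To make the matchmaking and brokering step concrete, here is a minimal Python sketch of how a broker might filter and rank resources against a job's requirements. All class and field names are illustrative assumptions, not the EGEE middleware API; the real WMS evaluates ClassAd-style expressions rather than this simplified logic.

```python
from dataclasses import dataclass

@dataclass
class ComputingElement:
    """Advertised attributes of a computing resource (illustrative only)."""
    name: str
    free_cpus: int
    os: str
    close_storage: set[str]  # names of nearby storage elements

def match_and_rank(job_requirements: dict,
                   resources: list[ComputingElement]) -> list[ComputingElement]:
    """Keep resources satisfying the job's requirements, best-ranked first.

    A hypothetical stand-in for the matchmaking/brokering step named in
    the abstract, not the actual WMS implementation.
    """
    eligible = [ce for ce in resources
                if ce.os == job_requirements["os"]
                and ce.free_cpus >= job_requirements["cpus"]]
    # Rank: prefer resources close to the job's input data, then free CPUs.
    data = set(job_requirements.get("input_storage", []))
    eligible.sort(key=lambda ce: (len(ce.close_storage & data), ce.free_cpus),
                  reverse=True)
    return eligible
```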
Journal of Grid Computing | 2004
G. Avellino; S. Beco; B. Cantalupo; A. Maraschini; F. Pacini; M. Sottilaro; A. Terracina; David Colling; F. Giacomini; Elisabetta Ronchieri; A. Gianelle; M. Mazzucato; R. Peluso; M. Sgaravatto; Andrea Guarise; R. Piro; Albert Werbrouck; Daniel Kouřil; Aleš Křenek; Ludek Matyska; Miloš Mulač; Jan Pospíšil; Miroslav Ruda; Zdeněk Salvet; Jiří Sitera; Jiří Škrabal; Michal Voců; M. Mezzadri; F. Prelz; S. Monforte
The workload management task of the DataGrid project was mandated to define and implement a suitable architecture for distributed scheduling and resource management in a Grid environment. The result was the design and implementation of a Grid Workload Management System, a super-scheduler with the distinguishing property of being able to take data access requirements into account when scheduling jobs on the available Grid resources. Many novel issues were faced in various fields, such as resource management, resource reservation and co-allocation, and Grid accounting. In this paper, the architecture and the functionality provided by the DataGrid Workload Management System are presented.
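The distinguishing property mentioned above, scheduling that accounts for data access, can be sketched as picking the computing element with the cheapest aggregate access to the job's input replicas. The function and catalogue names below are assumptions for illustration, not the DataGrid WMS API.

```python
def best_ce_for_job(input_files, replica_catalogue, candidate_ces, network_cost):
    """Pick the computing element with the cheapest aggregate access
    to the job's input data.

    `replica_catalogue` maps a logical file name to the storage elements
    holding a replica; `network_cost(ce, se)` estimates the transfer cost
    between a computing and a storage element. All names are illustrative.
    """
    def job_cost(ce):
        total = 0.0
        for lfn in input_files:
            replicas = replica_catalogue[lfn]
            # Assume each file is fetched from its cheapest replica.
            total += min(network_cost(ce, se) for se in replicas)
        return total

    return min(candidate_ces, key=job_cost)
```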
Future Generation Computer Systems | 2005
Chiara Curti; Tiziana Ferrari; Leon Gommans; S. van Oudenaarde; Elisabetta Ronchieri; Francesco Giacomini; Cristina Vistoli
The availability of information about properties and status of resources is essential for Grid resource brokers. However, while abstractions of computing and storage resources already exist, the notion of Grid network resource is far from being understood today. As a result, the integration of advanced network services is still difficult when a Grid system spans large-scale heterogeneous network infrastructures. In this paper, we propose a single definition of a Grid network resource abstraction for multiple types of network connectivity. This abstraction was successfully implemented and tested in a network resource management prototype supporting a variety of network technologies.
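A minimal sketch of what such a single abstraction could look like: one record type covering several connectivity classes. The field and enum names are assumptions for illustration, not the schema published in the paper.

```python
from dataclasses import dataclass
from enum import Enum

class Connectivity(Enum):
    """Types of network service the abstraction must cover (illustrative)."""
    BEST_EFFORT_IP = "best-effort IP"
    PREMIUM_IP = "premium IP"
    L2_PATH = "layer-2 path"
    LIGHTPATH = "optical lightpath"

@dataclass
class GridNetworkResource:
    """One abstraction for heterogeneous connectivity, in the spirit of
    the paper; field names are assumptions, not the published schema."""
    endpoint_a: str          # e.g. a storage or computing element
    endpoint_b: str
    connectivity: Connectivity
    bandwidth_mbps: float    # guaranteed or nominal capacity
    status: str              # e.g. "available", "reserved", "active"
```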
Journal of Physics: Conference Series | 2011
Davide Salomoni; Alessandro Italiano; Elisabetta Ronchieri
INFN CNAF is the National Computing Center, located in Bologna, Italy, of the Italian National Institute for Nuclear Physics (INFN). INFN CNAF, also called the INFN Tier-1, provides computing and storage facilities to the international High-Energy Physics community and to several multi-disciplinary experiments. Currently, the INFN Tier-1 supports more than twenty different collaborations; in this context, optimization of the usage of computing resources is essential. This is one of the main drivers behind the development of a software system called WNoDeS (Worker Nodes on Demand Service). WNoDeS, developed at INFN CNAF and deployed on the INFN Tier-1 production infrastructure, is a solution to virtualize computing resources and to make them available through local, Grid or Cloud interfaces. It is designed to be fully integrated with a Local Resource Management System; it is therefore inherently scalable and permits full integration with existing scheduling, policing, monitoring, accounting and security workflows. WNoDeS dynamically instantiates Virtual Machines (VMs) on demand, i.e. only when the need arises; these VMs can be tailored and used for purposes like batch job execution, interactive analysis or service instantiation. WNoDeS supports interaction with user requests through traditional batch or Grid jobs and also via the Open Cloud Computing Interface standard, making it possible to allocate compute, storage and network resources on a pay-as-you-go basis. User authentication is supported via several authentication methods, while authorization policies are handled via gLite Argus. WNoDeS is an ambitious solution aimed at virtualizing cluster resources in medium- or large-scale computing centers, with up to several thousand Virtual Machines up and running at any given time. In this paper, we describe the WNoDeS architecture.
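The on-demand instantiation model can be illustrated with a small Python sketch: a pool that starts a tailored VM only when a request arrives and destroys it when the work ends. Class and method names are hypothetical, not the WNoDeS API.

```python
import uuid

class VirtualMachinePool:
    """Minimal sketch of on-demand VM instantiation in the style the
    abstract describes; names are hypothetical, not the WNoDeS API."""

    def __init__(self, hypervisor):
        self.hypervisor = hypervisor   # object that can start/stop VMs
        self.running = {}              # vm_id -> image name

    def acquire(self, image: str, cpus: int, ram_mb: int) -> str:
        """Instantiate a VM tailored to the request, only when needed."""
        vm_id = str(uuid.uuid4())
        self.hypervisor.start(vm_id, image=image, cpus=cpus, ram_mb=ram_mb)
        self.running[vm_id] = image
        return vm_id

    def release(self, vm_id: str) -> None:
        """Destroy the VM when the batch job, session or service ends."""
        self.hypervisor.stop(vm_id)
        del self.running[vm_id]
```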
IEEE Transactions on Nuclear Science | 2010
Marco Bencivenni; Daniela Bortolotti; A. Carbone; Alessandro Cavalli; Andrea Chierici; Stefano Dal Pra; Donato De Girolamo; Luca dell'Agnello; Massimo Donatelli; Armando Fella; Domenico Galli; Antonia Ghiselli; Daniele Gregori; Alessandro Italiano; Rajeev Kumar; U. Marconi; B. Martelli; Mirco Mazzucato; Michele Onofri; Gianluca Peco; S. Perazzini; Andrea Prosperini; Pier Paolo Ricci; Elisabetta Ronchieri; F Rosso; Davide Salomoni; Vladimir Sapunenko; Vincenzo Vagnoni; Riccardo Veraldi; Maria Cristina Vistoli
In the prospect of employing 10 Gigabit Ethernet as networking technology for online systems and offline data analysis centers of High Energy Physics experiments, we performed a series of measurements on the performance of 10 Gigabit Ethernet, using the network interface cards mounted on the PCI-Express bus of commodity PCs both as transmitters and receivers. In real operating conditions, the achievable maximum transfer rate through a network link is not only limited by the capacity of the link itself, but also by that of the memory and peripheral buses and by the ability of the CPUs and of the Operating System to handle packet processing and interrupts raised by the network interface cards in due time. Besides the TCP and UDP maximum data transfer throughputs, we also measured the CPU loads of the sender/receiver processes and of the interrupt and soft-interrupt handlers as a function of the packet size, either using standard or "jumbo" Ethernet frames. In addition, we also performed the same measurements by simultaneously reading data from Fibre Channel links and forwarding them through a 10 Gigabit Ethernet link, hence emulating the behavior of a disk server in a Storage Area Network exporting data to client machines via 10 Gigabit Ethernet.
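The benefit of jumbo frames can be quantified with a back-of-the-envelope calculation of the TCP goodput ceiling at each MTU, using the standard per-frame Ethernet overheads; the measured limits in the paper sit below these ceilings because of the CPU and bus effects described above.

```python
# Goodput ceiling for 10 Gigabit Ethernet with standard (1500-byte) and
# jumbo (9000-byte) MTUs. Per-frame overhead on the wire: preamble 8 +
# Ethernet header 14 + FCS 4 + inter-frame gap 12 = 38 bytes, plus
# 20 bytes IPv4 and 20 bytes TCP inside each frame's payload.
LINK_GBPS = 10.0
WIRE_OVERHEAD = 38
IP_TCP_HEADERS = 40

for mtu in (1500, 9000):
    payload = mtu - IP_TCP_HEADERS
    efficiency = payload / (mtu + WIRE_OVERHEAD)
    print(f"MTU {mtu}: TCP goodput ceiling ~ {LINK_GBPS * efficiency:.2f} Gb/s")

# MTU 1500: ~9.49 Gb/s; MTU 9000: ~9.91 Gb/s. Jumbo frames also move
# roughly six times fewer frames per byte, cutting the per-byte
# interrupt and packet-processing load accordingly.
```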
international conference on computational science | 2006
Dave Colling; Luke Dickens; Tiziana Ferrari; Y. Hassoun; Constantinos Kotsokalis; Marko Krznaric; Janusz Martyniak; Andrew Stephen McGough; Elisabetta Ronchieri
Many Grid architectures have been developed in recent years. These range from large community Grids such as LCG and EGEE to single-site deployments such as Condor. However, these Grid architectures have tended to focus on the single or batch submission of executable jobs. Application scientists are now seeking to manage and use physical instrumentation on the Grid, integrating it with the computational tasks they already perform. This will require the functionality of current Grid systems to be extended to allow the submission of entire workflows, thus allowing scientists to perform increasingly large parts of their experiments within the Grid environment. We propose here a set of high-level services which may be used on top of these existing Grid architectures, such that the benefits of these architectures may be exploited along with the new functionality of workflows.
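A workflow, in this setting, is essentially a directed acyclic graph of jobs. Here is a minimal sketch of the execution order such a high-level service must respect, using Python's standard topological sorter; the task names and submit hook are invented for illustration.

```python
from graphlib import TopologicalSorter

def run_workflow(dag: dict[str, set[str]], submit) -> None:
    """Execute a workflow of Grid jobs respecting dependencies.

    `dag` maps each task to the set of tasks it depends on; `submit` is a
    callable that runs one task (e.g. a batch job submission). A sketch
    of the high-level workflow service idea, not an actual API.
    """
    ts = TopologicalSorter(dag)
    for task in ts.static_order():   # dependencies always come first
        submit(task)

# Example: preprocessing feeds two analyses, which feed a merge step.
dag = {"prep": set(),
       "analysis_a": {"prep"}, "analysis_b": {"prep"},
       "merge": {"analysis_a", "analysis_b"}}
run_workflow(dag, submit=print)
```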
Archive | 2004
Daniel Kouřil; Aleš Křenek; Luděk Matyska; Miloš Mulač; Jan Pospíšil; Miroslav Ruda; Zdeněk Salvet; Jiří Sitera; Jiří Škrabal; Michal Voců; P. Andreetto; Valentina Borgia; A. Dorigo; A. Gianelle; M. Mordacchini; Massimo Sgaravatto; L. Zangrando; S. Andreozzi; Vincenzo Ciaschini; C. Di Giusto; Francesco Giacomini; V. Medici; Elisabetta Ronchieri; Giuseppe Avellino; Stefano Beco; Alessandro Maraschini; Fabrizio Pacini; Annalisa Terracina; Andrea Guarise; G. Patania
The Logging and Bookkeeping service tracks jobs passing through the Grid. It collects important events generated by both the Grid middleware components and applications, and processes them at a chosen LB server to provide the job state. The events are transported through secure and reliable channels. Job tracking is fully distributed and does not depend on a single information source; robustness is achieved through speculative job state computation in the case of reordered, delayed or lost events. The state computation is easily adaptable to a modified job control flow.
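The speculative state computation can be sketched as taking the most advanced state implied by the events received so far, so that delayed or reordered events merely refine the answer. The state and event names below are illustrative, not the actual L&B vocabulary.

```python
# Sketch of speculative job-state computation from possibly reordered
# events, in the spirit of the Logging and Bookkeeping service.
STATE_ORDER = ["submitted", "waiting", "ready", "scheduled",
               "running", "done"]

def job_state(events: list[dict]) -> str:
    """Derive the most advanced state implied by the events seen so far.

    Events travel through independent channels and may arrive delayed or
    out of order; taking the maximum over a well-ordered state space lets
    the server publish a best guess that is refined when stragglers arrive.
    """
    seen = [STATE_ORDER.index(e["state"]) for e in events
            if e["state"] in STATE_ORDER]
    return STATE_ORDER[max(seen)] if seen else "unknown"

print(job_state([{"state": "running"}, {"state": "ready"}]))  # "running"
```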
grid computing | 2015
Marco Bencivenni; Diego Michelotto; Roberto Alfieri; Riccardo Brunetti; Andrea Ceccanti; Daniele Cesini; Alessandro Costantini; Enrico Fattibene; Luciano Gaido; Giuseppe Misurelli; Elisabetta Ronchieri; Davide Salomoni; Paolo Veronesi; Valerio Venturi; Maria Cristina Vistoli
Distributed Computing Infrastructures have dedicated mechanisms to provide user communities with computational environments. While in the last decade the Grid has proven to be a powerful paradigm for supporting scientific research, the complexity of the user experience still limits its adoption by unskilled user communities. Command-line interfaces, X.509 certificates, template customization for job submission and data access tools require end users to dedicate significant learning effort and thus represent a barrier to accessing Grid computing facilities. In this paper, we present a Web portal that addresses the aforementioned limitations by providing simplified access to Grid and Cloud computing services. The portal provides a set of interfaces that support federated authentication mechanisms, storage discovery and job description templates, enabling user communities to run specific use cases. We developed the portal framework within the Italian Grid Infrastructure, where the major national user representatives drove its design, the implemented solutions and its validation through the testing of specific use cases.
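The job description templates mentioned above can be sketched as a portal-side template with a handful of user-chosen fields, hiding the underlying job description language from the end user. The template body and field names here are invented for illustration, not the portal's actual format.

```python
from string import Template

# Hypothetical per-use-case template; the portal fills in the few fields
# the end user actually chooses through a web form.
JOB_TEMPLATE = Template(
    "Executable = $executable;\n"
    "Arguments  = $arguments;\n"
    "OutputSandbox = {$outputs};"
)

def render_job(user_choices: dict) -> str:
    """Turn a simplified web-form submission into a full job description."""
    return JOB_TEMPLATE.substitute(user_choices)

print(render_job({"executable": "/usr/bin/analysis",
                  "arguments": "--dataset run2024",
                  "outputs": '"results.tar.gz"'}))
```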
Archive | 2011
Riccardo Zappi; Elisabetta Ronchieri; Alberto Forti; Antonia Ghiselli
In production data Grids, high-performance disk storage solutions using parallel file systems are becoming increasingly important to provide the reliability and high-speed I/O operations needed by High Energy Physics analysis farms. Today, Storage Area Network solutions are commonly deployed at Large Hadron Collider data centres, and parallel file systems such as GPFS and Lustre provide reliable, high-speed native POSIX I/O operations in parallel fashion. In this paper, we describe StoRM, a Grid middleware component implementing the standard Storage Resource Manager v2.2 interface. Its architecture fully exploits the potential offered by the underlying cluster file system. Indeed, it enables and encourages the use of the native POSIX file protocol (i.e. "file://"), allowing a managed Storage Element to improve job efficiency in data access. A job running on a worker node can directly access the Storage Element managed by StoRM as if it were a local disk, instead of transferring data from Storage Elements to the local disk.
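On the client side, the direct-access idea reduces to mapping a storage URL onto a path of the shared file system. A minimal sketch follows, assuming a hypothetical mount point; the function name is illustrative, not the SRM v2.2 client API.

```python
def surl_to_turl(surl: str, mount_point: str = "/gpfs") -> str:
    """Map an srm:// storage URL onto a file:// transfer URL on the
    shared mount, so jobs use ordinary POSIX I/O instead of copying
    data to local disk. The mount point is an assumption."""
    path = surl.split("/", 3)[3]           # drop the 'srm://host/' prefix
    return f"file://{mount_point}/{path}"

print(surl_to_turl("srm://storm.cnaf.infn.it/atlas/data/file.root"))
# -> file:///gpfs/atlas/data/file.root, readable with plain open() on
#    any worker node that mounts the cluster file system.
```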
international parallel and distributed processing symposium | 2009
Marco Bencivenni; M. Canaparo; F. Capannini; L. Carota; M. Carpene; Alessandro Cavalli; Andrea Ceccanti; M. Cecchi; Daniele Cesini; Andrea Chierici; V. Ciaschini; A. Cristofori; S Dal Pra; Luca dell'Agnello; D De Girolamo; Massimo Donatelli; D. N. Dongiovanni; Enrico Fattibene; T. Ferrari; A Ferraro; Alberto Forti; Antonia Ghiselli; Daniele Gregori; G. Guizzunti; Alessandro Italiano; L. Magnoni; B. Martelli; Mirco Mazzucato; Giuseppe Misurelli; Michele Onofri
The four High Energy Physics (HEP) detectors at the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) are among the most important experiments in which the National Institute of Nuclear Physics (INFN) is actively involved. A Grid infrastructure of the Worldwide LHC Computing Grid (WLCG) has been developed by the HEP community, leveraging broader initiatives (e.g. EGEE in Europe, OSG in North America) as a framework to exchange and maintain data storage and provide computing infrastructure for the entire LHC community. INFN-CNAF in Bologna hosts the Italian Tier-1 site, which represents the biggest Italian centre in the WLCG distributed computing infrastructure. In the first part of this paper we describe the building of the Italian Tier-1 to cope with the WLCG computing requirements, focusing on some of its peculiarities; in the second part we analyze the INFN-CNAF contribution to the development of the Grid middleware, stressing in particular the characteristics of the Virtual Organization Membership Service (VOMS), the de facto standard for authorization on a Grid, and StoRM, an implementation of the Storage Resource Manager (SRM) specification for POSIX file systems. In particular, StoRM is used at INFN-CNAF in conjunction with the General Parallel File System (GPFS), and we are also testing an integration with Tivoli Storage Manager (TSM) to realize a complete Hierarchical Storage Management (HSM) solution.
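VOMS expresses a user's membership and role as a Fully Qualified Attribute Name (FQAN) of the form /vo/group/Role=role. A minimal sketch of an authorization check driven by an FQAN follows; the policy table and function names are invented for illustration.

```python
def parse_fqan(fqan: str) -> tuple[str, str]:
    """Split an FQAN into its group path and role."""
    group, _, role = fqan.partition("/Role=")
    return group, (role.split("/")[0] if role else "NULL")

# Hypothetical policy: who may do what, keyed by (group, role).
POLICY = {("/atlas/it", "production"): {"read", "write"},
          ("/atlas/it", "NULL"): {"read"}}

def allowed(fqan: str, action: str) -> bool:
    """Grant an action only if the user's VOMS attributes permit it."""
    return action in POLICY.get(parse_fqan(fqan), set())

print(allowed("/atlas/it/Role=production/Capability=NULL", "write"))  # True
```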