
Publication


Featured research published by Paolo Andreetto.


Journal of Physics: Conference Series | 2008

The gLite workload management system

Paolo Andreetto; Sergio Andreozzi; G Avellino; S Beco; A Cavallini; M Cecchi; V. Ciaschini; A Dorise; Francesco Giacomini; A. Gianelle; U Grandinetti; A Guarise; A Krop; R Lops; Alessandro Maraschini; V Martelli; Moreno Marzolla; M Mezzadri; E Molinari; Salvatore Monforte; F Pacini; M Pappalardo; A Parrini; G Patania; L. Petronzio; R Piro; M Porciani; F Prelz; D Rebatto; E Ronchieri

The gLite Workload Management System (WMS) is a collection of components providing the service responsible for distributing and managing tasks across the computing and storage resources available on a Grid. The WMS receives job execution requests from a client, finds the appropriate resources, then dispatches the jobs and follows them until completion, handling failure whenever possible. Besides single batch-like jobs, the WMS handles compound job types: Directed Acyclic Graphs (a set of jobs where the input/output/execution of one or more jobs may depend on one or more other jobs), Parametric Jobs (multiple jobs with one parametrized description), and Collections (multiple jobs with a common description). Jobs are described via a flexible, high-level Job Definition Language (JDL). New functionality was recently added to the system: use of Service Discovery for obtaining new service endpoints to be contacted, automatic archival/compression and sharing of sandbox files, and support for bulk submission and bulk matchmaking. Intensive testing and troubleshooting made it possible to dramatically increase both the job submission rate and service stability. Future development of the gLite WMS will focus on reducing external software dependencies and improving portability, robustness and usability.
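
To make the JDL mentioned above concrete, here is a minimal Python sketch that renders a parametric job description. The attribute names (JobType, Parameters, ParameterStart, ParameterStep and the _PARAM_ placeholder) follow the gLite WMS user documentation as commonly presented; treat the exact spelling as an approximation rather than a normative example.

    # Illustrative sketch of a parametric job description in the gLite JDL.
    # Attribute names follow the gLite WMS user guide as commonly documented;
    # treat them as an approximation, not a normative example.

    def parametric_jdl(executable, n_params, start=0, step=1):
        """Render a JDL string for a parametric job: the WMS expands the
        _PARAM_ placeholder into one job per parameter value."""
        return "\n".join([
            "[",
            '  JobType        = "Parametric";',
            f'  Executable     = "{executable}";',
            '  Arguments      = "_PARAM_";',
            '  StdOutput      = "out._PARAM_.txt";',
            '  StdError       = "err._PARAM_.txt";',
            '  OutputSandbox  = {"out._PARAM_.txt", "err._PARAM_.txt"};',
            f"  Parameters     = {n_params};",
            f"  ParameterStart = {start};",
            f"  ParameterStep  = {step};",
            "]",
        ])

    if __name__ == "__main__":
        print(parametric_jdl("/bin/echo", n_params=5, start=1))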


Future Generation Computer Systems | 2010

Design and implementation of the gLite CREAM job management service

Cristina Aiftimiei; Paolo Andreetto; Sara Bertocco; Simone Dalla Fina; Alvise Dorigo; Eric Frizziero; A. Gianelle; Moreno Marzolla; Mirco Mazzucato; Massimo Sgaravatto; Sergio Traldi; Luigi Zangrando

Job execution and management is one of the most important functionalities provided by every modern Grid system. In this paper we describe how the problem of job management has been addressed in the gLite middleware by means of the CREAM and CEMonitor services. CREAM (Computing Resource Execution and Management) provides a job execution and management capability for Grids, while CEMonitor is a general-purpose asynchronous event notification framework. Both components expose a Web Service interface that allows conforming clients to submit computational jobs to a Local Resource Management System and to manage and monitor them.
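
As a rough illustration of the "manage and monitor" side, here is a minimal sketch of a job lifecycle state machine of the kind such a service tracks internally. The state names approximate the published CREAM job state model; the transition table is simplified and purely illustrative, not the service's actual implementation.

    # Minimal sketch of a job lifecycle state machine. State names
    # approximate the CREAM job state model; transitions are simplified.

    ALLOWED = {
        "REGISTERED": {"PENDING", "CANCELLED"},
        "PENDING":    {"IDLE", "ABORTED", "CANCELLED"},
        "IDLE":       {"RUNNING", "CANCELLED"},
        "RUNNING":    {"DONE-OK", "DONE-FAILED", "CANCELLED"},
    }

    class Job:
        def __init__(self, job_id):
            self.job_id = job_id
            self.state = "REGISTERED"

        def transition(self, new_state):
            """Apply a state change, rejecting transitions the model forbids."""
            if new_state not in ALLOWED.get(self.state, set()):
                raise ValueError(f"{self.state} -> {new_state} not allowed")
            self.state = new_state

    job = Job("CREAM123456")
    for s in ("PENDING", "IDLE", "RUNNING", "DONE-OK"):
        job.transition(s)
        print(job.job_id, "->", job.state)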


International Conference on e-Science | 2007

Open Standards-Based Interoperability of Job Submission and Management Interfaces across the Grid Middleware Platforms gLite and UNICORE

Moreno Marzolla; Paolo Andreetto; Valerio Venturi; Andrea Ferraro; S. Memon; B. Twedell; Morris Riedel; Daniel Mallmann; Achim Streit; S. van de Berghe; V. Li; David Snelling; Katerina Stamou; Zeeshan Ali Shah; Fredrik Hedman

In a distributed grid environment with ambitious service demands, the job submission and management interfaces provide functionality of major importance. Emerging e-science and grid infrastructures such as EGEE and DEISA rely on highly available services that are capable of managing scientific jobs. The adoption of emerging open standard interfaces allows grid resources to be distributed in such a way that their service implementations and grid technologies are not isolated from each other, especially when these resources are deployed in different e-science infrastructures consisting of different types of computational resources. This paper motivates the interoperability of these infrastructures and discusses solutions. We describe the adoption, by two well-known grid technologies, gLite and UNICORE, of various open standards that recently emerged from the Open Grid Forum (OGF) in the field of job submission and management. This has a fundamental impact on the interoperability between these technologies and thus within the next-generation e-science infrastructures that rely on them.
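
To give a flavour of what a standards-based submission looks like, the sketch below builds a minimal JSDL document of the kind a BES CreateActivity operation accepts. The namespace URIs follow the OGF JSDL 1.0 specification as published, but the document is deliberately minimal and should be checked against the specification before real use.

    # Sketch of a minimal JSDL job description. Namespace URIs follow the
    # OGF JSDL 1.0 specification; the document is illustrative only.
    import xml.etree.ElementTree as ET

    JSDL  = "http://schemas.ggf.org/jsdl/2005/11/jsdl"
    POSIX = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"

    def simple_jsdl(executable, *args):
        ET.register_namespace("jsdl", JSDL)
        ET.register_namespace("posix", POSIX)
        job_def  = ET.Element(f"{{{JSDL}}}JobDefinition")
        job_desc = ET.SubElement(job_def, f"{{{JSDL}}}JobDescription")
        app      = ET.SubElement(job_desc, f"{{{JSDL}}}Application")
        papp     = ET.SubElement(app, f"{{{POSIX}}}POSIXApplication")
        ET.SubElement(papp, f"{{{POSIX}}}Executable").text = executable
        for a in args:
            ET.SubElement(papp, f"{{{POSIX}}}Argument").text = a
        return ET.tostring(job_def, encoding="unicode")

    print(simple_jsdl("/bin/echo", "hello", "grid"))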


Journal of Physics: Conference Series | 2008

Job submission and management through web services: the experience with the CREAM service

Cristina Aiftimiei; Paolo Andreetto; Sara Bertocco; Simone Dalla Fina; S D Ronco; Alvise Dorigo; A. Gianelle; Moreno Marzolla; Mirco Mazzucato; Massimo Sgaravatto; M Verlato; Luigi Zangrando; M Corvo; V Miccio; A Sciaba; D Cesini; D Dongiovanni; C Grandi

Modern Grid middleware is built around components providing basic functionality such as data storage, authentication, security, job management, resource monitoring and reservation. In this paper we describe the Computing Resource Execution and Management (CREAM) service. CREAM provides a Web service-based job execution and management capability for Grid systems; in particular, it is being used within the gLite middleware. CREAM exposes a Web service interface that allows conforming clients to submit computational jobs to a Local Resource Management System and to manage them. We developed a special component, called ICE (Interface to CREAM Environment), to integrate CREAM in gLite. ICE transfers job submissions and cancellations from the Workload Management System, allowing users to manage CREAM jobs from the gLite User Interface. This paper describes some recent studies aimed at assessing the performance and reliability of CREAM and ICE; these tests have been performed as part of the acceptance tests for the integration of CREAM and ICE in gLite. We also discuss recent work towards enhancing CREAM with a BES- and JSDL-compliant interface.
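
As a hint of what such an acceptance test looks like, here is a minimal sketch of a submission-rate measurement. The submit_job stub is hypothetical and simply stands in for the real Web Service call; a real test harness would invoke the CREAM interface instead.

    # Sketch of a throughput measurement of the kind used in acceptance
    # tests: submit N jobs, report the sustained submission rate.
    import time

    def submit_job(jdl: str) -> str:
        """Hypothetical stand-in for a real Web Service submission call."""
        time.sleep(0.001)  # pretend the round trip takes ~1 ms
        return "https://cream.example.org:8443/CREAM123"  # fake job id

    def measure_rate(n_jobs: int) -> float:
        start = time.perf_counter()
        for _ in range(n_jobs):
            submit_job('[ Executable = "/bin/true"; ]')
        elapsed = time.perf_counter() - start
        return n_jobs / elapsed

    print(f"sustained rate: {measure_rate(200):.1f} jobs/s")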


Journal of Grid Computing | 2010

Standards-Based Job Management in Grid Systems

Paolo Andreetto; Sergio Andreozzi; Antonia Ghiselli; Moreno Marzolla; Valerio Venturi; Luigi Zangrando

The Grid paradigm for accessing heterogeneous distributed resources has proved extremely effective, and many organizations rely on Grid middleware for their computational needs. Many different middleware stacks exist, the result being a proliferation of self-contained, non-interoperable “Grid islands”. This means that different Grids, based on different middleware, cannot share resources; e.g. jobs submitted on one Grid cannot be forwarded for execution on another one. To address this problem, standard interfaces are being proposed for some of the important functionalities provided by most Grids, namely job submission and management, authorization and authentication, resource modeling, and others. In this paper we review some recent standards which address interoperability for three types of services: the BES/JSDL specifications for job submission and management, the SAML notation for authorization and authentication, and the GLUE specification for resource modeling. We describe how standards-enhanced Grid components can be used to create interoperable building blocks for a Grid architecture. Furthermore, we describe how existing components from the gLite middleware have been re-engineered to support BES/JSDL, GLUE and SAML. From this experience we draw some conclusions on the strengths and weaknesses of these specifications, and how they can be improved.
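
As an illustration of how standardized resource descriptions enable matchmaking across Grid islands, here is a small sketch that selects endpoints from GLUE-style records. The attribute names (MaxWallTime, FreeSlots) are loose stand-ins for GLUE 2.0 ComputingShare attributes, not a faithful rendering of the schema.

    # Sketch of matchmaking against GLUE-style resource descriptions:
    # pick endpoints whose advertised share satisfies the job requirements.
    # Attribute names are illustrative, loosely modelled on GLUE 2.0.

    shares = [
        {"Endpoint": "https://ce1.example.org:8443", "MaxWallTime": 3600,  "FreeSlots": 0},
        {"Endpoint": "https://ce2.example.org:8443", "MaxWallTime": 86400, "FreeSlots": 12},
    ]

    def match(shares, wall_time_needed):
        """Return endpoints advertising enough wall time and free slots."""
        return [s["Endpoint"] for s in shares
                if s["MaxWallTime"] >= wall_time_needed and s["FreeSlots"] > 0]

    print(match(shares, wall_time_needed=7200))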


International Conference on Parallel and Distributed Systems | 2007

Enhanced resource management capabilities using standardized job management and data access interfaces within UNICORE Grids

Mohammad Shahbaz Memon; Ahmed Shiraz Memon; Morris Riedel; B. Schuller; S. van de Berghe; David Snelling; V. Li; Moreno Marzolla; Paolo Andreetto

Many existing Grid technologies and resource management systems lack a standardized job submission interface in Grid environments or e-Infrastructures. Even when the same job description language is used, the interface for job submission often differs across these technologies. The evolution of the standardized Job Submission Description Language (JSDL) as well as the OGSA Basic Execution Services (OGSA-BES) paves the way to improving the interoperability of all these technologies, enabling cross-Grid job submission and better resource management capabilities. In addition, the ByteIO standards provide useful mechanisms for data access that can be used in conjunction with these improved resource management capabilities. This paper describes the integration of these standards into the recently released UNICORE 6 Grid middleware, which is based on open standards such as the Web Services Resource Framework (WS-RF) and WS-Addressing (WS-A).
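
The ByteIO read semantics can be sketched locally: the specification, as commonly described, parameterizes a "random" read by start offset, bytes per block, number of blocks and stride. The class below models those semantics over an in-memory buffer; it is illustrative only, not a ByteIO implementation.

    # Local sketch of ByteIO-style random read semantics over an
    # in-memory buffer. Parameter names follow the OGF ByteIO
    # specification as commonly described; this is illustrative only.

    class RandomByteIO:
        def __init__(self, data: bytes):
            self.data = data

        def read(self, start_offset: int, bytes_per_block: int,
                 num_blocks: int, stride: int) -> bytes:
            """Gather num_blocks blocks of bytes_per_block bytes; the i-th
            block starts at start_offset + i * stride."""
            out = bytearray()
            for i in range(num_blocks):
                off = start_offset + i * stride
                out += self.data[off:off + bytes_per_block]
            return bytes(out)

    buf = RandomByteIO(bytes(range(32)))
    print(buf.read(start_offset=0, bytes_per_block=2, num_blocks=4, stride=8))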


Journal of Physics: Conference Series | 2012

New developments in the CREAM Computing Element

Paolo Andreetto; Sara Bertocco; Fabio Capannini; Marco Cecchi; Alvise Dorigo; Eric Frizziero; A. Gianelle; Massimo Mezzadri; Salvatore Monforte; Francesco Prelz; David Rebatto; Massimo Sgaravatto; Luigi Zangrando

The EU-funded EMI project aims at providing unified, standardized, easy-to-install software for distributed computing infrastructures. CREAM is one of the products in the EMI middleware distribution: it implements a Grid job management service which allows the submission, management and monitoring of computational jobs on local resource management systems. In this paper we discuss some new features being implemented in the CREAM Computing Element. One of them is the implementation of the EMI Execution Service (EMI-ES) specification, an agreement in the EMI consortium on the interfaces and protocols required to enable computational job submission and management across technologies. New developments are also focusing on the High Availability (HA) area, to improve performance, scalability, availability and fault tolerance.
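
As an illustration of what HA buys on the client side, here is a minimal sketch of failover across redundant CE head nodes. The endpoint list and the try_submit stub are hypothetical, standing in for a real EMI-ES or CREAM submission call.

    # Sketch of client-side failover across redundant CE head nodes.
    # Endpoints and the try_submit stub are hypothetical.

    ENDPOINTS = [
        "https://ce-a.example.org:8443",
        "https://ce-b.example.org:8443",
    ]

    def try_submit(endpoint: str, jdl: str) -> str:
        """Stand-in for a real submission; the first node is 'down' to
        exercise the failover path deterministically."""
        if endpoint.startswith("https://ce-a"):
            raise ConnectionError(f"{endpoint} unreachable")
        return f"{endpoint}/JOB42"

    def submit_with_failover(jdl: str) -> str:
        last_error = None
        for ep in ENDPOINTS:
            try:
                return try_submit(ep, jdl)
            except ConnectionError as exc:
                last_error = exc
        raise RuntimeError(f"all endpoints failed: {last_error}")

    print(submit_with_failover('[ Executable = "/bin/true"; ]'))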


Journal of Physics: Conference Series | 2010

Using CREAM and CEMonitor for job submission and management in the gLite middleware

Cristina Aiftimiei; Paolo Andreetto; Sara Bertocco; S Dalla Fina; Alvise Dorigo; Eric Frizziero; A. Gianelle; Moreno Marzolla; Mirco Mazzucato; P. Mendez Lorenzo; V Miccio; Massimo Sgaravatto; Sergio Traldi; Luigi Zangrando

In this paper we describe the use of the CREAM and CEMonitor services for job submission and management within the gLite Grid middleware. Both CREAM and CEMonitor address one of the most fundamental operations of a Grid middleware, that is, job submission and management. Specifically, CREAM is a job management service used for submitting, managing and monitoring computational jobs. CEMonitor is an event notification framework which can be coupled with CREAM to provide users with asynchronous job status change notifications. Both components have been integrated in the gLite Workload Management System by means of ICE (Interface to CREAM Environment). These software components have been released for production in the EGEE Grid infrastructure and, in the case of the CEMonitor service, in the OSG Grid as well. In this paper we report the current status of these services, the results achieved, and the issues that still have to be addressed.
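
The notification pattern can be sketched in-process: a consumer subscribes to a topic and is called back on every event. The sketch below models the publish/subscribe idea only; it is not the CEMonitor Web Service interface.

    # In-process sketch of the asynchronous notification pattern:
    # subscribers register callbacks per topic and get invoked per event.
    from collections import defaultdict

    class Notifier:
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, callback):
            self.subscribers[topic].append(callback)

        def publish(self, topic, event):
            for cb in self.subscribers[topic]:
                cb(event)

    notifier = Notifier()
    notifier.subscribe("job-status", lambda e: print("status change:", e))
    notifier.publish("job-status", {"job": "CREAM123", "state": "RUNNING"})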


Proceedings of International Symposium on Grids and Clouds (ISGC) 2017 — PoS(ISGC2017) | 2017

The "Cloud Area Padovana": Lessons Learned after Two Years of a Production OpenStack-based IaaS for the Local INFN User Community

Marco Verlato; Paolo Andreetto; Fabrizio Chiarello; Fulvia Costa; Alberto Crescente; Alvise Dorigo; Sergio Fantinel; Federica Fanzago; Ervin Konomi; Matteo Segatta; Massimo Sgaravatto; Sergio Traldi; Nicola Tritto; Lisa Zangrando

The Cloud Area Padovana is an OpenStack-based scientific cloud spread across two sites, the INFN Padova Unit and the INFN Legnaro National Labs, located 10 km apart but connected with a dedicated 10 Gbps optical link. In the last two years its hardware resources have been scaled horizontally by adding new ones: it currently provides about 1100 logical cores and 50 TB of storage. Special in-house developments were also integrated in the OpenStack dashboard, such as a tool for user and project registration with direct support for Single Sign-On via the INFN-AAI Identity Provider as a new option for user authentication. The collaboration with the EU-funded INDIGO-DataCloud project, started one year ago, made it possible to experiment with the integration of Docker-based containers and with fair-share scheduling, a resource allocation mechanism analogous to those available in batch system schedulers for maximizing the usage of shared resources among concurrent users and projects. Both solutions are expected to be available in production soon. The entire computing facility now satisfies the computational and storage demands of more than 100 users belonging to about 30 research projects. In this paper we present the architecture of the Cloud infrastructure and the tools and procedures used to operate it while ensuring reliability and fault tolerance. We especially focus on the lessons learned in these two years, describing the challenges identified and the subsequent corrective actions applied. From the perspective of scientific applications, we show some concrete use cases of how this Cloud infrastructure is being used, focusing in particular on two big physics experiments which are intensively exploiting this computing facility: CMS and SPES. CMS deployed on the cloud a complex computational infrastructure, composed of several user interfaces for job submission to the Grid environment or to local batch queues, and for interactive processes; this is fully integrated with the local Tier-2 facility. To avoid a static allocation of the resources, an elastic cluster, initially based only on CernVM, has been configured: it automatically creates and deletes virtual machines according to user needs. SPES uses a client-server system called TraceWin to exploit INFN's virtual resources, performing a very large number of simulations on about a thousand elastically managed nodes.
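
The elastic-cluster behaviour boils down to a scaling rule like the sketch below, which maps batch queue length to a target VM count. The thresholds are invented for illustration; in the real setup the resulting create/delete calls would go to the OpenStack compute API.

    # Sketch of an elastic-cluster scaling rule: size the worker VM pool
    # from the batch queue length. Thresholds are illustrative.

    def rescale(queued_jobs: int, jobs_per_vm: int = 4,
                min_vms: int = 1, max_vms: int = 20) -> int:
        """Return the target number of VMs for the current queue length."""
        wanted = (queued_jobs + jobs_per_vm - 1) // jobs_per_vm  # ceil division
        return max(min_vms, min(max_vms, wanted))

    for queued in (0, 3, 17, 200):
        print(f"{queued:>3} queued jobs -> {rescale(queued)} VMs")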


Journal of Physics: Conference Series | 2012

CREAM Computing Element: a status update

Paolo Andreetto; Sara Bertocco; Fabio Capannini; Marco Cecchi; Alvise Dorigo; Eric Frizziero; A. Gianelle; Massimo Mezzadri; Salvatore Monforte; Francesco Prelz; David Rebatto; Massimo Sgaravatto; Luigi Zangrando

The European Middleware Initiative (EMI) project aims to deliver a consolidated set of middleware products based on the four major middleware providers in Europe: ARC, dCache, gLite and UNICORE. The CREAM (Computing Resource Execution And Management) service, a service for job management operations at the Computing Element (CE) level, is a software product which is part of the EMI middleware distribution. In this paper we discuss some new functionality in the CREAM CE introduced with the first EMI major release (EMI-1, codename Kebnekaise). The integration with the Argus authorization service is one of these: the use of a single authorization system, besides simplifying the overall management, also avoids inconsistent authorization decisions. Improved support for complex deployment scenarios (e.g. for sites having multiple CE head nodes and/or heterogeneous resources) is another new achievement. Improved support for resource allocation in a multi-core environment, and initial support of version 2.0 of the GLUE specification for resource publication, are other new functionalities introduced with the first EMI release.
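
The benefit of a single authorization service can be sketched directly: every CE head node asks the same policy decision point, so no two nodes can disagree. The request and decision shapes below are illustrative, not the Argus protocol.

    # Sketch of centralized authorization: all CE head nodes consult one
    # policy decision point (PDP), so decisions cannot diverge between
    # nodes. Policy contents and shapes are illustrative, not Argus.

    POLICY = {("vo-alice", "submit"): "Permit",
              ("vo-banned", "submit"): "Deny"}

    def pdp_decide(subject: str, action: str) -> str:
        """Central decision point: one policy, one answer for every CE."""
        return POLICY.get((subject, action), "Deny")

    for ce in ("ce1", "ce2"):  # both head nodes get the same decision
        print(ce, "->", pdp_decide("vo-alice", "submit"))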

Collaboration


Dive into Paolo Andreetto's collaborations.

Top Co-Authors

A. Gianelle, Istituto Nazionale di Fisica Nucleare
Alvise Dorigo, Istituto Nazionale di Fisica Nucleare
Luigi Zangrando, Istituto Nazionale di Fisica Nucleare
Massimo Sgaravatto, Istituto Nazionale di Fisica Nucleare
Sara Bertocco, Istituto Nazionale di Fisica Nucleare
Eric Frizziero, Istituto Nazionale di Fisica Nucleare
Morris Riedel, Forschungszentrum Jülich
Cristina Aiftimiei, Istituto Nazionale di Fisica Nucleare
Mirco Mazzucato, Istituto Nazionale di Fisica Nucleare