
Publications


Featured research published by A. Gianelle.


Journal of Physics: Conference Series | 2008

The gLite workload management system

Paolo Andreetto; Sergio Andreozzi; G Avellino; S Beco; A Cavallini; M Cecchi; V. Ciaschini; A Dorise; Francesco Giacomini; A. Gianelle; U Grandinetti; A Guarise; A Krop; R Lops; Alessandro Maraschini; V Martelli; Moreno Marzolla; M Mezzadri; E Molinari; Salvatore Monforte; F Pacini; M Pappalardo; A Parrini; G Patania; L. Petronzio; R Piro; M Porciani; F Prelz; D Rebatto; E Ronchieri

The gLite Workload Management System (WMS) is a collection of components that provide the service responsible for distributing and managing tasks across the computing and storage resources available on a Grid. The WMS receives job execution requests from a client, finds appropriate resources, then dispatches and follows the jobs until completion, handling failures whenever possible. Besides single batch-like jobs, the WMS handles compound job types: Directed Acyclic Graphs (a set of jobs where the input/output/execution of one or more jobs may depend on one or more other jobs), Parametric Jobs (multiple jobs with one parametrized description), and Collections (multiple jobs with a common description). Jobs are described via a flexible, high-level Job Description Language (JDL). New functionality was recently added to the system: the use of Service Discovery to obtain new service endpoints to be contacted, automatic archival/compression and sharing of sandbox files, and support for bulk submission and bulk matchmaking. Intensive testing and troubleshooting made it possible to dramatically increase both the job submission rate and service stability. Future development of the gLite WMS will focus on reducing external software dependencies and on improving portability, robustness, and usability.
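
To give a feel for the JDL, here is a minimal Python sketch that renders a ClassAd-style job description; the attribute values (executable, sandbox files) are invented for the example, and real JDL supports many more attributes:

    # Minimal sketch: rendering a gLite-style JDL description from Python data.
    # The job attributes below are invented for illustration.
    def render_jdl(attrs):
        """Render a dict as a ClassAd-style JDL document."""
        def fmt(value):
            if isinstance(value, list):
                return "{" + ", ".join(fmt(v) for v in value) + "}"
            return '"%s"' % value
        body = ";\n".join("  %s = %s" % (k, fmt(v)) for k, v in attrs.items())
        return "[\n%s;\n]" % body

    print(render_jdl({
        "Type": "Job",
        "Executable": "/bin/hostname",
        "StdOutput": "std.out",
        "StdError": "std.err",
        "OutputSandbox": ["std.out", "std.err"],
    }))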


Archive | 2004

Practical approaches to Grid workload and resource management in the EGEE project

P. Andreetto; Daniel Kouřil; Valentina Borgia; Aleš Křenek; A. Dorigo; Luděk Matyska; A. Gianelle; Miloš Mulač; M. Mordacchini; Jan Pospíšil; Massimo Sgaravatto; Miroslav Ruda; L. Zangrando; Zdeněk Salvet; S. Andreozzi; Jiří Sitera; Vincenzo Ciaschini; Jiří Škrabal; C. Di Giusto; Michal Voců; Francesco Giacomini; V. Martelli; V. Medici; Massimo Mezzadri; Elisabetta Ronchieri; Francesco Prelz; V. Venturi; D. Rebatto; Giuseppe Avellino; Salvatore Monforte

Resource management and scheduling of distributed, data-driven applications in a Grid environment are challenging problems. Although significant results have been achieved in the past few years, the development and proper deployment of generic, reliable, standard components present issues that have yet to be completely solved. Domains of interest include workload management, resource discovery, resource matchmaking and brokering, accounting, authorization policies, resource access, reliability, and dependability. The evolution towards a service-oriented architecture, supported by emerging standards, is another activity that will demand attention. All these issues are being tackled within the EU-funded EGEE project (Enabling Grids for E-science in Europe), whose primary goals are the provision of robust middleware components and the creation of a reliable and dependable Grid infrastructure to support e-Science applications. In this paper we present the plans and preliminary activities aimed at providing adequate workload and resource management components suitable for deployment in a production-quality Grid.


Future Generation Computer Systems | 2010

Design and implementation of the gLite CREAM job management service

Cristina Aiftimiei; Paolo Andreetto; Sara Bertocco; Simone Dalla Fina; Alvise Dorigo; Eric Frizziero; A. Gianelle; Moreno Marzolla; Mirco Mazzucato; Massimo Sgaravatto; Sergio Traldi; Luigi Zangrando

Job execution and management is one of the most important functionalities provided by every modern Grid system. In this paper we describe how the problem of job management has been addressed in the gLite middleware by means of the CREAM and CEMonitor services. CREAM (Computing Resource Execution and Management) provides a job execution and management capability for Grids, while CEMonitor is a general-purpose asynchronous event notification framework. Both components expose a Web Service interface that allows conforming clients to submit computational jobs to a Local Resource Management System and to manage and monitor them.
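
As a picture of the submit/manage/monitor pattern such an interface exposes, here is a hypothetical Python sketch; the operation names (submit, status, cancel) and the JobHandle type are invented for illustration and do not reproduce the actual CREAM or CEMonitor interfaces:

    # Hypothetical sketch of the client-side job management pattern described
    # above; names and types are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class JobHandle:
        job_id: str
        state: str = "REGISTERED"   # e.g. REGISTERED -> RUNNING -> DONE

    class JobService:
        """Stand-in for a Web Service endpoint accepting job operations."""
        def __init__(self):
            self._jobs = {}
        def submit(self, description: str) -> JobHandle:
            handle = JobHandle(job_id="job-%d" % (len(self._jobs) + 1))
            self._jobs[handle.job_id] = handle
            return handle
        def status(self, job_id: str) -> str:
            return self._jobs[job_id].state
        def cancel(self, job_id: str) -> None:
            self._jobs[job_id].state = "CANCELLED"

    svc = JobService()
    job = svc.submit('[ Executable = "/bin/hostname"; ]')
    print(svc.status(job.job_id))   # REGISTERED
    svc.cancel(job.job_id)
    print(svc.status(job.job_id))   # CANCELLED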


Journal of Physics: Conference Series | 2008

Job submission and management through web services: the experience with the CREAM service

Cristina Aiftimiei; Paolo Andreetto; Sara Bertocco; Simone Dalla Fina; S D Ronco; Alvise Dorigo; A. Gianelle; Moreno Marzolla; Mirco Mazzucato; Massimo Sgaravatto; M Verlato; Luigi Zangrando; M Corvo; V Miccio; A Sciaba; D Cesini; D Dongiovanni; C Grandi

Modern Grid middleware is built around components providing basic functionality, such as data storage, authentication, security, job management, resource monitoring, and reservation. In this paper we describe the Computing Resource Execution and Management (CREAM) service. CREAM provides a Web service-based job execution and management capability for Grid systems; in particular, it is being used within the gLite middleware. CREAM exposes a Web service interface that allows conforming clients to submit computational jobs to a Local Resource Management System and manage them. We developed a dedicated component, called ICE (Interface to CREAM Environment), to integrate CREAM in gLite. ICE transfers job submissions and cancellations from the Workload Management System, allowing users to manage CREAM jobs from the gLite User Interface. This paper describes some recent studies aimed at assessing the performance and reliability of CREAM and ICE; these tests were performed as part of the acceptance tests for the integration of CREAM and ICE in gLite. We also discuss recent work towards enhancing CREAM with a BES- and JSDL-compliant interface.
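
ICE's role between the WMS and CREAM can be pictured as a request-forwarding loop. The sketch below is a deliberately simplified, hypothetical rendering of that pattern (queue contents and function names are invented), not the actual ICE implementation:

    import queue

    # Hypothetical sketch of the ICE pattern: drain submission/cancellation
    # requests produced by the WMS and forward each one to a CREAM endpoint.
    wms_requests = queue.Queue()
    wms_requests.put(("submit", '[ Executable = "/bin/date"; ]'))
    wms_requests.put(("cancel", "job-42"))

    def forward_to_cream(kind, payload):
        # Stand-in for the actual Web service call to CREAM.
        print("forwarding %s request to CREAM: %s" % (kind, payload))

    while not wms_requests.empty():
        kind, payload = wms_requests.get()
        forward_to_cream(kind, payload)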


Archive | 2004

Distributed Tracking, Storage, and Re-use of Job State Information on the Grid

Daniel Kouřil; Aleš Křenek; Luděk Matyska; Miloš Mulač; Jan Pospíšil; Miroslav Ruda; Zdeněk Salvet; Jiří Sitera; Jiří Škrabal; Michal Voců; P. Andreetto; Valentina Borgia; A. Dorigo; A. Gianelle; M. Mordacchini; Massimo Sgaravatto; L. Zangrando; S. Andreozzi; Vincenzo Ciaschini; C. Di Giusto; Francesco Giacomini; V. Medici; Elisabetta Ronchieri; Giuseppe Avellino; Stefano Beco; Alessandro Maraschini; Fabrizio Pacini; Annalisa Terracina; Andrea Guarise; G. Patania

The Logging and Bookkeeping (LB) service tracks jobs passing through the Grid. It collects important events generated by both the Grid middleware components and applications, and processes them at a chosen LB server to provide the job state. The events are transported through secure and reliable channels. Job tracking is fully distributed and does not depend on a single information source; robustness is achieved through speculative computation of the job state in the presence of reordered, delayed, or lost events. The state computation is easily adaptable to a modified job control flow.
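
A toy sketch of how speculative state computation can tolerate reordered or lost events: the job state is taken as the most advanced state implied by any event seen so far, so a late "running" event or a lost "scheduled" event does not block the computed state. Event names and the state ordering are invented for illustration:

    # Toy sketch of speculative job state computation: events carry a sequence
    # number and may arrive reordered, or not at all; the job state is the
    # most advanced state implied by any event seen so far.
    STATE_ORDER = ["submitted", "scheduled", "running", "done"]

    def job_state(events):
        seen = max((STATE_ORDER.index(e["state"]) for e in events), default=0)
        return STATE_ORDER[seen]

    # "running" arrives late and the "scheduled" event was lost entirely,
    # yet the computed state is still correct.
    events = [{"seq": 0, "state": "submitted"},
              {"seq": 3, "state": "done"},
              {"seq": 2, "state": "running"}]
    print(job_state(events))  # -> "done"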


Nuclear Science Symposium and Medical Imaging Conference | 2013

Applications of many-core technologies to on-line event reconstruction in High Energy Physics experiments

A. Gianelle; S. Amerio; D. Bastieri; M. Corvo; W. Ketchum; T. Liu; Alessandro Lonardo; Donatella Lucchesi; S. Poprocki; R. Rivera; Laura Tosoratto; P. Vicini; P. Wittich

Interest in many-core architectures applied to real-time selections is growing in High Energy Physics (HEP) experiments. In this paper we describe performance measurements of many-core devices when applied to a typical HEP online task: the selection of events based on the trajectories of charged particles. As a benchmark we use a scaled-up version of the algorithm used at the CDF experiment at the Tevatron for online track reconstruction, the SVT algorithm, as a realistic test case for low-latency trigger systems using new computing architectures for LHC experiments. We examine the complexity/performance trade-off in porting existing serial algorithms to many-core devices. We measure the performance of different architectures (Intel Xeon Phi and AMD GPUs, in addition to NVIDIA GPUs) and different software environments (OpenCL, in addition to NVIDIA CUDA). Measurements of both data processing and data transfer latency are shown, considering different I/O strategies to/from the many-core devices.
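
The heart of SVT-like algorithms is a linearized fit in which track parameters are linear combinations of hit coordinates, so the serial per-track loop maps naturally onto a single matrix product over all candidates; the NumPy sketch below (dimensions and coefficients invented) stands in for the CUDA/OpenCL kernels measured in the paper:

    import numpy as np

    # Sketch of the serial-to-parallel port for a linearized track fit:
    # each track's parameters are a linear combination of its hit
    # coordinates, p = F @ x + c. Dimensions and values are invented.
    rng = np.random.default_rng(0)
    n_tracks, n_hits, n_params = 10_000, 6, 5
    F = rng.standard_normal((n_params, n_hits))   # fit coefficients
    c = rng.standard_normal(n_params)             # fit constants
    hits = rng.standard_normal((n_tracks, n_hits))

    # Serial formulation: one fit per loop iteration.
    params_serial = np.empty((n_tracks, n_params))
    for i in range(n_tracks):
        params_serial[i] = F @ hits[i] + c

    # Data-parallel formulation: one matrix product over all candidates,
    # the shape a GPU kernel would exploit.
    params_parallel = hits @ F.T + c
    assert np.allclose(params_serial, params_parallel)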


Journal of Physics: Conference Series | 2012

New developments in the CREAM Computing Element

Paolo Andreetto; Sara Bertocco; Fabio Capannini; Marco Cecchi; Alvise Dorigo; Eric Frizziero; A. Gianelle; Massimo Mezzadri; Salvatore Monforte; Francesco Prelz; David Rebatto; Massimo Sgaravatto; Luigi Zangrando

The EU-funded EMI project aims at providing unified, standardized, easy-to-install software for distributed computing infrastructures. CREAM is one of the middleware products in the EMI distribution: it implements a Grid job management service which allows the submission, management, and monitoring of computational jobs on local resource management systems. In this paper we discuss some new features being implemented in the CREAM Computing Element. One of them is the implementation of the EMI Execution Service (EMI-ES) specification, an agreement within the EMI consortium on the interfaces and protocols needed to enable computational job submission and management across technologies. New developments also focus on the High Availability (HA) area, to improve performance, scalability, availability, and fault tolerance.
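
One generic pattern the HA work points towards is failover across equivalent service endpoints; the following Python sketch illustrates the idea only (the endpoint model is invented) and is not the actual CREAM HA design:

    # Generic endpoint-failover sketch: try a list of equivalent service
    # replicas in turn until one answers. Illustration only.
    def call_with_failover(endpoints, request):
        last_error = None
        for endpoint in endpoints:
            try:
                return endpoint(request)
            except ConnectionError as err:
                last_error = err   # try the next replica
        raise RuntimeError("all endpoints failed") from last_error

    def dead(_request):
        raise ConnectionError("endpoint down")

    def alive(request):
        return "handled: %s" % request

    print(call_with_failover([dead, alive], "submit job-7"))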


Journal of Physics: Conference Series | 2010

Using CREAM and CEMonitor for job submission and management in the gLite middleware

Cristina Aiftimiei; Paolo Andreetto; Sara Bertocco; S Dalla Fina; Alvise Dorigo; Eric Frizziero; A. Gianelle; Moreno Marzolla; Mirco Mazzucato; P. Mendez Lorenzo; V Miccio; Massimo Sgaravatto; Sergio Traldi; Luigi Zangrando

In this paper we describe the use of the CREAM and CEMonitor services for job submission and management within the gLite Grid middleware. Both CREAM and CEMonitor address one of the most fundamental operations of a Grid middleware, that is, job submission and management. Specifically, CREAM is a job management service used for submitting, managing, and monitoring computational jobs. CEMonitor is an event notification framework which can be coupled with CREAM to provide users with asynchronous job status change notifications. Both components have been integrated in the gLite Workload Management System by means of ICE (Interface to CREAM Environment). These software components have been released for production in the EGEE Grid infrastructure and, in the case of the CEMonitor service, in the OSG Grid as well. In this paper we report the current status of these services, the results achieved, and the issues that still have to be addressed.
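
The asynchronous notification pattern CEMonitor provides can be reduced to a subscribe/publish skeleton; the sketch below uses invented topic and callback names and is not the CEMonitor interface itself:

    # Minimal sketch of asynchronous status notification: clients subscribe
    # a callback to a topic and are called back on job status changes.
    from collections import defaultdict

    class Notifier:
        def __init__(self):
            self._subscribers = defaultdict(list)
        def subscribe(self, topic, callback):
            self._subscribers[topic].append(callback)
        def publish(self, topic, event):
            for callback in self._subscribers[topic]:
                callback(event)

    notifier = Notifier()
    notifier.subscribe("job-status",
                       lambda e: print("job %(id)s -> %(state)s" % e))
    notifier.publish("job-status", {"id": "job-7", "state": "RUNNING"})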


Journal of Physics: Conference Series | 2017

First experiences with a parallel architecture testbed in the LHCb trigger system

Stefano Gallorini; Silvia Amerio; Donatella Lucchesi; M. Corvo; A. Gianelle

In this note we discuss the application of new technologies, such as GPU cards, in the current LHCb trigger system. During Run 2, a node equipped with a GPU was inserted into the LHCb online monitoring system. During normal data taking, real events were sent to the node and processed by GPU-based and CPU-based tracking algorithms. This gave us the unique opportunity to test the new hardware and the new algorithms in the real-time environment of the experiment. In the following sections we describe the algorithm developed for parallel architectures, the setup of the testbed, and the results compared to the official LHCb reconstruction.


Nuclear Science Symposium and Medical Imaging Conference | 2012

A parallel framework for the SuperB super flavor factory

Stefano Longo; Fabrizio Bianchi; Vincenzo Ciaschini; M. Corvo; Domenico Delprete; Andrea Di Simone; Giacinto Donvito; Armando Fella; Paolo Franchini; Francesco Giacomini; A. Gianelle; Alberto Gianoli; Steffen Luitz; E. Luppi; Matteo Manzali; S. Pardi; Alejandro Perez; M. Rama; G. Russo; Bruno Santeramo; R. Stroili; L. Tomassetti

The SuperB asymmetric-energy e+e- collider and detector [1], to be built at the newly founded Nicola Cabibbo Lab [2], will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab⁻¹ and a peak luminosity of 10³⁶ cm⁻² s⁻¹. These parameters require a substantial growth in computing requirements and performance: we roughly estimate that within a few years of operation we will have to cope with nearly half an exabyte of raw data, and that the CPU power required for processing will be close to 6000 kHEP-SPEC06 per year. The SuperB collaboration is thus investigating the advantages of new CPU architectures (multi- and many-core, now largely available on the market), with the aim of treating this amount of data both efficiently and within a reasonable amount of time. At the same time, the collaboration is analyzing the current software, in large part inherited from previous experiments (mainly BaBar), to understand its underlying level of parallelism, how to exploit it, and how to best map it onto emerging hardware architectures. In this work we first present the measurements done on the analysis and simulation software, then the Framework architecture we are designing. We complete the presentation with a description of our Framework prototype and some preliminary performance measurements.
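
The event-level parallelism such a framework exploits can be sketched as fanning independent events out to worker processes; the per-event payload and the reconstruction stand-in below are invented for illustration:

    # Sketch of event-level parallelism: independent events are fanned out
    # to a pool of worker processes and the results are gathered.
    from multiprocessing import Pool

    def reconstruct(event):
        # Stand-in for a per-event processing module chain.
        return sum(x * x for x in event["hits"])

    if __name__ == "__main__":
        events = [{"id": i, "hits": range(i, i + 10)} for i in range(1000)]
        with Pool(processes=4) as pool:
            results = pool.map(reconstruct, events)
        print(len(results), results[:3])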

Collaboration


A. Gianelle's top co-authors and their affiliations.

Top Co-Authors

Massimo Sgaravatto, Istituto Nazionale di Fisica Nucleare
Francesco Giacomini, Istituto Nazionale di Fisica Nucleare
Paolo Andreetto, Istituto Nazionale di Fisica Nucleare
Alvise Dorigo, Istituto Nazionale di Fisica Nucleare
Luigi Zangrando, Istituto Nazionale di Fisica Nucleare
Sara Bertocco, Istituto Nazionale di Fisica Nucleare
Eric Frizziero, Istituto Nazionale di Fisica Nucleare
Elisabetta Ronchieri, Istituto Nazionale di Fisica Nucleare
Giuseppe Avellino, Istituto Nazionale di Fisica Nucleare
M. Corvo, Istituto Nazionale di Fisica Nucleare