Publication


Featured research published by Vincenzo Ciaschini.


Archive | 2004

Practical approaches to Grid workload and resource management in the EGEE project

P. Andreetto; Daniel Kouřil; Valentina Borgia; Aleš Křenek; A. Dorigo; Luděk Matyska; A. Gianelle; Miloš Mulač; M. Mordacchini; Jan Pospíšil; Massimo Sgaravatto; Miroslav Ruda; L. Zangrando; Zdeněk Salvet; S. Andreozzi; Jiří Sitera; Vincenzo Ciaschini; Jiří Škrabal; C. Di Giusto; Michal Voců; Francesco Giacomini; V. Martelli; V. Medici; Massimo Mezzadri; Elisabetta Ronchieri; Francesco Prelz; V. Venturi; D. Rebatto; Giuseppe Avellino; Salvatore Monforte

Resource management and scheduling of distributed, data-driven applications in a Grid environment are challenging problems. Although significant results were achieved in the past few years, the development and proper deployment of generic, reliable, standard components still present issues that need to be fully solved. The domains of interest include workload management, resource discovery, resource matchmaking and brokering, accounting, authorization policies, resource access, reliability, and dependability. The evolution towards a service-oriented architecture, supported by emerging standards, is another activity that will demand attention. All these issues are being tackled within the EU-funded EGEE project (Enabling Grids for E-science in Europe), whose primary goals are the provision of robust middleware components and the creation of a reliable and dependable Grid infrastructure to support e-Science applications. In this paper we present the plans and the preliminary activities aimed at providing adequate workload and resource management components, suitable for deployment in a production-quality Grid.
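The matchmaking and brokering step the abstract mentions can be illustrated with a minimal sketch: filter resources by a job's requirements (including a VO authorization check) and rank the survivors. All names, fields, and the ranking policy below are illustrative, not the EGEE implementation.

```python
# Minimal sketch of Grid resource matchmaking and brokering.
# Field names (cpus, memory_mb, authorized_vos) are hypothetical.

def match_resources(job, resources):
    """Return resources satisfying the job's requirements, ranked by free slots."""
    candidates = [
        r for r in resources
        if r["free_slots"] >= job["cpus"]
        and r["memory_mb"] >= job["memory_mb"]
        and job["vo"] in r["authorized_vos"]   # authorization-policy check
    ]
    # Brokering: prefer the least-loaded matching resource.
    return sorted(candidates, key=lambda r: -r["free_slots"])

job = {"cpus": 4, "memory_mb": 8192, "vo": "cms"}
resources = [
    {"name": "ce01", "free_slots": 2,  "memory_mb": 16384, "authorized_vos": {"cms"}},
    {"name": "ce02", "free_slots": 10, "memory_mb": 16384, "authorized_vos": {"cms", "atlas"}},
    {"name": "ce03", "free_slots": 50, "memory_mb": 4096,  "authorized_vos": {"cms"}},
]
best = match_resources(job, resources)[0]["name"]
```

Here ce01 is rejected for lack of free slots and ce03 for lack of memory, so the broker selects ce02; a production matchmaker would of course evaluate far richer requirement expressions.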


Grid Computing | 2009

Definition and Implementation of a SAML-XACML Profile for Authorization Interoperability Across Grid Middleware in OSG and EGEE

G. Garzoglio; Ian D. Alderman; Mine Altunay; Rachana Ananthakrishnan; Joe Bester; Keith Chadwick; Vincenzo Ciaschini; Yuri Demchenko; Andrea Ferraro; Alberto Forti; D.L. Groep; Ted Hesselroth; John Hover; Oscar Koeroo; Chad La Joie; Tanya Levshina; Zach Miller; Jay Packard; Håkon Sagehaug; Valery Sergeev; I. Sfiligoi; N Sharma; Frank Siebenlist; Valerio Venturi; John Weigand

In order to ensure interoperability between middleware and authorization infrastructures used in the Open Science Grid (OSG) and the Enabling Grids for E-science (EGEE) projects, an Authorization Interoperability activity was initiated in 2006. The interoperability goal was met in two phases: firstly, agreeing on a common authorization query interface and protocol with an associated profile that ensures standardized use of attributes and obligations; and secondly implementing, testing, and deploying on OSG and EGEE, middleware that supports the interoperability protocol and profile. The activity has involved people from OSG, EGEE, the Globus Toolkit project, and the Condor project. This paper presents a summary of the agreed-upon protocol, profile and the software components involved.
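The agreed-upon query interface can be pictured as a policy decision point (PDP) that receives subject/resource/action attributes and answers with a decision plus obligations, such as the local account a site should map the user to. The sketch below is a schematic model only; the attribute names and mapping policy are invented, not the actual OSG/EGEE SAML-XACML profile.

```python
# Toy policy decision point in the style of an authorization query
# with obligations. All identifiers are illustrative.

def authorize(request, policy):
    """Return Permit/Deny plus obligations for an authorization request."""
    subject_vo = request["subject"]["vo"]
    action = request["action"]
    local_account = policy.get((subject_vo, action))
    if local_account is None:
        return {"decision": "Deny", "obligations": []}
    # Obligation: tell the enforcement point which local identity to use.
    return {"decision": "Permit",
            "obligations": [{"id": "map-to-local-account",
                             "username": local_account}]}

policy = {("cms", "submit-job"): "cms001"}
request = {"subject": {"vo": "cms"}, "resource": "ce01", "action": "submit-job"}
result = authorize(request, policy)
```

The obligation mechanism is what lets heterogeneous sites enforce a common decision in site-specific ways, which is the crux of the interoperability profile described above.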


Journal of Grid Computing | 2004

Authentication and Authorization Mechanisms for Multi-domain Grid Environments

Linda Cornwall; Jens Jensen; David Kelsey; Ákos Frohner; Daniel Kouřil; Franck Bonnassieux; Sophie Nicoud; Károly Lorentey; Joni Hahkala; Mika Silander; Roberto Cecchini; Vincenzo Ciaschini; Luca dell'Agnello; Fabio Spataro; David O'Callaghan; Olle Mulmo; Gian Luca Volpato; D.L. Groep; Martijn Steenbakkers; A. McNab

This article discusses the authentication and the authorization aspects of security in grid environments spanning multiple administrative domains. Achievements in these areas are presented using the EU DataGrid project as an example implementation. It also gives an outlook on future directions of development.


International Conference on e-Science | 2007

Virtual Organization Management Across Middleware Boundaries

Valerio Venturi; Federico Stagni; Alberto Gianoli; Andrea Ceccanti; Vincenzo Ciaschini

One of the most important challenges in production grids is to achieve interoperation across several heterogeneous grid middleware platforms: e-Science applications need coordinated resource sharing among dynamic collections of individuals and institutions, independently of whatever middleware the resources are running. For this reason, a great effort is under way to define standard interfaces, in order to implement common services that can be used to achieve cross-middleware interoperability. In this paper, we present our modifications to the Virtual Organization Management Service (VOMS), a widely known and used tool that acts as an attribute authority. We enhanced VOMS to expose the standardized interface of the Security Assertion Markup Language (SAML), and therefore to release SAML assertions. In this way, we aim to make VOMS available on the largest possible number of grid middleware platforms.
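An attribute authority like the one described essentially issues signed statements binding attributes to a subject. The following sketch builds a skeletal SAML 2.0 attribute assertion with the standard library; it is deliberately incomplete (a real VOMS assertion also carries an issuer, validity conditions, and an XML signature), and the attribute name used is hypothetical.

```python
# Schematic SAML 2.0 attribute assertion, built with xml.etree.
# This illustrates the shape of the output only, not VOMS's actual code.
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def make_attribute_assertion(subject_dn, attributes):
    """Build a minimal attribute assertion for subject_dn."""
    assertion = ET.Element(f"{{{SAML_NS}}}Assertion")
    subject = ET.SubElement(assertion, f"{{{SAML_NS}}}Subject")
    ET.SubElement(subject, f"{{{SAML_NS}}}NameID").text = subject_dn
    statement = ET.SubElement(assertion, f"{{{SAML_NS}}}AttributeStatement")
    for name, value in attributes.items():
        attr = ET.SubElement(statement, f"{{{SAML_NS}}}Attribute", Name=name)
        ET.SubElement(attr, f"{{{SAML_NS}}}AttributeValue").text = value
    return assertion

assertion = make_attribute_assertion(
    "/DC=org/DC=example/CN=Jane Doe",            # illustrative subject DN
    {"vo-membership": "/cms/Role=production"},   # illustrative attribute
)
xml_text = ET.tostring(assertion, encoding="unicode")
```

Exposing a standard assertion format like this, rather than a bespoke one, is exactly what lets consumers on different middleware stacks parse the attributes without VOMS-specific code.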


Archive | 2004

Distributed Tracking, Storage, and Re-use of Job State Information on the Grid

Daniel Kouřil; Aleš Křenek; Luděk Matyska; Miloš Mulač; Jan Pospíšil; Miroslav Ruda; Zdeněk Salvet; Jiří Sitera; Jiří Škrabal; Michal Voců; P. Andreetto; Valentina Borgia; A. Dorigo; A. Gianelle; M. Mordacchini; Massimo Sgaravatto; L. Zangrando; S. Andreozzi; Vincenzo Ciaschini; C. Di Giusto; Francesco Giacomini; V. Medici; Elisabetta Ronchieri; Giuseppe Avellino; Stefano Beco; Alessandro Maraschini; Fabrizio Pacini; Annalisa Terracina; Andrea Guarise; G. Patania

The Logging and Bookkeeping service tracks jobs passing through the Grid. It collects important events generated by both the grid middleware components and applications, and processes them at a chosen LB server to provide the job state. The events are transported through secure and reliable channels. Job tracking is fully distributed and does not depend on a single information source; robustness is achieved through speculative job-state computation in the case of reordered, delayed, or lost events. The state computation is easily adaptable to a modified job control flow.
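The speculative state computation can be illustrated with a toy tracker: events may arrive out of order, so the current state is derived from the most advanced event seen so far rather than from arrival order. The state names below follow a generic submit/run/done lifecycle, not the exact LB vocabulary.

```python
# Toy speculative job-state computation tolerant of reordered events.
# State names and the event format (seq_no, state) are illustrative.

STATE_ORDER = {"submitted": 0, "scheduled": 1, "running": 2, "done": 3}

class JobTracker:
    def __init__(self):
        self.events = []

    def record(self, event):
        """Store an event; arrival order is irrelevant."""
        self.events.append(event)

    def state(self):
        """Speculate: trust the furthest state reached, even if the
        events leading up to it were delayed or lost."""
        if not self.events:
            return "unknown"
        return max(self.events, key=lambda e: STATE_ORDER[e[1]])[1]

tracker = JobTracker()
tracker.record((3, "running"))    # arrives before earlier events
tracker.record((1, "submitted"))  # delayed event; state stays "running"
```

The point of the speculation is visible here: the delayed "submitted" event does not roll the job back, so consumers always see a monotonically advancing state.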


International Conference on Computing in High Energy and Nuclear Physics 2012, CHEP 2012 | 2012

Testing and evaluating storage technology to build a distributed Tier1 for SuperB in Italy

S. Pardi; A. Fella; F. Bianchi; Vincenzo Ciaschini; Marco Corvo; Domenico Delprete; A. Di Simone; G. Donvito; F. Giacomini; A. Gianoli; S. Longo; S. Luitz; E. Luppi; Matteo Manzali; A. Perez; M. Rama; G. Russo; B. Santeramo; R. Stroili; L. Tomassetti

The SuperB asymmetric-energy e⁺e⁻ collider and detector, to be built at the newly founded Nicola Cabibbo Lab, will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy-quark and heavy-lepton sectors requires a data sample of 75 ab⁻¹ and a luminosity target of 10³⁶ cm⁻² s⁻¹. This luminosity translates into the requirement of storing more than 50 PByte of additional data each year, making SuperB an interesting challenge for the data-management infrastructure, both at the site level and at the Wide Area Network level. A new Tier1, distributed among 3 or 4 sites in the south of Italy, is planned as part of the SuperB computing infrastructure. Data storage is a relevant topic whose development affects how storage infrastructure is configured and set up, both in a local computing cluster and in a distributed paradigm. In this work we report on tests of software for data distribution and data replication, focusing on our experiences with Hadoop and GlusterFS.


Proceedings of International Symposium on Grids and Clouds (ISGC) 2016 — PoS(ISGC 2016) | 2017

Elastic CNAF DataCenter extension via opportunistic resources

Tommaso Boccali; Stefano Dal Pra; Vincenzo Ciaschini; Luca dell'Agnello; Andrea Chierici; Donato Di Girolamo; Vladimir Sapunenko; Alessandro Italiano

The CNAF computing facility in Bologna (Italy) is the biggest WLCG computing centre in Italy; it serves all WLCG experiments plus more than 20 non-WLCG Virtual Organizations, and currently deploys more than 200 kHS06 of computing power and more than 20 PB of disk and 40 PB of tape via a GPFS SAN. The centre has started a program to evaluate the possibility of extending its resources to external entities, whether commercial, opportunistic, or simply remote, in order to be prepared for future upgrades or temporary bursts in experiment activity. The approach followed is meant to be completely transparent to users, with additional external resources added directly to the CNAF LSF batch system; several variants are possible, such as the use of VPN tunnels to establish LSF communications between hosts, a multi-master LSF approach, or, in the longer term, the use of HTCondor. Concerning storage, the simplest approach is to use Xrootd fallback to CNAF storage, which is unfortunately viable only for some experiments; a more transparent approach involves the use of the GPFS/AFM module to cache files directly on the remote facilities. In this paper we focus on the technical aspects of the integration and assess the difficulties of using the different remote virtualisation technologies made available at the various sites. A set of benchmarks is provided to allow an evaluation of the solution for CPU- and data-intensive workflows. The evaluation of Aruba as a resource provider for CNAF is under test with limited available resources; a ramp-up to a larger scale is being discussed. On a parallel path, this paper presents a similar attempt at extension using proprietary resources at ReCaS-Bari; the chosen solution is simpler in its setup but shares many commonalities.


Proceedings of International Symposium on Grids and Clouds (ISGC) 2016 — PoS(ISGC 2016) | 2017

Elastic Computing from Grid sites to External Clouds

Giuseppe Codispoti; Riccardo Di Maria; Cristina Aiftimiei; D. Bonacorsi; Patrizia Calligola; Vincenzo Ciaschini; Alessandro Costantini; Stefano Dal Pra; Claudio Grandi; Diego Michelotto; Matteo Panella; Gianluca Peco; Vladimir Sapunenko; Massimo Sgaravatto; Sonia Taneja; Giovanni Zizzi; Donato De Girolamo

The LHC experiments are now in Run-II data taking and are approaching new challenges in the operation of their computing facilities in future Runs. Although they demonstrated the ability to sustain operations at scale during Run-I, it has become evident that the computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded, not with peak usage. Such peaks are frequent, may create large backlogs, and have a direct impact on data-reconstruction completion times, hence on data availability for physics analysis. Among others, the CMS experiment has been exploring (since the first Long Shutdown period after Run-I) the access and utilisation of Cloud resources provided by external partners or commercial providers. In this work we present proofs of concept of the elastic extension of a CMS Tier-3 site in Bologna (Italy) onto an external OpenStack infrastructure. We first present the experience of an initial "Cloud Bursting" of a CMS Grid site, using a novel LSF configuration to dynamically register new worker nodes. We then move to more recent work on a "Cloud Site as-a-Service" prototype, based on a more direct access to and integration of OpenStack resources into the CMS workload management system. Results with real CMS workflows and future plans are also presented and discussed.


Journal of Physics: Conference Series | 2015

Dynamic partitioning as a way to exploit new computing paradigms: the cloud use case.

Vincenzo Ciaschini; Stefano Dal Pra; Luca dell'Agnello

The WLCG community and many groups in the HEP community have based their computing strategy on the Grid paradigm, which has proved successful and still meets its goals. However, Grid technology has not spread much to other communities; in the commercial world, the cloud paradigm is the emerging way to provide computing services. The WLCG experiments aim to integrate their current computing model with cloud deployments and to take advantage of so-called opportunistic resources (including HPC facilities), which are usually not Grid compliant. One feature missing from the most common cloud frameworks is the concept of a job scheduler, which plays a key role in a traditional computing centre by enabling fair-share-based access to the resources for the experiments in a scenario where demand greatly outstrips availability. At CNAF we are investigating the possibility of accessing the Tier-1 computing resources as an OpenStack-based cloud service. The system, which exploits the dynamic partitioning mechanism already used to enable multicore computing, allowed us to avoid a static splitting of the computing resources in the Tier-1 farm while permitting a share-friendly approach. The hosts in a dynamically partitioned farm may be moved into or out of the partition according to suitable policies for the request and release of computing resources. Nodes requested for the partition switch their role and become available to play a different one; in the cloud use case, hosts may switch from acting as worker nodes in the batch-system farm to cloud compute nodes made available to tenants. In this paper we describe the dynamic partitioning concept and its implementation and integration with our current batch system, LSF.
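The role-switching policy described above can be sketched as a simple rebalancing loop: hosts drain from the batch partition into the cloud partition while cloud demand exceeds its current size, and are released back when demand drops. The thresholds, host names, and policy below are illustrative, not the CNAF configuration.

```python
# Sketch of dynamic partitioning between a batch farm and a cloud
# partition. Host names and the demand-driven policy are hypothetical.

def rebalance(batch_nodes, cloud_nodes, cloud_demand):
    """Move hosts between partitions until the cloud partition
    matches demand (or the batch farm is exhausted)."""
    batch, cloud = list(batch_nodes), list(cloud_nodes)
    while len(cloud) < cloud_demand and batch:
        cloud.append(batch.pop())   # drain a worker node, hand it to tenants
    while len(cloud) > cloud_demand:
        batch.append(cloud.pop())   # release the host back to the batch farm
    return batch, cloud

batch, cloud = rebalance(["wn1", "wn2", "wn3", "wn4"], [], cloud_demand=2)
```

In a real deployment the "move" would of course involve draining running jobs and reconfiguring the host, which is why the policies for request and release of resources matter as much as the mechanism itself.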


Proceedings of 36th International Conference on High Energy Physics — PoS(ICHEP2012) | 2013

Computing at SuperB

Domenico del Prete; Fabrizio Bianchi; Vania Boccia; Vincenzo Ciaschini; M. Corvo; Guglielmo De Nardo; Andrea Di Simone; Giacinto Donvito; Armando Fella; Paolo Franchini; Francesco Giacomini; Alberto Gianoli; Giuliano Laccetti; Stefano Longo; Steffen Luitz; E. Luppi; Matteo Manzali; Leonardo Merola; S. Pardi; Alejandro Perez; M. Rama; G. Russo; Bruno Santeramo; R. Stroili; Luca Tommasetti

Collaboration


Dive into Vincenzo Ciaschini's collaboration.

Top Co-Authors

E. Luppi (University of Ferrara)
Alberto Gianoli (Istituto Nazionale di Fisica Nucleare)
Francesco Giacomini (Istituto Nazionale di Fisica Nucleare)
G. Russo (University of Naples Federico II)
R. Stroili (Istituto Nazionale di Fisica Nucleare)
S. Pardi (Istituto Nazionale di Fisica Nucleare)
Alejandro Perez (Istituto Nazionale di Fisica Nucleare)
Bruno Santeramo (Istituto Nazionale di Fisica Nucleare)