Alessandro De Salvo
University of Cambridge
Publications
Featured research published by Alessandro De Salvo.
Journal of Physics: Conference Series | 2010
Michele Michelotto; Manfred Alef; Alejandro Iribarren; H. Meinhard; Peter Wegner; Martin Bly; G. Benelli; Franco Brasolin; Hubert Degaudenzi; Alessandro De Salvo; Ian Gable; A. Hirstius; P. Hristov
The SPEC[1] CINT benchmark has been used as a performance reference for computing in the HEP community for the past 20 years. The SPECint_base2000 (SI2K) unit of performance has been used by the major HEP experiments, both in the Computing Technical Design Reports for the LHC experiments and in the evaluation of computing centres. At recent HEPiX[3] meetings, several HEP sites have reported disagreements between actual machine performance and the scores reported by SPEC. Our group performed a detailed comparison of simulation and reconstruction code performance from the four LHC experiments in order to find a successor to the SI2K benchmark. We analyzed the new benchmarks from the SPEC CPU2006 suite, both integer and floating point, to find the best agreement with the behaviour of HEP code, paying particular attention to reproducing the actual environment of a HEP farm (i.e., each job running independently on each core) and to matching the compiler, optimization level, and mix of integer and floating-point operations, as well as ease of use.
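The farm-like setup the abstract describes, one independent job per core with each job timed separately, can be sketched with a toy CPU-bound workload. This is an illustrative sketch only; `cpu_bound_job` is a stand-in placeholder, not the actual SPEC or experiment code:

```python
import multiprocessing as mp
import os
import time

def cpu_bound_job(n: int) -> float:
    """Toy stand-in for a HEP simulation/reconstruction job:
    a fixed amount of integer work, timed with process CPU time."""
    start = time.process_time()
    total = 0
    for i in range(n):
        total += i * i % 7
    return time.process_time() - start

def benchmark_per_core(iterations: int = 2_000_000) -> list:
    """Run one job per core simultaneously, mimicking a fully
    loaded HEP farm node, and return each job's CPU time."""
    cores = os.cpu_count() or 1
    with mp.Pool(processes=cores) as pool:
        return pool.map(cpu_bound_job, [iterations] * cores)

if __name__ == "__main__":
    times = benchmark_per_core()
    print(f"{len(times)} concurrent jobs, "
          f"mean CPU time {sum(times) / len(times):.2f} s")
```

Comparing the per-job CPU time of a single run against a fully loaded run exposes the contention effects (cache, memory bandwidth) that make a one-job-at-a-time score unrepresentative of a production farm.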
Journal of Physics: Conference Series | 2010
Alessandro De Salvo; Franco Brasolin
Measuring the performance of the experiment software is a key metric for choosing the most effective resources and for discovering bottlenecks in the code. In this work we present the benchmark techniques used to measure ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit Validation. We describe the performance measurements, the data collection, and the online analysis and display of the results. Results of the measurements on different platforms and architectures are shown, giving a full report on the CPU power and memory consumption of the Monte Carlo generation, simulation, digitization and reconstruction of the most CPU-intensive channels. We also present the impact of multi-core computing on ATLAS software performance, comparing the behavior of different architectures as the number of concurrent processes increases. The benchmark techniques described in this paper have been used in the HEPiX group since the beginning of 2008 to help define the performance metrics for High Energy Physics applications, based on the real experiment software.
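As a rough illustration of collecting per-job CPU time and peak memory, the kind of metrics reported above, one can wrap a task with `resource.getrusage`. The helper below is a hypothetical sketch (Unix-only), not part of Kit Validation:

```python
import resource
import time

def run_and_measure(task, *args):
    """Run a task in-process and report wall time, CPU time, and
    peak resident memory, similar in spirit to the per-job metrics
    a validation framework might collect for each workload."""
    before = resource.getrusage(resource.RUSAGE_SELF)
    t0 = time.perf_counter()
    result = task(*args)
    wall = time.perf_counter() - t0
    after = resource.getrusage(resource.RUSAGE_SELF)
    # User + system CPU time consumed by the task itself.
    cpu = (after.ru_utime + after.ru_stime) - (before.ru_utime + before.ru_stime)
    # Peak resident set size: kilobytes on Linux, bytes on macOS.
    peak_rss = after.ru_maxrss
    return result, {"wall_s": wall, "cpu_s": cpu, "peak_rss": peak_rss}
```

Running the same wrapped task with 1, 2, 4, ... concurrent processes and comparing the per-job CPU times is one simple way to quantify the multi-core scaling behaviour the abstract discusses.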
2007 Computing in High Energy and Nuclear Physics (CHEP 07) | 2008
Alessandro De Salvo; Alessandro Barchiesi; Kondo Gnanvo; Carl Gwilliam; J. Kennedy; G. Krobath; Andrzej Olszewski; Grigory Rybkine
The huge amount of resources available on the Grids, and the need to have the most up-to-date experiment software deployed at all sites within a few hours, have highlighted the need for automatic installation systems for the LHC experiments. In this paper we describe the ATLAS system for experiment software installation in LCG/EGEE, based on the Lightweight Job Submission Framework for Installation (LJSFi). This system is able to automatically discover, check, install, test and tag the full set of resources made available in LCG/EGEE to the ATLAS Virtual Organization within a few hours, depending on site availability. Installations or removals may be triggered centrally, as well as requested by end-users for each site. A fallback to manual operations is also available in case of problems. The installation data, status and job history are kept centrally in the installation DB and are browsable via a web interface. The installation tasks are performed by one or more automatic agents. The ATLAS installation team is automatically notified in case of problems, in order to proceed with manual operations. Each user, identified by their personal certificate, may browse or request an installation activity at a site directly through the web pages. This system has been successfully used by ATLAS since 2003 to deploy about 60 different software releases and has performed more than 75000 installation jobs so far. The LJSFi framework is currently being extended to the other ATLAS Grids (NorduGrid and OSG).
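The discover/check/install/test/tag cycle performed by the automatic agents can be illustrated as a small per-site state machine with failure notification. The names and structure below are hypothetical, a sketch of the general pattern rather than the actual LJSFi implementation:

```python
from dataclasses import dataclass, field
from enum import Enum

class State(Enum):
    DISCOVERED = "discovered"
    CHECKED = "checked"
    INSTALLED = "installed"
    TESTED = "tested"
    TAGGED = "tagged"
    FAILED = "failed"

@dataclass
class Site:
    name: str
    state: State = State.DISCOVERED
    history: list = field(default_factory=list)  # audit trail, as in an installation DB

def process_site(site, check, install, test, tag):
    """Drive one discovered site through the check/install/test/tag
    pipeline, recording each transition. On any failure the site is
    marked FAILED so operators can be notified and fall back to
    manual intervention."""
    steps = [(check, State.CHECKED), (install, State.INSTALLED),
             (test, State.TESTED), (tag, State.TAGGED)]
    for action, next_state in steps:
        try:
            action(site)
            site.state = next_state
            site.history.append(next_state.value)
        except Exception as err:
            site.state = State.FAILED
            site.history.append(f"failed: {err}")
            break
    return site
```

An agent would then simply loop over newly discovered resources, call `process_site` for each, and publish the resulting state and history for the web interface.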