Andrea Petrucci
Istituto Nazionale di Fisica Nucleare
Publications
Featured research published by Andrea Petrucci.
Cluster Computing and the Grid | 2006
Eric Frizziero; Michele Gulmini; Francesco Lelli; Gaetano Maron; Alexander Oh; Salvatore Orlando; Andrea Petrucci; Silvano Squizzato; Sergio Traldi
Current grid technologies offer unlimited computational power and storage capacity for scientific research and business activities in heterogeneous areas all over the world. Thanks to the grid, different virtual organizations can operate together in order to achieve common goals. However, concrete use cases demand a closer interaction between various types of instruments accessible from the grid and the classical grid infrastructure, typically composed of computing and storage elements. We cope with this open problem by proposing and realizing the first release of the Instrument Element, i.e., a new grid component that provides the computational/data grid with an abstraction of real instruments, and grid users with a more interactive interface to control them. In this paper, we discuss in detail the proposed software architecture for this new component, then we report some performance results concerning its first prototype, and finally we present two concrete use cases with which the Instrument Element has been successfully integrated.
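To illustrate the abstraction the Instrument Element provides, here is a minimal Java sketch. It is not the actual Instrument Element API; all names (Instrument, InstrumentElement, runCommand) are invented for illustration, and the real component additionally handles grid security, data movement and asynchronous monitoring.

```java
// Minimal sketch (not the actual Instrument Element API): illustrates exposing
// heterogeneous instruments to grid users through one uniform control abstraction.
import java.util.HashMap;
import java.util.Map;

interface Instrument {
    String getId();
    // Execute a named control command with parameters and return a status/result string.
    String runCommand(String command, Map<String, String> parameters);
}

// A toy instrument standing in for a real device behind the grid component.
class ThermometerInstrument implements Instrument {
    public String getId() { return "thermometer-01"; }
    public String runCommand(String command, Map<String, String> parameters) {
        if ("read".equals(command)) {
            return "21.5 C";                    // pretend measurement
        }
        return "unknown command: " + command;
    }
}

// The "Instrument Element" role: a single grid-facing entry point that hides
// device-specific details and routes commands to registered instruments.
class InstrumentElement {
    private final Map<String, Instrument> instruments = new HashMap<>();

    void register(Instrument instrument) { instruments.put(instrument.getId(), instrument); }

    String control(String instrumentId, String command, Map<String, String> parameters) {
        Instrument instrument = instruments.get(instrumentId);
        if (instrument == null) {
            return "no such instrument: " + instrumentId;
        }
        return instrument.runCommand(command, parameters);
    }

    public static void main(String[] args) {
        InstrumentElement ie = new InstrumentElement();
        ie.register(new ThermometerInstrument());
        System.out.println(ie.control("thermometer-01", "read", new HashMap<>()));
    }
}
```

The point of the sketch is the design choice: grid users interact with one uniform control interface, while device-specific logic stays behind the registered instrument implementations.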
International Journal of Web and Grid Services | 2007
Francesco Lelli; Eric Frizziero; Michele Gulmini; Gaetano Maron; Salvatore Orlando; Andrea Petrucci; Silvano Squizzato
Current grid technologies offer unlimited computational power and storage capacity for scientific research and business activities in heterogeneous areas all over the world. Thanks to the grid, different virtual organisations can operate together in order to achieve common goals. However, concrete use cases demand a closer interaction between various types of instruments accessible from the grid on the one hand and the classical grid infrastructure, typically composed of Computing and Storage Elements, on the other. We cope with this open problem by proposing and realising the first release of the Instrument Element (IE), a new grid component that provides the computational/data grid with an abstraction of real instruments, and grid users with a more interactive interface to control them. In this paper we discuss in detail the implemented software architecture for this new component and we present concrete use cases where the IE has been successfully integrated.
Proceedings of Technology and Instrumentation in Particle Physics 2014 — PoS(TIPP2014) | 2015
Tomasz Bawej; Ulf Behrens; James G Branson; Olivier Chaze; Sergio Cittolin; Georgiana-Lavinia Darlea; Christian Deldicque; Marc Dobson; Aymeric Dupont; Samim Erhan; Andrew Forrest; Dominique Gigi; Frank Glege; Guillelmo Gomez-Ceballos; Robert Gomez-Reino; Jeroen Hegeman; Andre Holzner; Lorenzo Masetti; Frans Meijers; Emilio Meschi; Remigius K. Mommsen; Srecko Morovic; Carlos Nunez-Barranco-Fernandez; Vivian O'Dell; Luciano Orsini; Christoph Paus; Andrea Petrucci; Marco Pieri; Attila Racz; Hannes Sakulin; Christoph Schwick; Benjamin Stieger; Konstanty Sumorok; Jan Veverka; Petr Zejdl
IEEE Transactions on Nuclear Science | 2008
Gerry Bauer; Vincent Boyer; James G Branson; Angela Brett; E. Cano; Andrea Carboni; Marek Ciganek; Sergio Cittolin; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; Michele Gulmini; Esteban Gutierrez Mlot; J. Gutleber; C. Jacobs; Jin Cheol Kim; M. Klute; Elliot Lipeles; Juan Antonio Lopez Perez; Gaetano Maron; F. Meijers; E. Meschi; Roland Moser; S. Murray; Alexander Oh; Luciano Orsini; Christoph Paus; Andrea Petrucci; M. Pieri
The data acquisition system of the Compact Muon Solenoid experiment at the Large Hadron Collider reads out event fragments of an average size of 2 kilobytes from around 650 detector front-ends at a rate of up to 100 kHz. The first stage of event-building is performed by the Super-Fragment Builder employing custom-built electronics and a Myrinet optical network. It reduces the number of fragments by one order of magnitude, thereby greatly decreasing the requirements for the subsequent event-assembly stage. By providing fast feedback from any of the front-ends to the trigger, the trigger throttling system prevents buffer overflows in the front-end electronics due to variations in the size and rate of events or due to backpressure from the downstream event-building and processing. This paper reports on the recent successful integration of a scaled-down setup of the described system with the trigger and with front-ends of all major sub-detectors and discusses the ongoing commissioning of the full-scale system.
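A minimal sketch of the throttling idea described above, assuming a simple high-water-mark rule; the real Trigger Throttling System works in hardware on dedicated fast signals, and the threshold, class and method names here are invented:

```java
// Minimal sketch of the throttling idea only (not the actual CMS Trigger Throttling
// System): each front-end reports buffer occupancy, and triggers are inhibited
// whenever any buffer risks overflowing.
class TriggerThrottleSketch {
    static final double WARN_FRACTION = 0.8;   // assumed high-water mark

    // Decide whether to throttle the Level-1 trigger given current buffer fill levels.
    static boolean shouldThrottle(int[] bufferFill, int bufferCapacity) {
        for (int fill : bufferFill) {
            if (fill >= WARN_FRACTION * bufferCapacity) {
                return true;                   // fast feedback: stop accepting triggers
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[] frontEndBuffers = {120, 860, 300};   // occupancies out of 1024 slots
        System.out.println("throttle = " + shouldThrottle(frontEndBuffers, 1024));
    }
}
```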
IEEE Transactions on Nuclear Science | 2008
Anzar Afaq; W. Badgett; Gerry Bauer; K. Biery; Vincent Boyer; James G Branson; Angela Brett; E. Cano; Andrea Carboni; Harry Cheung; Marek Ciganek; Sergio Cittolin; William Dagenhart; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; Michele Gulmini; J. Gutleber; C. Jacobs; Jin Cheol Kim; M. Klute; Jim Kowalkowski; Elliot Lipeles; Juan Antonio Lopez Perez; Gaetano Maron; F. Meijers; E. Meschi; Roland Moser; Esteban Gutierrez Mlot
The CMS data acquisition (DAQ) system relies on a purely software driven high level trigger (HLT) to reduce the full Level 1 accept rate of 100 kHz to approximately 100 Hz for archiving and later offline analysis. The HLT operates on the full information of events assembled by an event builder collecting detector data from the CMS front-end systems. The HLT software consists of a sequence of reconstruction and filtering modules executed on a farm of O(1000) CPUs built from commodity hardware. This paper presents the architecture of the CMS HLT, which integrates the CMS reconstruction framework in the online environment. The mechanisms to configure, control, and monitor the filter farm and the procedures to validate the filtering code within the DAQ environment are described.
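The "sequence of reconstruction and filtering modules" can be illustrated with a short, hypothetical filter-chain sketch in Java; it is not the CMS reconstruction framework (which is C++), but it shows the early-rejection behaviour that makes a software HLT affordable on commodity CPUs:

```java
// Minimal sketch of a filter-chain pattern (names are hypothetical): modules run
// in sequence and an event is dropped as soon as one filter rejects it, which is
// the essence of a software HLT path.
import java.util.List;
import java.util.Map;

interface HltModule {
    // Returns false to reject the event, true to let it continue along the path.
    boolean process(Map<String, Double> event);
}

class HltPathSketch {
    private final List<HltModule> modules;

    HltPathSketch(List<HltModule> modules) { this.modules = modules; }

    boolean accept(Map<String, Double> event) {
        for (HltModule module : modules) {
            if (!module.process(event)) {
                return false;          // early rejection keeps CPU cost low
            }
        }
        return true;                   // event is kept for archiving and offline analysis
    }

    public static void main(String[] args) {
        HltModule muonPtCut = event -> event.getOrDefault("muonPt", 0.0) > 20.0;
        HltModule metCut    = event -> event.getOrDefault("met", 0.0) > 30.0;
        HltPathSketch path = new HltPathSketch(List.of(muonPtCut, metCut));
        System.out.println(path.accept(Map.of("muonPt", 25.0, "met", 45.0)));  // true
    }
}
```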
ieee-npss real-time conference | 2007
Gerry Bauer; Vincent Boyer; James G Branson; Angela Brett; E. Cano; Andrea Carboni; Marek Ciganek; Sergio Cittolin; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; Michele Gulmini; Esteban Gutierrez Mlot; J. Gutleber; C. Jacobs; Jin Cheol Kim; M. Klute; Elliot Lipeles; Juan Antonio Lopez Perez; Gaetano Maron; F. Meijers; E. Meschi; Roland Moser; S. Murray; Alexander Oh; Luciano Orsini; Christoph Paus; Andrea Petrucci; M. Pieri
The data acquisition system of the CMS experiment at the Large Hadron Collider features a two-stage event builder, which combines data from about 500 sources into full events at an aggregate throughput of 100 GB/s. To meet the requirements, several architectures and interconnect technologies have been quantitatively evaluated. Myrinet will be used for the communication from the underground front-end devices to the surface event building system. Gigabit Ethernet is deployed in the surface event building system. Nearly full bisection throughput can be obtained using a custom software driver for Myrinet based on barrel shifter traffic shaping. This paper discusses the use of Myrinet dual-port network interface cards supporting channel bonding to achieve virtual 5 Gbit/s links with adaptive routing to alleviate the throughput limitations associated with wormhole routing. Adaptive routing is not expected to be suitable for high-throughput event builder applications in high-energy physics. To corroborate this claim, results from the CMS event builder preseries installation at CERN are presented and the problems of wormhole routing networks are discussed.
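The barrel-shifter traffic shaping mentioned above can be sketched in a few lines: in each time slot, source i sends to destination (i + slot) mod N, so no two sources ever target the same destination at once. This illustrates the scheduling idea only, not the Myrinet driver itself:

```java
// Minimal sketch of barrel-shifter traffic shaping (illustrative only): the cyclic
// source-to-destination assignment avoids the head-of-line blocking that hurts
// wormhole-routed networks under random traffic.
class BarrelShifterSketch {
    public static void main(String[] args) {
        int sources = 4;
        int destinations = 4;
        for (int slot = 0; slot < destinations; slot++) {
            StringBuilder line = new StringBuilder("slot " + slot + ":");
            for (int src = 0; src < sources; src++) {
                int dst = (src + slot) % destinations;   // each destination used exactly once per slot
                line.append("  S").append(src).append("->D").append(dst);
            }
            System.out.println(line);
        }
    }
}
```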
Proceedings of Topical Workshop on Electronics for Particle Physics — PoS(TWEPP-17) | 2018
Dominique Gigi; Petia Petrova; Attila Racz; Samuel Johan Orn; T. Reis; Christian Deldicque; Michail Vougioukas; Michael Lettrich; Cristian Contescu; E. Meschi; Ioannis Papakrivopoulos; M. Dobson; V. O'Dell; F. Glege; Maciej Gladki; Dainius Simelevicius; James G Branson; A. Holzner; H. Sakulin; Sergio Cittolin; Andrea Petrucci; F. Meijers; Remigius K. Mommsen; Mindaugas Janulis; M. Pieri; Ulf Behrens; Nicolas Doualot; C. Schwick; J. Fulcher; Jeroen Hegeman
In order to accommodate the new back-end electronics of upgraded CMS sub-detectors, a new FEROL40 card in the MicroTCA standard has been developed. The main function of the FEROL40 is to acquire event data over multiple point-to-point serial optical links, provide buffering, perform protocol conversion, and transmit multiple TCP/IP streams (4x10Gbps) to the Ethernet network of the aggregation layer of the CMS DAQ (data acquisition) event builder. This contribution discusses the design of the FEROL40 and experience from its operation.
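As a rough illustration of spreading output over several streams, here is a hedged Java sketch; the real FEROL40 implements this in FPGA firmware with TCP/IP in hardware, and all class names below are invented:

```java
// Minimal sketch of the output side only (not the FEROL40 firmware): event
// fragments are distributed round-robin over several independent streams, the way
// the card spreads its load over four 10 Gb/s TCP/IP connections. Byte-array
// streams stand in for the real sockets here.
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

class MultiStreamSenderSketch {
    private final OutputStream[] streams;
    private int next = 0;

    MultiStreamSenderSketch(OutputStream[] streams) { this.streams = streams; }

    // Send one fragment on the next stream in round-robin order.
    void send(byte[] fragment) throws IOException {
        streams[next].write(fragment);
        next = (next + 1) % streams.length;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream[] fakeSockets = new ByteArrayOutputStream[4];
        for (int i = 0; i < fakeSockets.length; i++) fakeSockets[i] = new ByteArrayOutputStream();

        MultiStreamSenderSketch sender = new MultiStreamSenderSketch(fakeSockets);
        for (int fragment = 0; fragment < 8; fragment++) {
            sender.send(new byte[]{(byte) fragment});
        }
        for (int i = 0; i < fakeSockets.length; i++) {
            System.out.println("stream " + i + " received " + fakeSockets[i].size() + " bytes");
        }
    }
}
```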
Proceedings of Topical Workshop on Electronics for Particle Physics — PoS(TWEPP-17) | 2018
Attila Racz; Petia Petrova; Dominique Gigi; Samuel Johan Orn; T. Reis; Christian Deldicque; Michail Vougioukas; Michael Lettrich; Cristian Contescu; E. Meschi; Ioannis Papakrivopoulos; M. Dobson; V. O'Dell; F. Glege; Maciej Gladki; Dainius Simelevicius; James G Branson; A. Holzner; H. Sakulin; Sergio Cittolin; Andrea Petrucci; F. Meijers; Remigius K. Mommsen; Mindaugas Janulis; M. Pieri; Ulf Behrens; Nicolas Doualot; C. Schwick; J. Fulcher; Jeroen Hegeman
Following the first LHC collisions seen and recorded by CMS in 2009, the DAQ hardware went through a major upgrade during LS1 (2013-2014), and new detectors were connected during the 2015-2016 and 2016-2017 winter shutdowns. Now, LS2 (2019-2020) and LS3 (2024-mid 2026) are actively being prepared. This paper shows how the CMS DAQ hardware has evolved from the beginning and will continue to evolve in order to meet the future challenges posed by the High Luminosity LHC (HL-LHC) and the evolution of the CMS detector. Particular focus is given to post-LS3 DAQ architectures.
Journal of Physics: Conference Series | 2018
Jean-Marc Andre; Petia Petrova; D Gigi; Attila Racz; Samuel Johan Orn; A. Holzner; T. Reis; Christian Deldicque; Michail Vougioukas; Michael Lettrich; Cristian Contescu; E. Meschi; Ioannis Papakrivopoulos; F Meijers; M. Dobson; V. O'Dell; F. Glege; Dainius Simelevicius; Georgiana Lavinia Darlea; Christoph Paus; Z. Demiragli; H. Sakulin; D. Rabady; Jeroen Hegeman; Andrea Petrucci; Remigius K. Mommsen; Mindaugas Janulis; M. Pieri; Ulf Behrens; Nicolas Doualot
The efficiency of the Data Acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment for LHC Run 2 is constantly being improved. A significant factor affecting the data-taking efficiency is the experience of the DAQ operator. One of the main responsibilities of the DAQ operator is to carry out the proper recovery procedure in case data-taking fails. At the start of Run 2, understanding the problem and finding the right remedy could take a considerable amount of time (up to many minutes). Operators relied heavily on the support of on-call experts, also outside working hours. Wrong decisions made under time pressure sometimes led to additional overhead in recovery time. To increase the efficiency of CMS data-taking we developed a new expert system, DAQExpert, which provides shifters with optimal recovery suggestions instantly when a failure occurs. DAQExpert is a web application that analyzes frequently updated monitoring data from all DAQ components and identifies problems based on expert knowledge expressed in small, independent logic modules written in Java. Its results are presented in real time in the control room via a web-based GUI and a sound system, in the form of a short description of the current failure and the steps to recover.
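The abstract describes expert knowledge expressed in small, independent logic modules written in Java. The following is a minimal sketch of that idea with invented names (LogicModule, DeadTimeModule); it is not the actual DAQExpert API, but it shows how a module can turn a monitoring snapshot into a short failure description and recovery suggestion:

```java
// Minimal sketch of the logic-module idea: each module inspects a monitoring
// snapshot and, if its condition matches, returns a short description plus
// recovery steps for the shifter. Names and thresholds are invented.
import java.util.List;
import java.util.Map;
import java.util.Optional;

interface LogicModule {
    Optional<String> analyze(Map<String, Double> monitoringSnapshot);
}

class DeadTimeModule implements LogicModule {
    public Optional<String> analyze(Map<String, Double> snapshot) {
        double deadTime = snapshot.getOrDefault("deadTimePercent", 0.0);
        if (deadTime > 5.0) {
            return Optional.of("High dead time (" + deadTime + "%). Suggested recovery: "
                    + "check the sub-detector causing backpressure and issue a resync.");
        }
        return Optional.empty();
    }
}

class DaqExpertSketch {
    public static void main(String[] args) {
        List<LogicModule> modules = List.of(new DeadTimeModule());
        Map<String, Double> snapshot = Map.of("deadTimePercent", 8.2);
        for (LogicModule module : modules) {
            module.analyze(snapshot).ifPresent(System.out::println);
        }
    }
}
```

Keeping each module small and independent, as the abstract notes, means new failure signatures can be added without touching the rest of the system.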
15th Int. Conf. on Accelerator and Large Experimental Physics Control Systems (ICALEPCS'15), Melbourne, Australia, 17-23 October 2015 | 2015
F. Glege; Jean-Marc Andre; Anastasios Andronidis; Ulf Behrens; James G Branson; Olivier Chaze; Sergio Cittolin; Georgiana-Lavinia Darlea; Christian Deldicque; M. Dobson; Aymeric Dupont; S. Erhan; Dominique Gigi; Guillelmo Gomez-Ceballos; Jeroen Hegeman; Oliver Holme; A. Holzner; Mindaugas Janulis; Raul Jimenez Estupinan; L. Masetti; F. Meijers; E. Meschi; Remigius K. Mommsen; S. Morovic; Carlos Nunez-Barranco-Fernandez; V. O'Dell; Luciano Orsini; Christoph Paus; Andrea Petrucci; M. Pieri
Remote monitoring and control has been an important aspect of physics detector controls since it became available. Due to the complexity of the systems, the 24/7 running requirements and limited human resources, remote access to perform interventions is essential. The amount of data to visualize, the required visualization types and cybersecurity standards demand a professional, complete solution. Using the example of the integration of the CMS detector control system into our ORACLE WebCenter infrastructure, the mechanisms and tools available for integration with control systems shall be discussed. Authentication has been delegated to WebCenter, and authorization is shared between the web server and the control system. Session handling exists in both systems and has to be matched. Concurrent access by multiple users has to be handled. The underlying JEE infrastructure is specialized in visualization and information sharing. On the other hand, the structure of a JEE system resembles a distributed control system. Therefore an outlook shall be given on tasks which could be covered by the web servers rather than the control system.
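The split described above, with authentication delegated to the web tier and authorization kept with the control system, can be sketched as follows; the role names and the RemoteControlsAuthSketch class are invented for illustration and are not part of WebCenter or the CMS controls software:

```java
// Minimal sketch of the delegation pattern: the web tier is trusted to
// authenticate the user, while the controls side only maps the authenticated
// identity to a role and decides what that role may do.
import java.util.Map;
import java.util.Set;

class RemoteControlsAuthSketch {
    // Role assignments that would normally live in the control system's own configuration.
    private static final Map<String, Set<String>> ROLES = Map.of(
            "daq_shifter", Set.of("view", "start_run", "stop_run"),
            "guest",       Set.of("view"));

    // userRole comes from the already-authenticated web session (e.g. the portal),
    // so the controls side performs authorization only.
    static boolean isAllowed(String userRole, String action) {
        return ROLES.getOrDefault(userRole, Set.of()).contains(action);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("guest", "stop_run"));   // false
        System.out.println(isAllowed("daq_shifter", "view")); // true
    }
}
```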