A. Telesca
CERN
Publications
Featured research published by A. Telesca.
Journal of Instrumentation | 2015
F. Carena; V. Chibante Barroso; F. Costa; S. Chapeland; C. Delort; E. Dénes; R. Divià; U. Fuchs; A. Grigore; C. Ionita; T. Kiss; G. Simonetti; C. Soos; A. Telesca; P. Vande Vyvre; B. von Haller
ALICE (A Large Ion Collider Experiment) is the detector system at the LHC (Large Hadron Collider) that studies the behaviour of strongly interacting matter and the quark-gluon plasma. The information sent by the sub-detectors composing ALICE is read out by DATE (Data Acquisition and Test Environment), the ALICE data acquisition software, using hundreds of multi-mode optical links called DDL (Detector Data Link). To cope with the higher luminosity of the LHC, the bandwidth of the DDL links will be upgraded in 2015. This paper describes the evolution of the DDL protocol from 2 to 6 Gbit/s.
Journal of Physics: Conference Series | 2014
A. Telesca; F. Carena; S. Chapeland; V. Chibante Barroso; F. Costa; E. Dénes; R. Divià; U. Fuchs; A. Grigore; C. Ionita; C. Delort; G. Simonetti; C. Soos; P. Vande Vyvre; B. von Haller
ALICE (A Large Ion Collider Experiment) is a heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The ALICE Data-AcQuisition (DAQ) system handles the data flow from the sub-detector electronics to the permanent data storage in the CERN computing center. The DAQ farm consists of about 1000 devices of many different types, ranging from directly accessible machines to storage arrays and custom optical links. The system performance monitoring tool used during LHC Run 1 will be replaced by a new tool for Run 2. This paper shows the results of an evaluation that has been conducted on six publicly available monitoring tools. The evaluation has been carried out by taking into account selection criteria such as scalability, flexibility, and reliability, as well as data collection methods and display. All the tools have been prototyped and evaluated according to those criteria. We will describe the considerations that have led to the selection of the Zabbix monitoring tool for the DAQ farm. The results of the tests conducted in the ALICE DAQ laboratory will be presented. In addition, the deployment of the software on the DAQ machines will be described in terms of the metrics collected and the data collection methods. We will illustrate how remote nodes are monitored with Zabbix by using SNMP-based agents and how DAQ-specific metrics are retrieved and displayed. We will also show how the monitoring information is accessed and made available via the graphical user interface and how Zabbix communicates with the other DAQ online systems for notification and reporting.
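As a purely illustrative aside, the "DAQ-specific metrics" mentioned above are the kind of values that Zabbix can ingest as trapper items via the standard zabbix_sender utility. The sketch below is not the ALICE production code; the server name, host name and item key are hypothetical examples.

```python
# Minimal sketch of pushing one custom metric to a Zabbix server using the
# standard `zabbix_sender` command-line tool. Server, host and item key are
# placeholder assumptions, not the real ALICE DAQ configuration.
import subprocess

def push_metric(server: str, host: str, key: str, value: float) -> None:
    """Send one trapper item value to Zabbix via zabbix_sender."""
    subprocess.run(
        ["zabbix_sender", "-z", server, "-s", host, "-k", key, "-o", str(value)],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical example: report an event-building rate for one DAQ node.
    push_metric("zabbix.example.cern.ch", "daq-node-01", "daq.event_rate", 4250.0)
```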
IEEE-NPSS Real-Time Conference | 2009
A. Telesca; P. Vande Vyvre; R. Divià; V. Altini; F. Carena; S. Chapeland; V. Chibante Barroso; F. Costa; M. Frauman; Irina Makhlyueva; O. Rademakers-Di Rosa; D. Rodriguez Navarro; F. Roukoutakis; K Schossmaier; C. Soos; B. von Haller
ALICE (A Large Ion Collider Experiment) is one of the four experiments at the CERN LHC (Large Hadron Collider) and is dedicated to the study of nucleus-nucleus interactions. Its data acquisition system has to record the steady stream of very large events resulting from central collisions, with the ability to select and record rare cross-section processes. These requirements result in an aggregate event-building bandwidth of up to 2.5 GBytes/s and a storage capability of up to 1.25 GBytes/s, giving a total of more than 1 PByte of data every year. This makes the performance of the mass storage devices a dominant factor for the overall system behavior and throughput. In this paper, we present an analysis of the performance of the storage system used in the ALICE experiment by studying the impact of different configuration parameters on the system throughput. The aim of this analysis is to determine the storage configuration which gives the best system performance. In particular, we show the influence of file and block size on the writing and reading rates, and we present the relative performance of a clustered file system (StorNext®) and a regular journaled file system (XFS) based on disk arrays in a Fibre Channel storage area network (SAN). We also present a comparative analysis of different parametrizations of Redundant Arrays of Inexpensive Disks (RAID), varying the parity level and the number of disks in each array. Finally, we conclude with the aggregate performance obtained when concurrent writing and reading streams share the system.
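To make the block-size study concrete, the sketch below shows the general shape of such a parameter scan: measure sequential write throughput for several block sizes. It is a simplified local-file benchmark under stated assumptions (synchronous flush to disk, arbitrary file path and sizes), not the ALICE SAN benchmark itself.

```python
# Illustrative block-size scan: write a fixed amount of data with different
# block sizes and report the achieved throughput. Paths and sizes are
# arbitrary assumptions for demonstration only.
import os
import time

def write_throughput(path: str, block_size: int, total_bytes: int) -> float:
    """Write `total_bytes` in chunks of `block_size` and return MB/s."""
    block = b"\0" * block_size
    start = time.perf_counter()
    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            f.write(block)
            written += block_size
        f.flush()
        os.fsync(f.fileno())  # include the time needed to reach the device
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_bytes / elapsed / 1e6

if __name__ == "__main__":
    for bs in (64 * 1024, 1024 * 1024, 8 * 1024 * 1024):
        rate = write_throughput("bench.tmp", bs, 256 * 1024 * 1024)
        print(f"block size {bs:>9} B -> {rate:7.1f} MB/s")
```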
IEEE-NPSS Real-Time Conference | 2016
F. Costa; Vasco Miguel Chibante Barroso; C. Soos; R. Divià; Adam Wegrzynek; T. Kiss; H. Engel; Giuseppe Simonetti; J. Niedziela; Barthelemy Von Haller; A. Telesca; F. Carena; Pierre Vande Vyvre; U. Fuchs; S. Chapeland
ALICE (A Large Ion Collider Experiment) is the detector system at the LHC (Large Hadron Collider) optimized for the study of heavy-ion collisions at interaction rates up to 50 kHz and data rates beyond 1 TB/s. Its main aim is to study the behavior of strongly interacting matter and the quark-gluon plasma. ALICE is preparing a major upgrade and, starting from 2021, it will collect data with several upgraded sub-detectors (TPC, ITS, Muon Tracker and Chamber, TRD and TOF). The ALICE DAQ read-out system will be upgraded as well, with a new read-out link called GBT (GigaBit Transceiver) with a maximum speed of 4.48 Gb/s, and a new PCIe Gen3 x16 interface card called CRU (Common Read-out Unit). Several test beams have been scheduled for the test and characterization of prototypes or parts of the new detectors. The test beams usually last only one or two weeks, so it is very important to use a stable read-out system in order to make the most of the data-taking period and collect as much statistics as possible. The ALICE DAQ and CRU teams have proposed a data acquisition chain based on the current ALICE DAQ framework in order to provide a reliable read-out system. The new GBT link, transferring data from the front-end electronics, will be directly connected to the C-RORC, the current read-out PCIe card used in the ALICE experiment. The ALICE DATE software is a stable solution that has been in production for more than 10 years. Moreover, most of the ALICE detector developers are already familiar with the software and its different analysis tools. This setup will allow the detector teams to focus on the tests of their detectors and electronics, without worrying about the stability of the data acquisition system. An additional development has been carried out: a C-RORC-based Detector Data Generator (DDG). The DDG has been designed to be a realistic data source for the GBT. It generates simulated events in continuous mode and sends them to the DAQ system through the optical fibers, at a maximum of 4.48 Gb/s per GBT link. This hardware tool will be used to test and verify the correct behavior of the new DAQ read-out card, the CRU, once it becomes available to the developers. Indeed, the CRU team will not have real detector electronics with which to perform communication and performance tests, so during the test and commissioning phase it is vital to have a data generator able to simulate the FEE behavior. This contribution describes the firmware and software features of the proposed read-out system and explains how the read-out chain will be used in future tests and how it can help the development of the new ALICE DAQ software.
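The real DDG is implemented in C-RORC firmware and drives optical GBT links; as a purely software stand-in, the sketch below illustrates the idea of a continuous event generator that streams fixed-size simulated events to a receiver. The transport (TCP), the 8-byte header layout, the event size and the receiver address are all invented for illustration.

```python
# Simplified software model of a continuous data generator in the spirit of
# the DDG: emit simulated events in an endless stream. Header format, event
# size and endpoint are hypothetical assumptions, not the DDG data format.
import itertools
import socket
import struct

def stream_events(host: str, port: int, event_size: int = 8192) -> None:
    payload = bytes(event_size)  # dummy event content
    with socket.create_connection((host, port)) as sock:
        for event_id in itertools.count():
            # Assumed header: 32-bit event counter + 32-bit payload length.
            header = struct.pack("!II", event_id & 0xFFFFFFFF, event_size)
            sock.sendall(header + payload)  # continuous, un-triggered flow

if __name__ == "__main__":
    stream_events("127.0.0.1", 9000)  # hypothetical read-out prototype listening here
```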
Journal of Physics: Conference Series | 2014
Ananya; A Alarcon Do Passo Suaide; C. Alves Garcia Prado; T. Alt; L. Aphecetche; N Agrawal; A Avasthi; M. Bach; R. Bala; G. G. Barnaföldi; A. Bhasin; J. Belikov; F. Bellini; L. Betev; T. Breitner; P. Buncic; F. Carena; S. Chapeland; V. Chibante Barroso; F Cliff; F. Costa; L Cunqueiro Mendez; Sadhana Dash; C Delort; E. Dénes; R. Divià; B. Doenigus; H. Engel; D. Eschweiler; U. Fuchs
ALICE (A Large Ion Collider Experiment) is a detector dedicated to the study of heavy-ion collisions, exploring the physics of strongly interacting nuclear matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). After the second long shutdown of the LHC, the ALICE experiment will be upgraded to make high-precision measurements of rare probes at low pT, which cannot be selected with a trigger and therefore require a very large sample of events recorded on tape. The online computing system will be completely redesigned to address the major challenge of sampling the full 50 kHz Pb-Pb interaction rate, increasing the present limit by a factor of 100. This upgrade will also include the continuous un-triggered read-out of two detectors, the ITS (Inner Tracking System) and the TPC (Time Projection Chamber), producing a sustained throughput of 1 TB/s. This unprecedented data rate will be reduced by adopting an entirely new strategy in which calibration and reconstruction are performed online, and only the reconstruction results are stored while the raw data are discarded. This strategy, already demonstrated in production on the TPC data since 2011, will be optimized for the online usage of reconstruction algorithms. This implies a much tighter coupling between the online and offline computing systems. An R&D program has been set up to meet this huge challenge. The object of this paper is to present this program and its first results.
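A quick back-of-the-envelope check of the figures quoted in this abstract (a rough estimate from the quoted numbers only, not a result from the paper): a sustained throughput of about 1 TB/s at a 50 kHz Pb-Pb interaction rate corresponds to an average data volume per interaction of roughly 20 MB.

```python
# Rough arithmetic based on the numbers quoted above (estimate only).
throughput_bytes_per_s = 1e12   # ~1 TB/s from continuous read-out
interaction_rate_hz = 50e3      # 50 kHz Pb-Pb interaction rate
print(throughput_bytes_per_s / interaction_rate_hz / 1e6, "MB per interaction on average")
# -> 20.0 MB per interaction on average
```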
Journal of Physics: Conference Series | 2012
F. Carena; S. Chapeland; V. Chibante Barroso; F. Costa; E. Dénes; R. Divià; U. Fuchs; A. Grigore; T. Kiss; W Rauch; G Rubin; G. Simonetti; C. Soos; A. Telesca; P. Vande Vyvre; B. von Haller
In November 2009, after 15 years of design and installation, the ALICE experiment started to detect and record the first collisions produced by the LHC. It has been collecting hundreds of millions of events ever since, with both proton and heavy-ion collisions. The future scientific programme of ALICE has been refined following the first year of data taking. The physics targeted beyond 2018 will be the study of rare signals. Several detectors will be upgraded, modified, or replaced to prepare ALICE for future physics challenges. An upgrade of the triggering and readout systems is also required to accommodate the needs of the upgraded ALICE and to better select the data of the rare physics channels. The ALICE upgrade will have major implications for the detector electronics and controls, data acquisition, event triggering, and offline computing and storage systems. Moreover, the experience accumulated during more than two years of operation has also led to new requirements for the control software. We will review all these new needs and the current R&D activities to address them. Several papers at the same conference present elements of the ALICE online system in more detail.
Journal of Physics: Conference Series | 2012
S. Chapeland; F. Carena; Vasco Miguel Chibante Barroso; F. Costa; E. Dénes; R. Divià; U. Fuchs; A. Grigore; G. Simonetti; C. Soos; A. Telesca; Pierre Vande Vyvre; Barthelemy Von Haller
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The DAQ (Data Acquisition System) facilities handle the data flow from the detector electronics up to the mass storage. The DAQ system is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches), and controls hundreds of distributed hardware and software components interacting together. This paper presents Orthos, the alarm system used to detect, log, report, and follow up abnormal situations on the DAQ machines at the experimental area. The main objective of this package is to integrate alarm detection and notification mechanisms with a full-featured issue tracker, in order to prioritize, assign, and fix system failures optimally. This tool relies on a database repository with a logic engine, SQL interfaces to inject or query metrics, and dynamic web pages for user interaction. We describe the system architecture, the technologies used for the implementation, and the integration with existing monitoring tools.
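To illustrate the inject-and-query pattern described above, the following sketch uses SQLite as a stand-in for the Orthos repository. The table layout, metric names and threshold rule are hypothetical; the real schema and logic engine are not detailed in the abstract.

```python
# Toy metric repository: inject samples via SQL and query for values that
# should raise an alarm. Schema and threshold logic are invented examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (node TEXT, name TEXT, value REAL, ts REAL)")

def inject(node: str, name: str, value: float, ts: float) -> None:
    """SQL interface to inject one metric sample."""
    conn.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)", (node, name, value, ts))

def alarms(name: str, threshold: float):
    """Return (node, value) pairs whose samples exceed the given threshold."""
    return conn.execute(
        "SELECT node, value FROM metrics WHERE name = ? AND value > ? ORDER BY ts DESC",
        (name, threshold),
    ).fetchall()

inject("daq-node-01", "disk_used_pct", 97.0, 1.0)  # hypothetical samples
inject("daq-node-02", "disk_used_pct", 40.0, 1.0)
print(alarms("disk_used_pct", 95.0))               # -> [('daq-node-01', 97.0)]
```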
Journal of Physics: Conference Series | 2011
B. von Haller; A. Telesca; S. Chapeland; F. Carena; V. Chibante Barroso; F. Costa; E. Denes; R. Divià; U. Fuchs; G. Simonetti; C. Soos; P. Vande Vyvre
ALICE is one of the four experiments installed at the CERN Large Hadron Collider (LHC), especially designed for the study of heavy-ion collisions. The online Data Quality Monitoring (DQM) is an important part of the data acquisition (DAQ) software. It involves the online gathering of monitored data, their analysis by user-defined algorithms, and the visualization of the results. This paper presents the final design, as well as the latest and upcoming features, of the ALICE-specific DQM software called AMORE (Automatic MonitoRing Environment). It describes the challenges we faced during its implementation, including the performance issues, and how we tested and handled them, in particular by using a scalable and robust publish-subscribe architecture. We also review the ongoing and increasing adoption of this tool amongst the ALICE collaboration and the measures taken to develop, in synergy with their respective teams, efficient monitoring modules for the sub-detectors. The related packaging and release procedure needed by such a distributed framework is also described. We finally give an overview of the wide range of uses people make of this framework, and we review our own experience, before and during the LHC start-up, monitoring the data quality on both the sub-detector and DAQ sides in a real-world and challenging environment.
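As a minimal illustration of the publish-subscribe idea underlying AMORE, the sketch below shows an in-process broker where publishers (monitoring agents) push objects under a key and subscribers retrieve the latest version. The class, key and object contents are invented for illustration; the real framework publishes ROOT-based monitoring objects through a database-backed layer.

```python
# Toy publish-subscribe broker for monitoring objects (illustrative only).
from collections import defaultdict
from typing import Any, Dict, Tuple

class MonitorBroker:
    def __init__(self) -> None:
        self._store: Dict[str, Any] = {}
        self._versions = defaultdict(int)

    def publish(self, key: str, obj: Any) -> None:
        """Called by a monitoring agent after each analysis cycle."""
        self._store[key] = obj
        self._versions[key] += 1

    def subscribe(self, key: str) -> Tuple[Any, int]:
        """Return the latest published object and its version number."""
        return self._store.get(key), self._versions[key]

broker = MonitorBroker()
broker.publish("TPC/cluster_charge_hist", {"entries": 12345, "mean": 4.2})
print(broker.subscribe("TPC/cluster_charge_hist"))
```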
Journal of Physics: Conference Series | 2011
F. Carena; S. Chapeland; V. Chibante Barroso; F. Costa; E. Dénes; R. Divià; U. Fuchs; G. Simonetti; C. Soos; A. Telesca; P. Vande Vyvre; B. von Haller
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the Quark-Gluon Plasma at the CERN Large Hadron Collider (LHC). A large-bandwidth and flexible Data-Acquisition System (DAQ) has been designed and deployed to collect sufficient statistics in the short running time available per year for heavy ions and to accommodate the very different requirements originating from the 18 sub-detectors. After several months of data taking with beam, a great deal of experience has been accumulated and some important developments have been initiated in order to evolve towards a more automated and reliable experiment. We will present the experience accumulated so far and the new developments. Several upgrades of existing ALICE detectors, or additions of new ones, have also been proposed, with a significant impact on the DAQ. We will review these proposals, their implications for the DAQ, and the way they will be addressed.
IEEE-NPSS Real-Time Conference | 2009
V. Altini; F. Carena; S. Chapeland; V. Chibante Barroso; F. Costa; R. Divià; M. Frauman; U. Fuchs; Irina Makhlyueva; O. Rademakers; D. Rodriguez Navarro; F. Roukoutakis; K Schossmaier; C. Soos; A. Telesca; P. Vande Vyvre; B. von Haller
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). The ALICE Data Acquisition system (DAQ) is made of a large number of distributed hardware and software components, which rely on several online databases: the configuration database, describing the counting-room machines, some detector-specific electronics settings and the DAQ and Experiment Control System runtime parameters; the log database, centrally collecting reports from running processes; the experiment logbook, tracking the automatically filled run statistics and the operator entries; the online archive of constantly updated data-quality monitoring reports; the file indexing services, including the status of transient files for permanent storage and the calibration results for offline export; the user guides (Wiki); test databases to check the interfaces with external components; and reference data sets used to restore known configurations. With 35 GB of online data hosted on a MySQL server and organized in more than 500 relational tables for a total of 40 million rows, this information is populated and accessed through various front-ends, including a C library for efficient repetitive access, Tcl/Tk GUIs for the configuration editors and log browser, HTML/PHP pages for the logbook, and command-line tools for scripting and expert debugging. Exhaustive hardware benchmarks have been conducted to select the appropriate database server architecture. Secure access from the private and general networks has been implemented. Ad-hoc monitoring and backup mechanisms have been designed and deployed. We discuss the implementation of these complex databases and how the inhomogeneous requirements have been addressed. We also review the performance analysis outcome after more than one year in production and show results of data mining from this central information source.
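As a small illustration of scripted access to a MySQL-hosted logbook of the kind mentioned above, the sketch below runs a simple query from Python. The host, credentials, table and column names are placeholders invented for the example; the production front-ends are the C library, Tcl/Tk GUIs and PHP pages described in the abstract.

```python
# Hypothetical logbook query via the MySQL Connector/Python driver
# (pip install mysql-connector-python). All names and credentials are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="daq-db.example.cern.ch",   # placeholder host
    user="reader",                   # placeholder read-only account
    password="***",                  # placeholder credential
    database="logbook",
)
cur = conn.cursor()
# Hypothetical schema: list the ten most recent runs and their event counts.
cur.execute("SELECT run_number, n_events FROM runs ORDER BY run_number DESC LIMIT 10")
for run_number, n_events in cur:
    print(run_number, n_events)
conn.close()
```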