V. Chibante Barroso
CERN
Publications
Featured research published by V. Chibante Barroso.
Journal of Physics: Conference Series | 2010
B. von Haller; F. Roukoutakis; S. Chapeland; V. Altini; F. Carena; V. Chibante Barroso; F. Costa; R. Divià; U. Fuchs; I. Makhlyueva; K. Schossmaier; C. Soos; P. Vande Vyvre
ALICE is one of the four experiments installed at the CERN Large Hadron Collider (LHC), designed especially for the study of heavy-ion collisions. The online Data Quality Monitoring (DQM) is an important part of the data acquisition (DAQ) software. It involves the online gathering of monitored data, its analysis by user-defined algorithms, and its visualization. This paper presents the final design, as well as the latest and upcoming features, of ALICE's dedicated DQM software called AMORE (Automatic MonitoRing Environment). It describes the challenges we faced during its implementation, including the performance issues, and how we tested and handled them, in particular by using a scalable and robust publish-subscribe architecture. We also review the ongoing and increasing adoption of this tool among the ALICE collaboration and the measures taken to develop, in synergy with their respective teams, efficient monitoring modules for the sub-detectors. The related packaging and release procedure needed by such a distributed framework is also described. Finally, we give an overview of the wide range of uses of this framework, and we review our own experience, before and during the LHC start-up, when monitoring the data quality on both the sub-detector and the DAQ side in a real-world and challenging environment.
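For illustration, here is a minimal in-process sketch of the publish-subscribe pattern the abstract describes: publishers emit monitored objects, and user-defined subscribers (analysis and visualization) consume them. All names (MonitorObject, Broker) are invented for the example and are not part of AMORE's actual API or transport.

```cpp
// Minimal publish-subscribe sketch: gathering -> analysis -> visualization.
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct MonitorObject {            // a monitored quantity with its source
    std::string source;           // e.g. a sub-detector name
    double value;                 // the published measurement
};

class Broker {                    // routes objects from publishers to subscribers
    std::map<std::string,
             std::vector<std::function<void(const MonitorObject&)>>> subs_;
public:
    void subscribe(const std::string& topic,
                   std::function<void(const MonitorObject&)> cb) {
        subs_[topic].push_back(std::move(cb));
    }
    void publish(const std::string& topic, const MonitorObject& mo) {
        for (auto& cb : subs_[topic]) cb(mo);   // fan out to all subscribers
    }
};

int main() {
    Broker broker;
    // A "visualization" subscriber and a user-defined quality check.
    broker.subscribe("TPC/occupancy", [](const MonitorObject& mo) {
        std::cout << mo.source << " occupancy = " << mo.value << '\n';
    });
    broker.subscribe("TPC/occupancy", [](const MonitorObject& mo) {
        if (mo.value > 0.9) std::cout << "quality flag: BAD\n";
    });
    broker.publish("TPC/occupancy", {"TPC", 0.95});  // the gathering step
}
```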
Journal of Physics: Conference Series | 2010
S. Chapeland; V. Altini; F. Carena; V. Chibante Barroso; F. Costa; R. Divià; U. Fuchs; I. Makhlyueva; F. Roukoutakis; K. Schossmaier; C. Soos; P. Vande Vyvre; B. von Haller
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). Some specific calibration tasks are performed regularly for each of the 18 ALICE sub-detectors in order to achieve the most accurate physics measurements. These procedures involve event analysis in a wide range of experimental conditions, involving various trigger types, data throughputs, electronics settings, and algorithms, both during short sub-detector standalone runs and long global physics runs. A framework was designed to collect statistics and compute some of the calibration parameters directly online, using resources of the Data Acquisition System (DAQ) and benefiting from its inherent parallel architecture to process events. This system has been used at the experimental area for one year and includes more than 30 calibration routines in production. This paper describes the framework architecture and the synchronization mechanisms involved at the level of the Experiment Control System (ECS) of ALICE. The software libraries interfacing detector algorithms (DA) to the online data flow, configuration database, experiment logbook, and offline system are reviewed. The test protocols followed to integrate and validate each sub-detector component are also discussed, including the automatic build system and validation procedures used to ensure a smooth deployment. The offline post-processing and archiving of the DA results is covered in a separate paper.
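A detector-algorithm plugin interface could look roughly like the following hypothetical sketch: the framework feeds events to a per-detector algorithm and collects the computed calibration parameters at end of run. This is not the actual DA API, only an illustration of the pattern.

```cpp
// Hypothetical detector-algorithm (DA) interface sketch.
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Event { std::vector<uint8_t> payload; };  // a raw event as seen online

class DetectorAlgorithm {                        // one instance per calibration task
public:
    virtual ~DetectorAlgorithm() = default;
    virtual void processEvent(const Event& e) = 0;             // called per event
    virtual std::map<std::string, double> results() const = 0; // end-of-run output
};

// Example: accumulate a pedestal-like mean over the first payload byte.
class PedestalDA : public DetectorAlgorithm {
    double sum_ = 0; long n_ = 0;
public:
    void processEvent(const Event& e) override {
        if (!e.payload.empty()) { sum_ += e.payload[0]; ++n_; }
    }
    std::map<std::string, double> results() const override {
        return {{"pedestal_mean", n_ ? sum_ / n_ : 0.0}};
    }
};

int main() {
    PedestalDA da;
    da.processEvent({{10, 2}});   // the framework would loop over the whole run
    da.processEvent({{20, 4}});
    for (const auto& [k, v] : da.results())
        std::cout << k << " = " << v << '\n';
}
```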
Journal of Instrumentation | 2015
F. Carena; V. Chibante Barroso; F. Costa; S. Chapeland; C. Delort; E. Dénes; R. Divià; U. Fuchs; A. Grigore; C. Ionita; T. Kiss; G. Simonetti; C. Soos; A. Telesca; P. Vande Vyvre; B. von Haller
ALICE (A Large Ion Collider Experiment) is the detector system at the LHC (Large Hadron Collider) that studies the behaviour of strongly interacting matter and the quark-gluon plasma. The data sent by the sub-detectors composing ALICE are read out by DATE (Data Acquisition and Test Environment), the ALICE data acquisition software, using hundreds of multi-mode optical links called DDL (Detector Data Link). To cope with the higher luminosity of the LHC, the bandwidth of the DDL links will be upgraded in 2015. This paper describes the evolution of the DDL protocol from 2 to 6 Gbit/s.
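A back-of-the-envelope view of what the upgrade buys, under stated assumptions: the link count of 500 stands in for "hundreds of links", and the encoding efficiency is an assumed 8b/10b-style figure, not the actual DDL encoding.

```cpp
// Rough aggregate-throughput arithmetic for the 6 Gbit/s DDL upgrade.
#include <iostream>

int main() {
    const int    links        = 500;    // assumed number of DDLs ("hundreds")
    const double line_gbps    = 6.0;    // upgraded line rate per link
    const double encoding_eff = 0.8;    // assumed 8b/10b-style efficiency
    const double payload_gbps = line_gbps * encoding_eff;
    std::cout << "per-link payload: " << payload_gbps << " Gbit/s\n"
              << "aggregate: " << links * payload_gbps / 8 << " GByte/s\n";
}
```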
Journal of Physics: Conference Series | 2010
R. Divià; U. Fuchs; I. Makhlyueva; P. Vande Vyvre; V. Altini; F. Carena; S. Chapeland; V. Chibante Barroso; F. Costa; F. Roukoutakis; K. Schossmaier; C. Soos; B. von Haller
The ALICE (A Large Ion Collider Experiment) Data Acquisition (DAQ) system has the unprecedented requirement to ensure a very high volume, sustained data stream between the ALICE Detector and the Permanent Data Storage (PDS) system, which is used as the main data repository for Event processing and Offline Computing. The key component to accomplish this task is the Transient Data Storage System (TDS), a set of data storage elements with their associated hardware and software components. The TDS supports raw data collection, its conversion into a format suitable for subsequent high-level analysis, storage of the result using highly parallelized architectures, access via a cluster file system capable of creating high-speed partitions via its affinity feature, and transfer to the final destination via dedicated data links. We describe the methods and the components used to validate, test, implement, operate, and monitor the ALICE Online Data Storage system and the way it has been used in the early days of commissioning and operation for the ALICE Detector. We also introduce the future developments needed from next year, when the ALICE Data Acquisition System will shift its requirements from those associated with the test and commissioning phase to those imposed by long-duration data-taking periods alternating with shorter validation and maintenance tasks, which will be needed to adequately operate the ALICE Experiment.
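As a rough illustration of spreading incoming streams across storage elements, here is a hypothetical round-robin assignment sketch; the real TDS scheduling and affinity logic is certainly more involved, and all names here are invented.

```cpp
// Illustrative round-robin assignment of event streams to storage elements.
#include <iostream>
#include <string>
#include <vector>

class TransientStore {
    std::vector<std::string> elements_;  // storage element identifiers
    size_t next_ = 0;                    // round-robin cursor
public:
    explicit TransientStore(std::vector<std::string> e)
        : elements_(std::move(e)) {}
    const std::string& assign() {        // pick the element for the next stream
        const std::string& e = elements_[next_];
        next_ = (next_ + 1) % elements_.size();
        return e;
    }
};

int main() {
    TransientStore tds({"tds01", "tds02", "tds03"});
    for (int stream = 0; stream < 5; ++stream)
        std::cout << "stream " << stream << " -> " << tds.assign() << '\n';
}
```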
Journal of Physics: Conference Series | 2014
A. Telesca; F. Carena; S. Chapeland; V. Chibante Barroso; F. Costa; E. Dénes; R. Divià; U. Fuchs; A. Grigore; C. Ionita; C. Delort; G. Simonetti; C. Soos; P. Vande Vyvre; B. von Haller
ALICE (A Large Ion Collider Experiment) is a heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The ALICE Data-AcQuisition (DAQ) system handles the data flow from the sub-detector electronics to the permanent data storage in the CERN computing centre. The DAQ farm consists of about 1000 devices of many different types, ranging from directly accessible machines to storage arrays and custom optical links. The system performance monitoring tool used during the LHC run 1 will be replaced by a new tool for run 2. This paper shows the results of an evaluation conducted on six publicly available monitoring tools. The evaluation has been carried out by taking into account selection criteria such as scalability, flexibility, and reliability, as well as data collection methods and display. All the tools have been prototyped and evaluated according to those criteria. We describe the considerations that have led to the selection of the Zabbix monitoring tool for the DAQ farm. The results of the tests conducted in the ALICE DAQ laboratory are presented. In addition, the deployment of the software on the DAQ machines in terms of metrics collected and data collection methods is described. We illustrate how remote nodes are monitored with Zabbix by using SNMP-based agents and how DAQ-specific metrics are retrieved and displayed. We also show how the monitoring information is accessed and made available via the graphical user interface and how Zabbix communicates with the other DAQ online systems for notification and reporting.
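As one example of how a DAQ-specific metric might reach Zabbix, the sketch below shells out to the standard zabbix_sender utility (used for trapper items). The server, host, and key names are placeholders; the deployment described in the paper also relies on SNMP-based agents.

```cpp
// Hedged sketch: pushing a custom DAQ metric to Zabbix via zabbix_sender.
#include <cstdio>
#include <cstdlib>
#include <string>

bool pushMetric(const std::string& server, const std::string& host,
                const std::string& key, double value) {
    const std::string cmd = "zabbix_sender -z " + server + " -s " + host +
                            " -k " + key + " -o " + std::to_string(value);
    return std::system(cmd.c_str()) == 0;   // zabbix_sender exits 0 on success
}

int main() {
    // e.g. report the current event-building rate of one DAQ node
    if (!pushMetric("zabbix.example.cern.ch", "daq-node-01",
                    "daq.event_rate", 1234.5))
        std::fprintf(stderr, "zabbix_sender failed\n");
}
```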
IEEE-NPSS Real-Time Conference | 2009
A. Telesca; P. Vande Vyvre; R. Divià; V. Altini; F. Carena; S. Chapeland; V. Chibante Barroso; F. Costa; M. Frauman; I. Makhlyueva; O. Rademakers-Di Rosa; D. Rodriguez Navarro; F. Roukoutakis; K. Schossmaier; C. Soos; B. von Haller
ALICE (A Large Ion Collider Experiment) is one of the four experiments at the CERN LHC (Large Hadron Collider) and is dedicated to the study of nucleus-nucleus interactions. Its data acquisition system has to record the steady stream of very large events resulting from central collisions, with an ability to select and record rare cross-section processes. These requirements result in an aggregate event-building bandwidth of up to 2.5 GB/s and a storage capability of up to 1.25 GB/s, giving a total of more than 1 PB of data every year. This makes the performance of the mass storage devices a dominant factor for the overall system behaviour and throughput. In this paper, we present an analysis of the performance of the storage system used in the ALICE experiment by studying the impact of different configuration parameters on the system throughput. The aim of this analysis is to determine the storage configuration which gives the best system performance. In particular, we show the influence of file and block size on the writing and reading rates, and we present the relative performance of a clustered file system (StorNext®) and a regular journaled file system (XFS) based on disk arrays in a Fibre Channel storage area network (SAN). We also present a comparative analysis between different parametrizations of Redundant Arrays of Inexpensive Disks (RAID), varying the parity level and the number of disks in each array. Finally, we conclude with the aggregate performance when concurrent writing and reading streams are sharing the system.
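The kind of measurement behind the file- and block-size study can be sketched as follows: write a fixed volume sequentially with varying block sizes and time it. A production benchmark would use direct I/O against the SAN rather than buffered stdio; this is purely illustrative, and the sizes chosen are arbitrary.

```cpp
// Illustrative block-size sweep for sequential write throughput.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const size_t total = 256UL << 20;                 // 256 MiB per trial
    const size_t blocks[] = {4096, 65536, 1048576};   // 4 KiB, 64 KiB, 1 MiB
    for (size_t block : blocks) {
        std::vector<char> buf(block, 0);
        std::FILE* f = std::fopen("bench.dat", "wb");
        auto t0 = std::chrono::steady_clock::now();
        for (size_t done = 0; done < total; done += block)
            std::fwrite(buf.data(), 1, block, f);
        std::fclose(f);   // flushes stdio buffers; OS page cache still applies
        std::chrono::duration<double> dt =
            std::chrono::steady_clock::now() - t0;
        std::printf("block %8zu B: %.1f MB/s\n",
                    block, total / 1e6 / dt.count());
    }
    std::remove("bench.dat");
}
```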
Journal of Physics: Conference Series | 2014
Ananya; A. Alarcon Do Passo Suaide; C. Alves Garcia Prado; T. Alt; L. Aphecetche; N. Agrawal; A. Avasthi; M. Bach; R. Bala; G. G. Barnaföldi; A. Bhasin; J. Belikov; F. Bellini; L. Betev; T. Breitner; P. Buncic; F. Carena; S. Chapeland; V. Chibante Barroso; F. Cliff; F. Costa; L. Cunqueiro Mendez; S. Dash; C. Delort; E. Dénes; R. Divià; B. Doenigus; H. Engel; D. Eschweiler; U. Fuchs
ALICE (A Large Ion Collider Experiment) is a detector dedicated to the study of heavy-ion collisions, exploring the physics of strongly interacting nuclear matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). After the second long shutdown of the LHC, the ALICE Experiment will be upgraded to make high-precision measurements of rare probes at low pT, which cannot be selected with a trigger and therefore require a very large sample of events recorded on tape. The online computing system will be completely redesigned to address the major challenge of sampling the full 50 kHz Pb-Pb interaction rate, increasing the present limit by a factor of 100. This upgrade will also include the continuous un-triggered read-out of two detectors, the ITS (Inner Tracking System) and the TPC (Time Projection Chamber), producing a sustained throughput of 1 TB/s. This unprecedented data rate will be reduced by adopting an entirely new strategy where calibration and reconstruction are performed online, and only the reconstruction results are stored while the raw data are discarded. This approach, already demonstrated in production on the TPC data since 2011, will be optimized for the online usage of reconstruction algorithms. This implies a much tighter coupling between the online and offline computing systems. An R&D program has been set up to meet this huge challenge. The aim of this paper is to present this program and its first results.
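The numbers quoted above imply an average of roughly 20 MB of data per interaction, which is why reduction by online reconstruction is essential; the arithmetic is spelled out below.

```cpp
// Quick arithmetic implied by the abstract: 1 TB/s at 50 kHz -> ~20 MB each.
#include <iostream>

int main() {
    const double throughput_bps = 1e12;   // 1 TB/s sustained readout
    const double rate_hz        = 50e3;   // 50 kHz Pb-Pb interaction rate
    std::cout << "average data per interaction: "
              << throughput_bps / rate_hz / 1e6 << " MB\n";  // -> 20 MB
}
```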
Journal of Physics: Conference Series | 2012
F. Carena; S. Chapeland; V. Chibante Barroso; F. Costa; E. Dénes; R. Divià; U. Fuchs; A. Grigore; T. Kiss; W. Rauch; G. Rubin; G. Simonetti; C. Soos; A. Telesca; P. Vande Vyvre; B. von Haller
In November 2009, after 15 years of design and installation, the ALICE experiment started to detect and record the first collisions produced by the LHC. It has been collecting hundreds of millions of events ever since, with both proton and heavy-ion collisions. The future scientific programme of ALICE has been refined following the first year of data taking. The physics targeted beyond 2018 will be the study of rare signals. Several detectors will be upgraded, modified, or replaced to prepare ALICE for future physics challenges. An upgrade of the triggering and readout systems is also required to accommodate the needs of the upgraded ALICE and to better select the data of the rare physics channels. The ALICE upgrade will have major implications for the detector electronics and controls, data acquisition, event triggering, and offline computing and storage systems. Moreover, the experience accumulated during more than two years of operation has also led to new requirements for the control software. We review all these new needs and the current R&D activities to address them. Several papers at the same conference present some elements of the ALICE online system in more detail.
Journal of Physics: Conference Series | 2011
M. Boccioli; F. Carena; S. Chapeland; V. Chibante Barroso; M. Lechman; A. Jusko; O. Pinazza
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). It includes 18 different sub-detectors and 5 online systems, each one made of many different components and developed by different teams inside the collaboration. The operation of a large experiment over several years to collect billions of events acquired in well-defined conditions requires predictability and repeatability of the experiment configuration. The logistics of the operation is also a major issue, and it is mandatory to reduce the size of the shift crew needed to operate the experiment. Appropriate software tools are therefore needed to automate daily operations, minimizing human errors and maximizing the data-taking time. The ALICE Configuration Tool (ACT) is ALICE's first step to achieve a high level of automation, implementing automatic configuration and calibration of the sub-detectors and online systems. This presentation describes the goals and architecture of the ACT, the web-based Human Interface, and the commissioning performed before the start of the collisions. It also reports on the first experiences with real use in daily operations, and finally it presents the road-map for future developments.
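The automation idea can be illustrated with a hypothetical sketch: named configuration recipes applied uniformly across sub-detectors and online systems, so that a single shifter action replaces many manual steps. Class and recipe names are invented for the example and do not reflect the ACT's actual design.

```cpp
// Hypothetical sketch of recipe-based configuration automation.
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Recipe { std::map<std::string, std::string> params; };

class ConfigurationTool {
    std::map<std::string, Recipe> recipes_;       // e.g. "PHYSICS", "COSMICS"
public:
    void define(const std::string& name, Recipe r) {
        recipes_[name] = std::move(r);
    }
    void apply(const std::string& name, const std::vector<std::string>& systems) {
        const Recipe& r = recipes_.at(name);
        for (const auto& sys : systems)           // same recipe, every system
            for (const auto& [k, v] : r.params)
                std::cout << sys << ": set " << k << " = " << v << '\n';
    }
};

int main() {
    ConfigurationTool act;
    act.define("PHYSICS", {{{"trigger", "minimum-bias"}, {"hv", "nominal"}}});
    act.apply("PHYSICS", {"TPC", "ITS", "DAQ"});  // one action, many systems
}
```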
IEEE-NPSS Real-Time Conference | 2016
V. Chibante Barroso; U. Fuchs; A. Wegrzynek
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). ALICE has been successfully collecting physics data of Run 2 since spring 2015. In parallel, preparations for a major upgrade, called O2 (Online-Offline) and scheduled for the Long Shutdown 2 in 2019-2020, are being made. One of the major requirements is the capacity to transport data between the so-called FLPs (First Level Processors), equipped with readout cards, and the EPNs (Event Processing Nodes), performing data aggregation, frame building, and partial reconstruction. It is foreseen to have 268 FLPs dispatching data to 1500 EPNs with an average output of 20 Gb/s each. Overall, the O2 processing system will operate at terabits per second of throughput while handling millions of concurrent connections. To meet these requirements, the software and hardware layers of the new system need to be fully evaluated. In order to achieve a high performance-to-cost ratio, three networking technologies (Ethernet, InfiniBand, and Omni-Path) were benchmarked on Intel and IBM platforms. The core of the new transport layer will be based on a message queue library that supports push-pull and request-reply communication patterns and multipart messages. ZeroMQ and nanomsg are being evaluated as candidates and were tested in detail over the selected network technologies. This paper describes the benchmark programs and setups used during the tests, the significance of tuned kernel parameters, the network driver configuration, and the tuning of multi-core, multi-CPU, and NUMA (Non-Uniform Memory Access) architectures. It presents, compares, and comments on the final results. Finally, it indicates the most efficient network technology and message queue library pair and provides an evaluation of the CPU and memory resources needed to handle the foreseen traffic.
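A minimal push-pull throughput test in the spirit of the benchmarks described, using the ZeroMQ C API (one of the two candidate libraries). Message size, count, and endpoint are arbitrary choices; a real benchmark would also pin threads and tune kernel parameters as the paper discusses.

```cpp
// Minimal ZeroMQ push-pull throughput sketch (link with -lzmq -pthread).
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>
#include <zmq.h>

int main() {
    const int msg_size = 1 << 16, msg_count = 10000;   // 64 KiB x 10k messages
    void* ctx = zmq_ctx_new();

    std::thread consumer([&] {                         // PULL side: receive all
        void* pull = zmq_socket(ctx, ZMQ_PULL);
        zmq_bind(pull, "tcp://127.0.0.1:5555");
        std::vector<char> buf(msg_size);
        for (int i = 0; i < msg_count; ++i)
            zmq_recv(pull, buf.data(), buf.size(), 0);
        zmq_close(pull);
    });

    void* push = zmq_socket(ctx, ZMQ_PUSH);            // PUSH side: send all
    zmq_connect(push, "tcp://127.0.0.1:5555");
    std::vector<char> msg(msg_size, 'x');
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < msg_count; ++i)
        zmq_send(push, msg.data(), msg.size(), 0);
    consumer.join();                                   // all messages delivered
    std::chrono::duration<double> dt =
        std::chrono::steady_clock::now() - t0;
    std::printf("throughput: %.2f Gbit/s\n",
                8.0 * msg_size * msg_count / 1e9 / dt.count());
    zmq_close(push);
    zmq_ctx_destroy(ctx);
}
```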