F. Carena
CERN
Publication
Featured research published by F. Carena.
Nuclear Physics | 1980
D Allasia; C. Angelini; P Bagnaia; Giustina Baroni; J H Bartley; G. Bertrand-Coremans; V. Bisi; A C Breslin; E.H.S. Burhop; F. Carena; R. Casali; Guido Ciapetti; M Conversi; D.H Davis; S. Di Liberto; R Fantechi; M. L. Ferrer; C Franzinetti; D. Gamba; L Godfrey; D Keane; E. Lamanna; A Marzari; F. Marzano; A Montwill; A. Nappi; C. Palazzi-Cerrina; R Pazzi; S. Petrera; G. Pierazzini
The decay of charmed particles produced by high-energy neutrinos has been studied by an experiment using simultaneously emulsion, bubble chamber and counter techniques. Eight charmed-particle candidates, 5 positively charged and 3 neutral, have been found in the emulsion, where their production and decay have been directly observed. One of these events is identified as a Λc⁺ baryon of mass 2.26 ± 0.02 GeV/c² which undergoes the decay Λc⁺ → pK⁻π⁺ after a proper time of (7.3 ± 0.1) × 10⁻¹³ s. A statistical analysis of the other observed decays leads to the mean-life values τ⁺ = (2.5 +2.2/−1.1) × 10⁻¹³ s for the sample of charged particles, enriched by a similar event found in a previous experiment, and τ⁰ = (0.53 +0.57/−0.25) × 10⁻¹³ s for the sample of 3 neutral particles. The former value is only slightly affected by including the Λc⁺ event in the sample or excluding that of the previous experiment.
Journal of Physics: Conference Series | 2010
B. von Haller; F. Roukoutakis; S. Chapeland; V. Altini; F. Carena; V. Chibante Barroso; F. Costa; R. Divià; U. Fuchs; I. Makhlyueva; K Schossmaier; C. Soos; P. Vande Vyvre
ALICE is one of the four experiments installed at the CERN Large Hadron Collider (LHC), specially designed for the study of heavy-ion collisions. The online Data Quality Monitoring (DQM) is an important part of the data acquisition (DAQ) software. It involves the online gathering of monitored data, its analysis by user-defined algorithms, and its visualization. This paper presents the final design, as well as the latest and upcoming features, of AMORE (Automatic MonitoRing Environment), the ALICE-specific DQM software. It describes the challenges we faced during its implementation, including performance issues, and how we tested and handled them, in particular by using a scalable and robust publish-subscribe architecture. We also review the ongoing and growing adoption of this tool within the ALICE collaboration and the measures taken to develop, in synergy with their respective teams, efficient monitoring modules for the sub-detectors. The packaging and release procedure needed by such a distributed framework is also described. We finally give an overview of the wide range of uses of this framework, and review our own experience, before and during the LHC start-up, of monitoring data quality on both the sub-detector and DAQ sides in a real-world and challenging environment.
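The publish-subscribe pattern mentioned in the abstract decouples the processes that produce monitoring objects from the clients that display them. The following is a minimal, hypothetical sketch of that idea in Python; the class and method names are illustrative, not the AMORE API:

```python
from collections import defaultdict

class MonitorBroker:
    """Toy publish-subscribe broker: monitoring agents publish objects
    under a topic; display clients subscribe with a callback."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, obj):
        # Deliver the published object to every subscriber of the topic.
        for callback in self._subscribers[topic]:
            callback(obj)

# Usage: a DQM agent publishes a histogram summary; a client receives it.
broker = MonitorBroker()
received = []
broker.subscribe("TPC/clusters", received.append)
broker.publish("TPC/clusters", {"entries": 1200, "mean": 3.4})
```

The design choice illustrated here is the one the abstract credits for scalability: publishers never block on, or even know about, the number of attached clients.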
Journal of Physics: Conference Series | 2010
S. Chapeland; V. Altini; F. Carena; V. Chibante Barroso; F. Costa; R. Divià; U. Fuchs; I. Makhlyueva; F. Roukoutakis; K Schossmaier; C. Soos; P. Vande Vyvre; B. von Haller
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). Specific calibration tasks are performed regularly for each of the 18 ALICE sub-detectors in order to achieve the most accurate physics measurements. These procedures involve event analysis in a wide range of experimental conditions, spanning various trigger types, data throughputs, electronics settings, and algorithms, both during short sub-detector standalone runs and long global physics runs. A framework was designed to collect statistics and compute some of the calibration parameters directly online, using the resources of the Data Acquisition System (DAQ) and benefiting from its inherently parallel architecture to process events. This system has been used at the experimental area for one year and includes more than 30 calibration routines in production. This paper describes the framework architecture and the synchronization mechanisms involved at the level of the Experiment Control System (ECS) of ALICE. The software libraries interfacing detector algorithms (DAs) to the online data flow, configuration database, experiment logbook, and offline system are reviewed. The test protocols followed to integrate and validate each sub-detector component are also discussed, including the automatic build system and the validation procedures used to ensure a smooth deployment. The offline post-processing and archiving of the DA results is covered in a separate paper.
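Computing calibration parameters online across a parallel DAQ farm amounts to accumulating per-node statistics that are later merged. The sketch below illustrates the pattern with a pedestal (per-channel mean) calculation; the class and method names are hypothetical, not the actual DA interface:

```python
class PedestalAccumulator:
    """Toy detector algorithm: accumulates per-channel ADC sums over
    events so the mean (pedestal) can be computed after merging the
    partial results from several DAQ nodes."""

    def __init__(self, n_channels):
        self.n = 0
        self.sums = [0.0] * n_channels

    def process_event(self, adc_values):
        self.n += 1
        for ch, v in enumerate(adc_values):
            self.sums[ch] += v

    def merge(self, other):
        # Combine statistics from a node that processed a disjoint event subset.
        self.n += other.n
        self.sums = [a + b for a, b in zip(self.sums, other.sums)]

    def pedestals(self):
        return [s / self.n for s in self.sums]

# Two "nodes" each process part of the events, then results are merged.
a, b = PedestalAccumulator(2), PedestalAccumulator(2)
a.process_event([10, 20]); a.process_event([12, 22])
b.process_event([11, 21]); b.process_event([11, 21])
a.merge(b)
```

Because `merge` is associative, the same code works whether results are combined pairwise or collected centrally, which is what makes the inherently parallel DAQ architecture usable for calibration.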
Journal of Instrumentation | 2015
F. Carena; V. Chibante Barroso; F. Costa; S. Chapeland; C. Delort; E. Dénes; R. Divià; U. Fuchs; A. Grigore; C. Ionita; T. Kiss; G. Simonetti; C. Soos; A. Telesca; P. Vande Vyvre; B. von Haller
ALICE (A Large Ion Collider Experiment) is the detector system at the LHC (Large Hadron Collider) that studies the behaviour of strongly interacting matter and the quark-gluon plasma. The information sent by the sub-detectors composing ALICE is read out by DATE (Data Acquisition and Test Environment), the ALICE data acquisition software, using hundreds of multi-mode optical links called DDLs (Detector Data Links). To cope with the higher luminosity of the LHC, the bandwidth of the DDL links will be upgraded in 2015. This paper describes the evolution of the DDL protocol from 2 to 6 Gbit/s.
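As a back-of-the-envelope illustration of what the 2 to 6 Gbit/s upgrade means for payload throughput, the function below converts a serial line rate into a usable byte rate. The 8b/10b-style coding overhead is an assumption for illustration only, not a statement about the actual DDL encoding:

```python
def payload_rate_mb_s(line_rate_gbit_s, coding_efficiency=0.8):
    """Usable payload in MB/s for a serial optical link, assuming
    8b/10b-style line coding (80% efficiency, an illustrative figure)
    and ignoring protocol framing overhead."""
    bits_per_s = line_rate_gbit_s * 1e9 * coding_efficiency
    return bits_per_s / 8 / 1e6

old = payload_rate_mb_s(2)   # pre-upgrade DDL at 2 Gbit/s
new = payload_rate_mb_s(6)   # upgraded DDL at 6 Gbit/s
```

Under these assumptions the upgrade triples the per-link payload bandwidth, from roughly 200 MB/s to roughly 600 MB/s.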
Journal of Physics: Conference Series | 2010
R. Divià; U. Fuchs; I. Makhlyueva; P. Vande Vyvre; V. Altini; F. Carena; S. Chapeland; V. Chibante Barroso; F. Costa; F. Roukoutakis; K Schossmaier; C. Soos; B. von Haller
The ALICE (A Large Ion Collider Experiment) Data Acquisition (DAQ) system has the unprecedented requirement to ensure a very high-volume, sustained data stream between the ALICE detector and the Permanent Data Storage (PDS) system, which is used as the main data repository for event processing and offline computing. The key component accomplishing this task is the Transient Data Storage System (TDS), a set of data storage elements with associated hardware and software components. It supports raw data collection, conversion of the data into a format suitable for subsequent high-level analysis, storage of the result using highly parallelized architectures, access via a cluster file system capable of creating high-speed partitions through its affinity feature, and transfer to the final destination via dedicated data links. We describe the methods and components used to validate, test, implement, operate, and monitor the ALICE online data storage system and the way it has been used in the early days of commissioning and operation of the ALICE detector. We also introduce the developments needed from next year on, when the ALICE Data Acquisition System will shift from the requirements of the test and commissioning phase to those imposed by long-duration data-taking periods alternating with shorter validation and maintenance tasks needed to operate the ALICE experiment adequately.
Archive | 2001
J. C. Marin; J. Sulyan; C. Soos; P Van de Vyvre; Peter Csato; R Divià; Alessandro Vascotto; F. Carena; K Schossmaier; T. Kiss; E. Dénes
The PCI-based Readout Receiver Card (PRORC) is the primary interface between the detector data link (an optical device called DDL) and the front-end computers (PC running Linux) of the ALICE data acquisition system. This document describes the prototype architecture of the PRORC hardware and firmware, and of the PC software. The board contains a PCI interface circuit and an FPGA. The firmware in the FPGA is responsible for all the concurrent activities of the board, such as reading the DDL and controlling the DMA. The co-operation between the firmware and the PC software allows autonomous data transfer into the PC memory with little CPU assistance. The system achieves a sustained transfer rate of 100 MB/s, meeting the design specifications and the ALICE requirements.
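The autonomous firmware/software co-operation described in the abstract is essentially a descriptor-ring handshake: software posts free memory pages, firmware fills them via DMA and hands them back, so the CPU only posts pages and collects results. A minimal, hypothetical simulation of that handshake (not the actual PRORC register interface):

```python
from collections import deque

class DescriptorRing:
    """Toy model of DMA descriptor exchange: software pushes free-page
    descriptors; the 'firmware' side pops one per data fragment and
    returns the filled page, keeping CPU involvement minimal."""

    def __init__(self):
        self.free_pages = deque()
        self.filled_pages = deque()

    def post_page(self, page_id):
        # Done by PC software: offer an empty memory page to the board.
        self.free_pages.append(page_id)

    def dma_write(self, fragment):
        # Done by FPGA firmware: consume a free page, deliver the data.
        page_id = self.free_pages.popleft()
        self.filled_pages.append((page_id, fragment))

ring = DescriptorRing()
for page in range(3):              # software pre-posts free pages
    ring.post_page(page)
ring.dma_write(b"event-1")         # firmware transfers fragments
ring.dma_write(b"event-2")
```

As long as software keeps the free-page queue non-empty, transfers proceed without per-fragment CPU intervention, which is what lets such a design sustain rates like the quoted 100 MB/s.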
Archive | 2004
F. Carena; P Van de Vyvre; J C Martin; S. Chapeland; R Divià; Alessandro Vascotto; C. Soos; K Schossmaier; T. Kiss; E. Dénes
The ALICE data-acquisition system will use more than 400 optical links, called Detector Data Links (DDLs) to transfer the data from the detector electronics directly into the PC memory through a PCI adapter: the DAQ Read-out Receiver Card (D-RORC). The D-RORC includes two DDL interfaces, which can either receive detector data, or copy and transfer them to the High-Level Trigger system. Using the 64-bit PCI interface IP core, the D-RORC offers more than 400 MB/s bandwidth. This document describes the hardware and firmware architecture, the co-operation with the software running on the PC, as well as the performance of the D-RORC.
Journal of Physics: Conference Series | 2014
A. Telesca; F. Carena; S. Chapeland; V. Chibante Barroso; F. Costa; E. Dénes; R. Divià; U. Fuchs; A. Grigore; C. Ionita; C. Delort; G. Simonetti; C. Soos; P. Vande Vyvre; B. von Haller
ALICE (A Large Ion Collider Experiment) is a heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The ALICE Data-AcQuisition (DAQ) system handles the data flow from the sub-detector electronics to the permanent data storage in the CERN computing centre. The DAQ farm consists of about 1000 devices of many different types, ranging from directly accessible machines to storage arrays and custom optical links. The system performance monitoring tool used during LHC Run 1 will be replaced by a new tool for Run 2. This paper shows the results of an evaluation conducted on six publicly available monitoring tools. The evaluation was carried out against selection criteria such as scalability, flexibility, and reliability, as well as data collection methods and display. All the tools have been prototyped and evaluated according to those criteria. We describe the considerations that led to the selection of the Zabbix monitoring tool for the DAQ farm. The results of the tests conducted in the ALICE DAQ laboratory are presented. In addition, the deployment of the software on the DAQ machines is described in terms of the metrics collected and the data collection methods. We illustrate how remote nodes are monitored with Zabbix using SNMP-based agents and how DAQ-specific metrics are retrieved and displayed. We also show how the monitoring information is accessed and made available via the graphical user interface and how Zabbix communicates with the other DAQ online systems for notification and reporting.
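The monitoring workflow in the abstract (poll a metric on each remote node, store the value, raise a notification on a threshold breach) can be sketched generically. The poller below is a hypothetical stand-in, not the Zabbix agent or an SNMP library API:

```python
def poll_nodes(read_metric, nodes, threshold):
    """Poll one metric on each node and return (values, alerts).
    read_metric stands in for an SNMP GET or monitoring-agent query;
    any value above threshold produces an alert entry."""
    values, alerts = {}, []
    for node in nodes:
        v = read_metric(node)
        values[node] = v
        if v > threshold:
            alerts.append((node, v))
    return values, alerts

# Usage with a fake metric source (e.g. CPU load per DAQ node).
fake_load = {"daq01": 0.4, "daq02": 0.9}
values, alerts = poll_nodes(fake_load.get, ["daq01", "daq02"], threshold=0.8)
```

In a real deployment the `read_metric` callable would be the transport-specific part (SNMP, agent protocol), while the value store and alerting logic stay the same, which is the separation tools like Zabbix provide.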
Journal of Physics: Conference Series | 2008
T Anticic; Vasco Miguel Chibante Barroso; F. Carena; S. Chapeland; O Cobanoglu; E Dénes; R. Divià; U. Fuchs; T Kiss; I. Makhlyueva; F Ozok; F. Roukoutakis; K Schossmaier; C. Soos; Pierre Vande Vyvre; S Vergara
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). A flexible, large-bandwidth Data Acquisition System (DAQ) has been designed and deployed to collect sufficient statistics in the short running time foreseen per year for heavy ions and to accommodate the very different requirements originating from the 18 sub-detectors. The Data Acquisition and Test Environment (DATE) is the software framework handling the data from the detector electronics up to the mass storage. This paper reviews the DAQ software and hardware architecture, including the latest features of the final design, such as the handling of the numerous calibration procedures in a common framework. We also discuss the large-scale tests conducted on the real hardware to assess the standalone DAQ performance, its interfaces with the other online systems, and the extensive commissioning performed in order to be ready for cosmics data taking scheduled to start in November 2007. The test protocols followed to integrate and validate each sub-detector with the DAQ and Trigger hardware, synchronized by the Experiment Control System, are described. Finally, we give an overview of the experiment logbook and some operational aspects of the deployment of our computing facilities. The implementation of a Transient Data Storage able to cope with the 1.25 GB/s recorded by the event-building machines and the data quality monitoring framework are covered in separate papers.
ieee-npss real-time conference | 2009
A. Telesca; P. Vande Vyvre; R. Divià; V. Altini; F. Carena; S. Chapeland; V. Chibante Barroso; F. Costa; M. Frauman; Irina Makhlyueva; O. Rademakers-Di Rosa; D. Rodriguez Navarro; F. Roukoutakis; K Schossmaier; C. Soos; B. von Haller
ALICE (A Large Ion Collider Experiment) is one of the four experiments at the CERN LHC (Large Hadron Collider) and is dedicated to the study of nucleus-nucleus interactions. Its data acquisition system has to record the steady stream of very large events resulting from central collisions, with an ability to select and record rare cross-section processes. These requirements result in an aggregate event-building bandwidth of up to 2.5 GB/s and a storage capability of up to 1.25 GB/s, giving a total of more than 1 PB of data every year. This makes the performance of the mass storage devices a dominant factor for the overall system behaviour and throughput. In this paper, we present an analysis of the performance of the storage system used in the ALICE experiment by studying the impact of different configuration parameters on the system throughput. The aim of this analysis is to determine the storage configuration that gives the best system performance. In particular, we show the influence of file and block size on the writing and reading rates, and we present the relative performance of a clustered file system (StorNext®) and a regular journaled file system (XFS) based on disk arrays in a Fibre Channel storage area network (SAN). We also present a comparative analysis of different parametrizations of Redundant Arrays of Inexpensive Disks (RAID), varying the parity level and the number of disks in each array. Finally, we conclude with the aggregate performance when concurrent writing and reading streams share the system.
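A simplified version of the block-size study described above can be reproduced in a few lines: write the same amount of data with different block sizes and compare the resulting rates. This is a generic sketch for a local scratch file, not the StorNext/XFS SAN configuration used in the paper:

```python
import os
import tempfile
import time

def write_rate_mb_s(total_mb, block_kb):
    """Write total_mb of zero-filled data in block_kb-sized chunks to a
    temporary file, fsync it, and return the achieved rate in MB/s."""
    block = b"\0" * (block_kb * 1024)
    n_blocks = total_mb * 1024 // block_kb
    fd, path = tempfile.mkstemp()
    try:
        t0 = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(n_blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # force data to the device before timing ends
        return total_mb / (time.perf_counter() - t0)
    finally:
        os.remove(path)

# Compare a few block sizes (KB) for the same total volume.
rates = {bs: write_rate_mb_s(total_mb=8, block_kb=bs) for bs in (4, 64, 1024)}
```

On most systems larger blocks amortize per-call overhead and improve throughput, which is the same effect the paper quantifies at SAN scale; absolute numbers depend entirely on the hardware underneath.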