
Publications


Featured research published by S. Chapeland.


Journal of Grid Computing | 2004

Autonomic Management of Large Clusters and Their Integration into the Grid

Thomas Röblitz; Florian Schintke; Alexander Reinefeld; O. Barring; Maite Barroso Lopez; German Cancio; S. Chapeland; Karim Chouikh; Lionel Cons; Piotr Poznański; Philippe Defert; Jan Iven; Thorsten Kleinwort; B. Panzer-Steindel; Jaroslaw Polok; Catherine Rafflin; Alan Silverman; T.J. Smith; Jan Van Eldik; David Front; Massimo Biasotto; Cristina Aiftimiei; Enrico Ferro; Gaetano Maron; Andrea Chierici; Luca Dell’agnello; Marco Serra; M. Michelotto; Lord Hess; V. Lindenstruth

We present a framework for the co-ordinated, autonomic management of multiple clusters in a compute center and their integration into a Grid environment. Site autonomy and the automation of administrative tasks are prime aspects in this framework. The system behavior is continuously monitored in a steering cycle and appropriate actions are taken to resolve any problems. All presented components have been implemented in the course of the EU project DataGrid: the Lemon monitoring components, the FT fault-tolerance mechanism, the quattor system for software installation and configuration, the RMS job and resource management system, and the Gridification scheme that integrates clusters into the Grid.
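To make the steering cycle concrete, here is a minimal Python sketch of a monitor-analyze-act loop of the kind the abstract describes; the metric names, thresholds, and recovery actions are illustrative placeholders, not the actual Lemon sensors or FT recovery rules.

import random
import time

# Hypothetical thresholds; the real framework uses site-defined sensor rules.
THRESHOLDS = {"cpu_load": 10.0, "disk_used": 0.95}

def collect_metrics(node):
    # Stand-in for a monitoring sensor; real sensors read /proc, SMART, etc.
    return {"cpu_load": random.uniform(0.0, 12.0),
            "disk_used": random.uniform(0.5, 1.0)}

def decide_actions(metrics):
    # Analysis step of the cycle: flag every metric above its threshold.
    return [m for m, v in metrics.items() if v > THRESHOLDS[m]]

def execute_action(node, metric):
    # Stand-in for an automated recovery action (restart a daemon, drain a node).
    print(f"{node}: corrective action for {metric}")

def steering_cycle(nodes, cycles=3, period_s=1.0):
    # Continuous monitoring loop, bounded here so the example terminates.
    for _ in range(cycles):
        for node in nodes:
            for metric in decide_actions(collect_metrics(node)):
                execute_action(node, metric)
        time.sleep(period_s)

steering_cycle(["lxb001", "lxb002"])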


Journal of Physics: Conference Series | 2010

The ALICE data quality monitoring

B. von Haller; F. Roukoutakis; S. Chapeland; V. Altini; F. Carena; V. Chibante Barroso; F. Costa; R. Divià; U. Fuchs; I. Makhlyueva; K. Schossmaier; C. Soos; P. Vande Vyvre

ALICE is one of the four experiments installed at the CERN Large Hadron Collider (LHC), especially designed for the study of heavy-ion collisions. The online Data Quality Monitoring (DQM) is an important part of the data acquisition (DAQ) software. It involves the online gathering, the analysis by user-defined algorithms and the visualization of monitored data. This paper presents the final design, as well as the latest and upcoming features, of the ALICE-specific DQM software called AMORE (Automatic MonitoRing Environment). It describes the challenges we faced during its implementation, including performance issues, and how we tested and addressed them, in particular by using a scalable and robust publish-subscribe architecture. We also review the ongoing and increasing adoption of this tool within the ALICE collaboration and the measures taken to develop, in synergy with their respective teams, efficient monitoring modules for the sub-detectors. The related packaging and release procedure needed by such a distributed framework is also described. We finally give an overview of the wide range of uses of this framework, and we review our own experience, before and during the LHC start-up, of monitoring data quality on both the sub-detector and the DAQ side in a real-world and challenging environment.
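As a rough illustration of the publish-subscribe pattern the abstract credits for AMORE's scalability, the following self-contained Python sketch fans monitor objects out from a publisher to subscriber callbacks; the real system distributes detector agents and GUI clients across machines, and the class and topic names here are invented for the example.

import time
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class MonitorObject:
    # A minimal stand-in for a published monitoring object.
    name: str
    payload: dict
    timestamp: float = field(default_factory=time.time)

class Broker:
    # In-process broker; AMORE's actual transport is distributed.
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, obj):
        # Fan the object out to every subscriber of the topic.
        for cb in self._subscribers[topic]:
            cb(obj)

broker = Broker()
broker.subscribe("TPC/clusters", lambda o: print(o.name, o.payload))
broker.publish("TPC/clusters", MonitorObject("nClusters", {"mean": 412.5}))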


Journal of Physics: Conference Series | 2010

Online processing in the ALICE DAQ: The detector algorithms

S. Chapeland; V. Altini; F. Carena; V. Chibante Barroso; F. Costa; R. Divià; U. Fuchs; I. Makhlyueva; F. Roukoutakis; K. Schossmaier; C. Soos; P. Vande Vyvre; B. von Haller

ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). Some specific calibration tasks are performed regularly for each of the 18 ALICE sub-detectors in order to achieve the most accurate physics measurements. These procedures involve event analysis under a wide range of experimental conditions, involving various trigger types, data throughputs, electronics settings, and algorithms, both during short sub-detector standalone runs and long global physics runs. A framework was designed to collect statistics and compute some of the calibration parameters directly online, using resources of the Data Acquisition System (DAQ) and benefiting from its inherent parallel architecture to process events. This system has been used at the experimental area for one year and includes more than 30 calibration routines in production. This paper describes the framework architecture and the synchronization mechanisms involved at the level of the Experiment Control System (ECS) of ALICE. The software libraries interfacing detector algorithms (DAs) to the online data flow, configuration database, experiment logbook, and offline system are reviewed. The test protocols followed to integrate and validate each sub-detector component are also discussed, including the automatic build system and validation procedures used to ensure a smooth deployment. The offline post-processing and archiving of the DA results is covered in a separate paper.
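As a hedged sketch of the per-event contract a detector algorithm (DA) might implement, the Python class below accumulates statistics event by event and derives calibration values at end of run; the real DA interface in the ALICE DAQ software has different names and bindings, so PedestalDA and its process_event/finalize hooks are purely illustrative.

class PedestalDA:
    # Illustrative DA: derives per-channel pedestals from accumulated ADC sums.

    def __init__(self, n_channels):
        self.count = 0
        self.sums = [0.0] * n_channels

    def process_event(self, adc_values):
        # Called once per event by the framework's parallel event loop.
        self.count += 1
        for ch, adc in enumerate(adc_values):
            self.sums[ch] += adc

    def finalize(self):
        # End-of-run hook: pedestal = mean ADC per channel.
        return [s / self.count for s in self.sums] if self.count else self.sums

da = PedestalDA(n_channels=4)
for event in ([10.0, 11.0, 9.5, 10.2], [10.4, 10.8, 9.7, 10.0]):
    da.process_event(event)
print(da.finalize())  # [10.2, 10.9, 9.6, 10.1]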


Journal of Instrumentation | 2015

DDL, the ALICE data transmission protocol and its evolution from 2 to 6 Gb/s

F. Carena; V. Chibante Barroso; F. Costa; S. Chapeland; C. Delort; E. Dénes; R. Divià; U. Fuchs; A. Grigore; C. Ionita; T. Kiss; G. Simonetti; C. Soos; A. Telesca; P. Vande Vyvre; B. von Haller

ALICE (A Large Ion Collider Experiment) is the detector system at the LHC (Large Hadron Collider) that studies the behaviour of strongly interacting matter and the quark-gluon plasma. The data sent by the sub-detectors composing ALICE are read out by DATE (Data Acquisition and Test Environment), the ALICE data acquisition software, using hundreds of multi-mode optical links called DDLs (Detector Data Links). To cope with the higher luminosity of the LHC, the bandwidth of the DDL links will be upgraded in 2015. This paper describes the evolution of the DDL protocol from 2 to 6 Gbit/s.
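A back-of-the-envelope check, in Python, of what the upgrade buys in payload bandwidth, assuming 8b/10b line coding at both rates (an assumption made for the sake of the example; the actual DDL framing adds further protocol overhead not modelled here):

def payload_MBps(line_rate_gbps, coding_efficiency=8 / 10):
    # bits/s on the wire -> usable bits/s -> bytes/s -> MB/s
    return line_rate_gbps * 1e9 * coding_efficiency / 8 / 1e6

for rate in (2.0, 6.0):
    print(f"{rate} Gb/s link -> ~{payload_MBps(rate):.0f} MB/s payload")
# 2.0 Gb/s -> ~200 MB/s; 6.0 Gb/s -> ~600 MB/s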


Journal of Physics: Conference Series | 2010

The ALICE online data storage system

R. Divià; U. Fuchs; I. Makhlyueva; P. Vande Vyvre; V. Altini; F. Carena; S. Chapeland; V. Chibante Barroso; F. Costa; F. Roukoutakis; K. Schossmaier; C. Soos; B. von Haller

The ALICE (A Large Ion Collider Experiment) Data Acquisition (DAQ) system has the unprecedented requirement to ensure a very high volume, sustained data stream between the ALICE detector and the Permanent Data Storage (PDS) system, which is used as the main data repository for event processing and Offline Computing. The key component to accomplish this task is the Transient Data Storage System (TDS), a set of data storage elements with the associated hardware and software components, which supports raw data collection, its conversion into a format suitable for subsequent high-level analysis, the storage of the result using highly parallelized architectures, its access via a cluster file system capable of creating high-speed partitions via its affinity feature, and its transfer to the final destination via dedicated data links. We describe the methods and the components used to validate, test, implement, operate, and monitor the ALICE online data storage system and the way it has been used in the early days of commissioning and operation of the ALICE detector. We will also introduce the future developments needed from next year, when the ALICE Data Acquisition System will shift its requirements from those associated with the test and commissioning phase to those imposed by long-duration data-taking periods alternating with the shorter validation and maintenance tasks needed to adequately operate the ALICE experiment.
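As a loose illustration of the highly parallelized writing the TDS description mentions, this small Python sketch round-robins event chunks across several storage elements using a thread pool; the directory layout, dispatch policy, and file naming are assumptions for the example, not the actual TDS or cluster-file-system behaviour.

import os
import tempfile
from concurrent.futures import ThreadPoolExecutor
from itertools import cycle

# Three local directories stand in for independent storage elements.
storage_elements = [tempfile.mkdtemp(prefix=f"tds{i}_") for i in range(3)]

def write_chunk(element, seq, data):
    # One parallel writer: lands a raw event chunk on its storage element.
    path = os.path.join(element, f"event_{seq:06d}.raw")
    with open(path, "wb") as f:
        f.write(data)
    return path

with ThreadPoolExecutor(max_workers=len(storage_elements)) as pool:
    targets = cycle(storage_elements)
    futures = [pool.submit(write_chunk, next(targets), i, os.urandom(1024))
               for i in range(9)]
    for fut in futures:
        print(fut.result())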


IEEE Nuclear Science Symposium | 2007

The ALICE-LHC online Data Quality Monitoring framework

S. Chapeland; F. Roukoutakis

ALICE is one of the experiments installed at the CERN Large Hadron Collider, dedicated to the study of heavy-ion collisions. The final ALICE data acquisition system has been installed and is being used for the testing and commissioning of detectors. Data Quality Monitoring (DQM) is an important aspect of the online procedures for a HEP experiment. In this presentation we give an overview of the commissioning and the integration of ALICE's AMORE (Automatic MOnitoRing Environment), a custom-written distributed application aimed at providing DQM services on a large, experiment-wide scale.


Archive | 2004

The ALICE Data-Acquisition Read-out Receiver card

F. Carena; P. Vande Vyvre; J. C. Martin; S. Chapeland; R. Divià; Alessandro Vascotto; C. Soos; K. Schossmaier; T. Kiss; E. Dénes

The ALICE data-acquisition system will use more than 400 optical links, called Detector Data Links (DDLs), to transfer the data from the detector electronics directly into the PC memory through a PCI adapter: the DAQ Read-out Receiver Card (D-RORC). The D-RORC includes two DDL interfaces, which can either receive detector data or copy and transfer them to the High-Level Trigger system. Using the 64-bit PCI interface IP core, the D-RORC offers more than 400 MB/s of bandwidth. This document describes the hardware and firmware architecture, the co-operation with the software running on the PC, as well as the performance of the D-RORC.
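A quick sanity check on the quoted figure: a 64-bit PCI bus moves 8 bytes per clock, so at an assumed 66 MHz clock (the clock rate is not stated above) the theoretical ceiling is about 528 MB/s, consistent with the more than 400 MB/s achieved once protocol and arbitration overheads are subtracted.

bus_width_bytes = 64 // 8   # 64-bit PCI transfers 8 bytes per clock
clock_hz = 66e6             # assumption: 66 MHz PCI clock
print(bus_width_bytes * clock_hz / 1e6, "MB/s theoretical peak")  # 528.0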


Computer Physics Communications | 2001

The ALICE DAQ: current status and future challenges

M. Arregui; S. Chapeland; P. Csato; E. Dénes; R. Divià; B. Eged; P. Jovanovic; T. Kiss; V. Lindenstruth; Z. Meggyesi; I. Novak; F. Rademakers; D. Roehrich; G. Rubin; David Tarjan; N. Toth; K. Schossmaier; B. Skaali; C. Soos; R. Stock; J. Sulyan; P. Vande Vyvre; Alessandro Vascotto; O. Villalobos Baillie; B. Vissy

The ALICE data acquisition system has been designed to support an aggregate event-building bandwidth of up to 2.5 GByte/s and a storage capability of up to 1.25 GByte/s to mass storage. A general framework called the ALICE Data Acquisition Test Environment (DATE) system has been developed as a basis for prototyping the components of the DAQ. DATE supports a wide spectrum of configurations, from simple systems to more complex systems with multiple detectors and multiple event builders. Prototypes of several key components of the ALICE DAQ have been developed and integrated with the DATE system, such as the ALICE Detector Data Link, the online data monitoring from ROOT, and the interface to the mass storage systems. Combined tests of several of these components are being pursued during the ALICE Data Challenges. The architecture of the ALICE DAQ system will be presented together with the current status of the different prototypes. The recent addition of a Transition Radiation Detector (TRD) to ALICE has required a revision of the requirements and the architecture of the DAQ, which will allow for a higher level of data selection. These new opportunities and implementation challenges will also be presented.


Journal of Physics: Conference Series | 2014

System performance monitoring of the ALICE Data Acquisition System with Zabbix

A. Telesca; F. Carena; S. Chapeland; V. Chibante Barroso; F. Costa; E. Dénes; R. Divià; U. Fuchs; A. Grigore; C. Ionita; C. Delort; G. Simonetti; C. Soos; P. Vande Vyvre; B. von Haller

ALICE (A Large Ion Collider Experiment) is a heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The ALICE Data-AcQuisition (DAQ) system handles the data flow from the sub-detector electronics to the permanent data storage in the CERN computing center. The DAQ farm consists of about 1000 devices of many different types, ranging from directly accessible machines to storage arrays and custom optical links. The system performance monitoring tool used during LHC Run 1 will be replaced by a new tool for Run 2. This paper shows the results of an evaluation conducted on six publicly available monitoring tools. The evaluation was carried out taking into account selection criteria such as scalability, flexibility, and reliability, as well as data collection methods and display. All the tools were prototyped and evaluated according to those criteria. We will describe the considerations that led to the selection of the Zabbix monitoring tool for the DAQ farm. The results of the tests conducted in the ALICE DAQ laboratory will be presented. In addition, the deployment of the software on the DAQ machines will be described in terms of metrics collected and data collection methods. We will illustrate how remote nodes are monitored with Zabbix using SNMP-based agents and how DAQ-specific metrics are retrieved and displayed. We will also show how the monitoring information is accessed and made available via the graphical user interface and how Zabbix communicates with the other DAQ online systems for notification and reporting.
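For flavour, here is a minimal Python sketch of pushing a DAQ-specific metric into Zabbix as a trapper item via the standard zabbix_sender utility; the server, host, and item key names below are hypothetical, not those of the ALICE deployment.

import subprocess

def push_metric(host, key, value, server="zabbix.example.org"):
    # zabbix_sender ships with Zabbix: -z is the server, -s the monitored host
    # as registered in the frontend, -k the item key, -o the value to send.
    subprocess.run(
        ["zabbix_sender", "-z", server, "-s", host, "-k", key, "-o", str(value)],
        check=True,
    )

# Hypothetical example (requires a reachable Zabbix server and trapper item):
push_metric("daq-ldc-01", "daq.event_rate", 4321.0)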


Journal of Physics: Conference Series | 2008

Commissioning of the ALICE data acquisition system

T. Anticic; Vasco Miguel Chibante Barroso; F. Carena; S. Chapeland; O. Cobanoglu; E. Dénes; R. Divià; U. Fuchs; T. Kiss; I. Makhlyueva; F. Ozok; F. Roukoutakis; K. Schossmaier; C. Soos; Pierre Vande Vyvre; S. Vergara

ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). A flexible, large-bandwidth Data Acquisition System (DAQ) has been designed and deployed to collect sufficient statistics in the short running time foreseen per year for heavy ions and to accommodate the very different requirements originating from the 18 sub-detectors. The Data Acquisition and Test Environment (DATE) is the software framework handling the data from the detector electronics up to the mass storage. This paper reviews the DAQ software and hardware architecture, including the latest features of the final design, such as the handling of the numerous calibration procedures in a common framework. We also discuss the large-scale tests conducted on the real hardware to assess the standalone DAQ performance, its interfaces with the other online systems, and the extensive commissioning performed in order to be ready for cosmics data taking scheduled to start in November 2007. The test protocols followed to integrate and validate each sub-detector with the DAQ and Trigger hardware, synchronized by the Experiment Control System, are described. Finally, we give an overview of the experiment logbook and some operational aspects of the deployment of our computing facilities. The implementation of a Transient Data Storage able to cope with the 1.25 GB/s recorded by the event-building machines and the data quality monitoring framework are covered in separate papers.
