Publications
Featured research published by Jean-Christophe Garnier.
IEEE-NPSS Real-Time Conference | 2007
Juan Manuel Caicedo Carvajal; Jean-Christophe Garnier; N. Neufeld; R. Schwemmer
The LHCb experiment uses a single, high-performance storage system to serve all kinds of storage needs: home directories, shared areas, raw-data storage and buffer storage for event reconstruction. All these applications are concurrent and require careful optimisation. In particular, for accessing the raw data in read and write mode, a custom lightweight, non-POSIX-compliant file system has been developed. File serving is achieved by running several redundant file servers in an active-active configuration, with high-availability capabilities and good performance. In this paper we describe the design and current architecture of this storage system. We discuss implementation issues and problems we had to overcome during the 18-month run-in period to date. Based on our experience we also discuss the relative advantages and disadvantages of such a system over one composed of several smaller storage systems. We also present performance measurements.
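The paper contains no code; as a hedged illustration of the active-active idea only, a client might cycle through the redundant servers and retry on failure. All names and the protocol stubs below are hypothetical, not the LHCb implementation:

```cpp
#include <cstddef>
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

// Hypothetical client-side handle for one of the redundant file servers.
// The real protocol is custom and non-POSIX; these bodies are placeholders.
struct FileServer {
    std::string address;
    bool healthy = true;
    bool open(const std::string& /*path*/) { return healthy; }
    std::size_t read(void* /*buf*/, std::size_t n) { return n; }
};

// Active-active fail-over: every server can serve every request, so a
// client simply retries a failed operation on the next server in the list.
class FailoverClient {
public:
    explicit FailoverClient(std::vector<FileServer> servers)
        : servers_(std::move(servers)) {}

    std::size_t read(const std::string& path, void* buf, std::size_t n) {
        for (std::size_t attempt = 0; attempt < servers_.size(); ++attempt) {
            FileServer& s = servers_[current_];
            if (s.open(path)) return s.read(buf, n);
            current_ = (current_ + 1) % servers_.size();  // transparent fail-over
        }
        throw std::runtime_error("all file servers unavailable");
    }

private:
    std::vector<FileServer> servers_;
    std::size_t current_ = 0;
};
```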
IEEE-NPSS Real-Time Conference | 2009
F. Alessio; C. Barandela; L. Brarda; O. Callot; M. Frank; Jean-Christophe Garnier; Domenico Galli; C. Gaspar; Z. Guzik; E. van Herwijnen; R. Jacobsson; Beat Jost; A. Mazurov; G. Moine; N. Neufeld; M. Pepe-Altarelli; A. Sambade Varela; R. Schwemmer; P. Somogyi; D. Sonnick; R. Stoica
The LHCb experiment is a hadronic precision experiment at the LHC accelerator, aimed mainly at studying b-physics by profiting from the large b-anti-b production rate at the LHC. The challenge of high trigger efficiency has driven the choice of a readout architecture in which the main event filtering is performed by a software trigger, with access to all detector information, on a processing farm based on commercial multi-core PCs. The readout architecture therefore features only a relatively relaxed hardware trigger with a fixed, short latency, accepting events at 1 MHz out of a nominal proton-collision rate of 30 MHz, and a high-bandwidth readout with event-fragment assembly over Gigabit Ethernet. A fast central system performs the entire synchronization, event labelling and control of the readout, as well as event management including destination control, dynamic load balancing of the readout network and the farm, and handling of special events for calibration and luminosity measurements. The event-filter farm processes the events in parallel and reduces the physics event rate to about 2 kHz; the accepted events are formatted and written to disk before transfer to offline processing. A spy mechanism allows a fraction of the events to be processed and reconstructed for online quality checking. In addition, a 5 Hz subset of the events is sent to offline as an express stream for checking calibrations and software before the full offline processing of the main event stream is launched.
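For orientation, the rate-reduction chain can be written out explicitly, using only the figures quoted in this abstract:

```latex
\[
30~\mathrm{MHz}
  \xrightarrow{\;\text{hardware trigger}\;} 1~\mathrm{MHz}
  \xrightarrow{\;\text{HLT farm}\;} 2~\mathrm{kHz}
  \xrightarrow{\;\text{express subset}\;} 5~\mathrm{Hz},
\qquad
\frac{30~\mathrm{MHz}}{2~\mathrm{kHz}} = 1.5 \times 10^{4}.
\]
```

That is, the software stage contributes a factor 500 (1 MHz to 2 kHz) of the overall 1.5x10^4 rejection, and the express stream carries 0.25% of the accepted events.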
IEEE-NPSS Real-Time Conference | 2010
Markus Frank; Jean-Christophe Garnier; C. Gaspar; R. Jacobsson; Beat Jost; Guoming Liu; N. Neufeld
The LHCb event-builder is implemented on a large Gigabit Ethernet network using a push protocol for single-stage readout at a 1 MHz event-injection rate. Destination assignment and dynamic load balancing are facilitated by LHCb's Timing and Fast Control system. The assembly of event fragments is done on each event-filter farm node instead of on dedicated builder units. The design of the event-builder will be described briefly, followed by a description of the implementation, the protocol used and the performance during first data taking. The emphasis will be on the experience we gained during the running of such a large event-building network. We will discuss the problems we encountered and how we overcame them.
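As a much-simplified, purely illustrative sketch (all names hypothetical, not the actual Timing and Fast Control implementation), centrally assigned destinations with busy-based load balancing might look like this:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified model of centrally assigned destinations: for every accepted
// event, one farm node is chosen and broadcast to all readout boards, which
// then push their fragments of that event to the same node for assembly.
class DestinationAssigner {
public:
    explicit DestinationAssigner(std::size_t nNodes) : busy_(nNodes, false) {}

    // Farm nodes declare themselves busy/free, providing the feedback
    // used for dynamic load balancing.
    void setBusy(std::size_t node, bool busy) { busy_[node] = busy; }

    // Round-robin over the nodes, skipping the busy ones.
    std::size_t assign(std::uint64_t /*eventId*/) {
        for (std::size_t i = 0; i < busy_.size(); ++i) {
            next_ = (next_ + 1) % busy_.size();
            if (!busy_[next_]) return next_;
        }
        return next_;  // all nodes busy: a real system would throttle the trigger
    }

private:
    std::vector<bool> busy_;
    std::size_t next_ = 0;
};
```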
IEEE-NPSS Real-Time Conference | 2010
F. Alessio; O. Callot; Luis Alberto Granado Cardoso; B. Franek; M. Frank; Jean-Christophe Garnier; C. Gaspar; E. van Herwijnen; R. Jacobsson; Beat Jost; N. Neufeld; R. Schwemmer
LHCb has designed and implemented an integrated Experiment Control System. The control system uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Detector Control System, etc. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provided to the developers, as well as the first experience with the use of the Run Control will be presented.
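A toy sketch of the hierarchical control idea, in which commands propagate down the tree and states are summarized upward; the real system is built with dedicated control-system tools, so this is only a conceptual illustration with invented names:

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Toy control unit: a node in the hierarchy forwards commands to its
// children and reports a summary state upward to its parent.
class ControlUnit {
public:
    explicit ControlUnit(std::string name) : name_(std::move(name)) {}
    virtual ~ControlUnit() = default;

    void addChild(std::shared_ptr<ControlUnit> c) { children_.push_back(std::move(c)); }

    // Commands such as "Configure" or "Start" propagate down the tree;
    // in this toy, the state is simply named after the last command.
    virtual void command(const std::string& cmd) {
        for (auto& c : children_) c->command(cmd);
        state_ = cmd;  // a leaf device would act on hardware here
    }

    // A parent's state summarizes its subtree: any disagreeing child is
    // surfaced as "MIXED", so problems are visible at the top level.
    std::string state() const {
        for (const auto& c : children_)
            if (c->state() != state_) return "MIXED";
        return state_;
    }

private:
    std::string name_;
    std::string state_ = "NOT_READY";
    std::vector<std::shared_ptr<ControlUnit>> children_;
};
```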
IEEE-NPSS Real-Time Conference | 2009
Olivier Callot; Markus Frank; Jean-Christophe Garnier; C. Gaspar; Guoming Liu; N. Neufeld; Andrew Cameron Smith; Daniel Sonnick; Alba Sambade Varela
The High Level Trigger (HLT) and Data Acquisition system select about 2 kHz of events out of the 40 MHz of beam crossings. The selected events are consolidated into files in onsite storage and then sent to permanent storage for subsequent analysis on the Grid. For local and full-chain tests, a method is needed to exercise the data-flow through the High Level Trigger in the absence of real data. In order to test the system under conditions as close as possible to data-taking, the solution is to inject data at the input of the HLT at a minimum rate of 2 kHz. This is done via a software implementation of the trigger system which sends data to the HLT. The application has to make the data it sends appear to come from real LHCb readout boards. Data can come from several input streams, which are selected according to probabilities or frequencies. The emulator therefore offers runs that are not only identical data-flows replayed from a sequence on tape, but also physics-like, pseudo-non-deterministic data-flows, including luminosity events and b-quark candidate events. Both simulation data and previously recorded real data can be replayed through the system in this manner. As the data rate is high (100 MB/s), care has been taken to optimize the emulator for throughput from the Storage Area Network. The emulator can be run in stand-alone mode, but more importantly it can emulate any partition of LHCb in parallel with the real hardware partition. In this mode it is fully integrated into the standard run control. The architecture, implementation and performance results of the emulator and full tests will be presented. This emulator is a crucial part of the ongoing data challenges in LHCb. Results from these Full System Integration Tests (FEST), which helped to verify and benchmark the entire LHCb data-flow, will also be presented.
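The selection of input streams "according to probabilities or frequencies" can be illustrated with a weighted random draw; the stream names and weights below are invented for the example:

```cpp
#include <cstdio>
#include <random>
#include <string>
#include <vector>

int main() {
    // Invented streams and relative frequencies, for illustration only.
    std::vector<std::string> streams = {"lumi", "b_candidate", "minbias"};
    std::discrete_distribution<int> pick({70.0, 5.0, 25.0});  // relative weights
    std::mt19937 rng(42);

    // Drawing each emulated event from a weighted mixture of input streams
    // yields a physics-like, non-repeating data-flow rather than a fixed replay.
    for (int event = 0; event < 10; ++event)
        std::printf("event %d from stream %s\n", event, streams[pick(rng)].c_str());
    return 0;
}
```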
Journal of Physics: Conference Series | 2010
M. Frank; Jean-Christophe Garnier; C. Gaspar; Guoming Liu; N. Neufeld; A. S. Varela
The High Level Trigger (HLT) and Data Acquisition (DAQ) system selects about 2 kHz of events out of the 40 MHz of beam crossings. The selected events are consolidated into files on onsite storage and then sent to permanent storage for subsequent analysis on the Grid. For local and full-chain tests, a method is needed to exercise the data-flow through the High Level Trigger when there are no actual data. In order to test the system under conditions as close as possible to data-taking, the solution is to inject data at the input of the HLT at a minimum rate of 2 kHz. This is done via a software implementation of the trigger system which sends data to the HLT. The application has to make the data it sends appear to come from real LHCb readout boards. Both simulation data and previously recorded real data can be replayed through the system in this manner. As the data rate is high (100 MB/s), care has been taken to optimise the emulator for throughput from the Storage Area Network (SAN). The emulator can be run in stand-alone mode or as a pseudo-sub-detector of LHCb, allowing the use of all the standard run-control tools. The architecture, implementation and performance of the emulator will be presented.
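The figures quoted above also fix the average event size that the SAN throughput optimisation has to cope with:

```latex
\[
\frac{100~\mathrm{MB/s}}{2~\mathrm{kHz}} = 50~\mathrm{kB} \text{ per event, on average.}
\]
```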
5th Int. Particle Accelerator Conf. (IPAC'14), Dresden, Germany, June 15-20, 2014 | 2014
Jean-Christophe Garnier; Damien Anderson; Maxime Audrain; Matei Dragu; Kajetan Fuchsberger; Arkadiusz Gorzawski; Mateusz Koza; Kamil Krol; Kamil Misiowiec; Konstantinos Stamos; Markus Zerlauth
The controls software for the Large Hadron Collider (LHC) at CERN, with more than 150 million lines of code, is among the largest known code bases in the world. Industry has been applying Agile software engineering techniques for more than two decades, and the advantages of these techniques for managing the code bases of large projects can no longer be ignored within the accelerator community. Furthermore, CERN is a particular environment, with high personnel turnover and manpower limitations, where applying Agile processes can improve both the management of the code base and its quality. This paper presents the successful application of the Agile software development process Scrum for machine protection systems at CERN, the quality standards and infrastructure introduced together with the Agile process, as well as the challenges encountered in adapting it to the CERN environment.
Journal of Physics: Conference Series | 2011
Enrico Bonaccorsi; Juan Manuel Caicedo Carvajal; Jean-Christophe Garnier; Guoming Liu; N. Neufeld; R. Schwemmer
The LHCb Data Acquisition system will be upgraded to address the requirements of a 40 MHz readout of the detector. It is not obvious that a simple scale-up of the current system will be able to meet these requirements. In this work we therefore re-evaluate various architectures and technologies using a uniform test-bed and software framework. InfiniBand is a rather uncommon technology in the domain of High Energy Physics data acquisition; it is currently used mainly in cluster-based architectures. It has, however, interesting features which justify our interest: large bandwidth with low latency, minimal overhead and a rich protocol suite. An InfiniBand test-bed has been set up, with a software interface between the core software of the event-builder and the software implementing the communication protocol. This allows us to run the same event-builder over different technologies for comparison. We will present the test-bed architectures and the performance of the different entities of the system, sources and destinations, according to their implementation. These results will be compared with results from a 10 Gigabit Ethernet test-bed.
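A minimal sketch of such a transport abstraction, assuming a hypothetical send/receive API (not the actual LHCb interface); the back-end classes are stubs standing in for InfiniBand-verbs and socket code:

```cpp
#include <cstddef>

// Transport-neutral interface: the event-builder core sees only send/recv,
// so the same core can be benchmarked over different network fabrics.
class Transport {
public:
    virtual ~Transport() = default;
    virtual bool send(const void* fragment, std::size_t len, int destination) = 0;
    virtual std::size_t recv(void* buffer, std::size_t maxLen) = 0;
};

// Each technology supplies its own implementation; only construction differs.
class InfiniBandTransport : public Transport {
public:
    bool send(const void*, std::size_t, int) override { return true; }  // stub
    std::size_t recv(void*, std::size_t) override { return 0; }         // stub
};

class TenGigEthernetTransport : public Transport {
public:
    bool send(const void*, std::size_t, int) override { return true; }  // stub
    std::size_t recv(void*, std::size_t) override { return 0; }         // stub
};
```

The event-builder is then written once against Transport and compared across technologies simply by swapping the concrete object it is given.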
IEEE-NPSS Real-Time Conference | 2010
Jean-Christophe Garnier; N. Neufeld; S. S. Cherukuwada
LHCb aims to use its O(20000) CPU cores in the High Level Trigger (HLT) and its 120 TB Online storage system for data reprocessing during LHC shutdown periods. These periods can last a few days for technical maintenance or only a few hours during beam inter-fill gaps. The reprocessing jobs run on files which are staged in from tape storage to the local storage buffer, and the result is again one or more files. Efficient file writing and reading is essential for the performance of the system. Rather than using a traditional shared file system such as NFS or CIFS, we have implemented a custom, lightweight, non-POSIX network file system for the handling of these files. Streaming data access through this file system achieves high performance while keeping resource consumption low, and adds features not found in NFS, such as high availability and transparent fail-over of the read and write services. The writing part of this streaming service is in successful use for the online, real-time writing of data during normal data-acquisition operation. The network file system relies on a commercial file system which manages the data on persistent disk storage. The implementation is presented together with performance figures.
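As a purely illustrative sketch of the streaming, non-POSIX idea: chunks are length-prefixed and appended strictly sequentially, so the server needs no seek or locking semantics. The types and framing below are invented, not the actual wire format:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Invented framing for a streaming write service: each chunk carries a
// small header, and chunks for a given file arrive strictly in order.
struct Frame {
    std::uint64_t fileId;   // which output file this chunk belongs to
    std::uint32_t length;   // payload bytes that follow the header
};

std::vector<std::uint8_t> frameChunk(std::uint64_t fileId,
                                     const void* data, std::uint32_t len) {
    Frame hdr{fileId, len};
    std::vector<std::uint8_t> out(sizeof(hdr) + len);
    std::memcpy(out.data(), &hdr, sizeof(hdr));
    std::memcpy(out.data() + sizeof(hdr), data, len);
    return out;  // ready to stream to whichever redundant server is active
}
```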
15th Int. Conf. on Accelerator and Large Experimental Physics Control Systems (ICALEPCS'15), Melbourne, Australia, 17-23 October 2015 | 2015
Jean-Christophe Garnier; Cesar Aguilera-Padilla; Serhiy Boychenko; Matei Dragu; Marc-Antoine Galilée; Mateusz Koza; Kamil Krol; T. Martins Ribeiro; R. Orlandi; Matthias Poeschl; Markus Zerlauth