
Publications


Featured research published by Aymeric Dupont.


Journal of Instrumentation | 2013

10 Gbps TCP/IP streams from the FPGA for the CMS DAQ eventbuilder network

G. Bauer; Tomasz Bawej; Ulf Behrens; J. G. Branson; Olivier Chaze; Sergio Cittolin; Jose Antonio Coarasa; G-L Darlea; Christian Deldicque; M. Dobson; Aymeric Dupont; S. Erhan; Dominique Gigi; F. Glege; G. Gomez-Ceballos; Robert Gomez-Reino; C. Hartl; Jeroen Hegeman; A. Holzner; L. Masetti; F. Meijers; E. Meschi; R. Mommsen; S. Morovic; Carlos Nunez-Barranco-Fernandez; V. O'Dell; Luciano Orsini; Wojciech Ozga; C. Paus; Andrea Petrucci

For the upgrade of the DAQ of the CMS experiment in 2013/2014, an interface between the custom detector Front End Drivers (FEDs) and the new DAQ eventbuilder network has to be designed. For lossless data collection from more than 600 FEDs, a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. We present the hardware challenges and the protocol modifications made to TCP in order to simplify its FPGA implementation, together with a set of performance measurements carried out with the current prototype.
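
To picture the kind of simplification a sender-only TCP allows, the sketch below models a unidirectional sender as a reduced state machine (connect, stream, close) with no receive-side data path. This is a minimal illustration of the idea, not the CMS firmware design; all state and event names are assumptions.

    # Minimal sketch of a unidirectional TCP sender state machine.
    # Hypothetical states/events for exposition; not the CMS firmware design.
    class UnidirectionalTcpSender:
        STATES = ("CLOSED", "SYN_SENT", "ESTABLISHED", "FIN_WAIT")

        def __init__(self):
            self.state = "CLOSED"

        def on_event(self, event):
            transitions = {
                ("CLOSED", "open"): "SYN_SENT",          # send SYN
                ("SYN_SENT", "syn_ack"): "ESTABLISHED",  # send ACK, start streaming
                ("ESTABLISHED", "close"): "FIN_WAIT",    # send FIN after last data
                ("FIN_WAIT", "fin_ack"): "CLOSED",
            }
            self.state = transitions.get((self.state, event), self.state)
            return self.state

Because data flows in one direction only, the receive-side states of full RFC 793 (e.g. LISTEN, CLOSE_WAIT) never arise in the sender, which is what makes an FPGA implementation tractable.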


Journal of Physics: Conference Series | 2011

The data-acquisition system of the CMS experiment at the LHC

G. Bauer; Barbara Beccati; U Behrens; K Biery; Olivier Bouffet; J. G. Branson; Sebastian Bukowiec; E. Cano; H Cheung; Marek Ciganek; Sergio Cittolin; Jose Antonio Coarasa; Christian Deldicque; Aymeric Dupont; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; D Hatton; A. Holzner; Yi Ling Hwong; L. Masetti; F Meijers; E. Meschi; R K Mommsen; R. Moser; V O'Dell; Luciano Orsini; C. Paus; Andrea Petrucci

The data-acquisition system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first level hardware trigger. Assembled events are made available to the high-level trigger which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources. An overview of the architecture and design of the hardware and software of the DAQ system is given. We discuss the performance and operational experience from the first months of LHC physics data taking.
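
As a back-of-the-envelope orientation, the quoted rate and throughput together imply an average assembled event size of about

\[
\langle s_{\mathrm{event}} \rangle \approx \frac{100\ \mathrm{GB/s}}{100\ \mathrm{kHz}} = 1\ \mathrm{MB}.
\]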


Journal of Physics: Conference Series | 2012

The CMS High Level Trigger System: Experience and Future Development

G. Bauer; U Behrens; M Bowen; J. G. Branson; Sebastian Bukowiec; Sergio Cittolin; Jose Antonio Coarasa; Christian Deldicque; M. Dobson; Aymeric Dupont; S. Erhan; A Flossdorf; Dominique Gigi; F. Glege; Robert Gomez-Reino; C. Hartl; Jeroen Hegeman; A. Holzner; Yi Ling Hwong; L. Masetti; F Meijers; E. Meschi; R K Mommsen; V O'Dell; Luciano Orsini; C. Paus; Andrea Petrucci; M. Pieri; G. Polese; Attila Racz

The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ) and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of the order of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the collider run 2010/2011 is reported. The current architecture of the CMS HLT and its integration with the CMS reconstruction framework and the CMS DAQ are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, is discussed.
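
These numbers set the scale of the per-event processing budget: with O(10000) cores filtering a 100 kHz input stream, the average CPU time available per event is roughly

\[
t_{\mathrm{event}} \approx \frac{10^{4}\ \mathrm{cores}}{10^{5}\ \mathrm{Hz}} = 100\ \mathrm{ms\ of\ core\ time\ per\ event},
\]

and an output of a few hundred Hz corresponds to a rejection factor of order several hundred.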


Journal of Physics: Conference Series | 2014

The new CMS DAQ system for LHC operation after 2014 (DAQ2)

Gerry Bauer; Tomasz Bawej; Ulf Behrens; James G Branson; Olivier Chaze; Sergio Cittolin; Jose Antonio Coarasa; Georgiana-Lavinia Darlea; Christian Deldicque; M. Dobson; Aymeric Dupont; S. Erhan; Dominique Gigi; F. Glege; G. Gomez-Ceballos; Robert Gomez-Reino; C. Hartl; Jeroen Hegeman; A. Holzner; L. Masetti; F. Meijers; E. Meschi; Remigius K. Mommsen; S. Morovic; Carlos Nunez-Barranco-Fernandez; V. O'Dell; Luciano Orsini; Wojciech Ozga; Christoph Paus; Andrea Petrucci

The Data Acquisition system of the Compact Muon Solenoid experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s. We present the design of the second-generation DAQ system, including studies of the event builder based on advanced networking technologies such as 10 and 40 Gbit/s Ethernet and 56 Gbit/s FDR InfiniBand, and the exploitation of multicore CPU architectures. By the time the LHC restarts after the 2013/14 shutdown, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime. In order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The second-generation DAQ system, foreseen for 2014, will need to accommodate the readout of both existing and new off-detector electronics and provide an increased throughput capacity. Advances in storage technology could make it feasible to write the output of the event builder to (RAM or SSD) disks and implement the HLT processing entirely file-based.
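
The file-based HLT idea mentioned at the end can be pictured as a producer/consumer handoff through a (RAM-disk) filesystem: the event builder writes completed event files, and HLT processes pick them up. The sketch below is purely illustrative; the spool directory, file-naming convention, and polling scheme are assumptions, not the DAQ2 implementation.

    import os
    import time

    SPOOL_DIR = "/ramdisk/events"   # hypothetical RAM-disk spool directory

    def process_event(data):
        # Placeholder for HLT filtering; a real system would run trigger paths here.
        pass

    def hlt_poll_loop():
        """Poll the spool directory and consume completed event files."""
        while True:
            for name in sorted(os.listdir(SPOOL_DIR)):
                if not name.endswith(".raw"):   # assume the writer renames files to .raw once complete
                    continue
                path = os.path.join(SPOOL_DIR, name)
                with open(path, "rb") as f:
                    process_event(f.read())     # hand the event data to HLT processing
                os.remove(path)                 # free RAM-disk space
            time.sleep(0.01)

A write-then-rename convention of this kind is a common way to make the handoff atomic, so a consumer never reads a half-written file.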


Journal of Physics: Conference Series | 2014

10 Gbps TCP/IP streams from the FPGA for high energy physics

Gerry Bauer; Tomasz Bawej; Ulf Behrens; James G Branson; Olivier Chaze; Sergio Cittolin; Jose Antonio Coarasa; Georgiana-Lavinia Darlea; Christian Deldicque; M. Dobson; Aymeric Dupont; S. Erhan; Dominique Gigi; F. Glege; G. Gomez-Ceballos; Robert Gomez-Reino; C. Hartl; Jeroen Hegeman; A. Holzner; L. Masetti; F. Meijers; E. Meschi; Remigius K. Mommsen; S. Morovic; Carlos Nunez-Barranco-Fernandez; V. O'Dell; Luciano Orsini; Wojciech Ozga; Christoph Paus; Andrea Petrucci

The DAQ system of the CMS experiment at CERN collects data from more than 600 custom detector Front-End Drivers (FEDs). During 2013 and 2014 the CMS DAQ system will undergo a major upgrade to address the obsolescence of current hardware and the requirements posed by the upgrade of the LHC accelerator and various detector components. For lossless data collection from the FEDs, a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. To limit the complexity of the TCP hardware implementation, the DAQ group developed a simplified, unidirectional, but RFC 793 compliant version of the TCP protocol. This allows a PC with the standard Linux TCP/IP stack to be used as a receiver. We present the challenges and the protocol modifications made to TCP in order to simplify its FPGA implementation. We also describe the interaction between the simplified TCP and the Linux TCP/IP stack, including performance measurements.
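
Because the FPGA side stays RFC 793 compliant, the receiving end can be an ordinary socket application on the standard Linux TCP/IP stack. A minimal throughput-counting receiver might look like the sketch below; the port number and buffer size are arbitrary assumptions.

    import socket
    import time

    def receive_stream(host="0.0.0.0", port=10000, bufsize=1 << 20):
        """Accept one TCP stream and report the achieved throughput."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _addr = srv.accept()
        total, t0 = 0, time.time()
        while True:
            chunk = conn.recv(bufsize)
            if not chunk:                  # the unidirectional sender closed the stream
                break
            total += len(chunk)
        dt = max(time.time() - t0, 1e-9)
        conn.close()
        srv.close()
        print("received %d bytes at %.2f Gbps" % (total, 8 * total / dt / 1e9))

Nothing FPGA-specific is needed on this side: as far as the kernel is concerned, the hardware sender is just another compliant TCP peer.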


Journal of Physics: Conference Series | 2012

Distributed error and alarm processing in the CMS data acquisition system

G. Bauer; U Behrens; M Bowen; J. G. Branson; Sebastian Bukowiec; Sergio Cittolin; Jose Antonio Coarasa; Christian Deldicque; M. Dobson; Aymeric Dupont; S. Erhan; A Flossdorf; Dominique Gigi; F. Glege; Robert Gomez-Reino; C. Hartl; Jeroen Hegeman; A. Holzner; Yi Ling Hwong; L. Masetti; F Meijers; E. Meschi; R K Mommsen; V O'Dell; Luciano Orsini; C. Paus; Andrea Petrucci; M. Pieri; G. Polese; Attila Racz

The error and alarm system for the data acquisition of the Compact Muon Solenoid (CMS) at CERN was successfully used for the physics runs at the Large Hadron Collider (LHC) during the first three years of operation. Error and alarm processing entails the notification, collection, storage and visualization of all exceptional conditions occurring in the highly distributed CMS online system, using a uniform scheme. Alerts and reports are shown online by web application facilities that map them to graphical models of the system as defined by the user. A persistency service keeps a history of all exceptions that have occurred, allowing subsequent retrieval of user-defined time windows of events for later playback or analysis. This paper describes the architecture and the technologies used and deals with operational aspects during the first years of LHC operation. In particular we focus on performance, stability, and integration with the CMS sub-detectors.
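
The persistency and playback functions described here amount to storing timestamped exception records and querying them by time window. A minimal sketch of that idea follows; the record fields are hypothetical, not the actual CMS schema.

    from dataclasses import dataclass

    @dataclass
    class Alarm:
        timestamp: float   # seconds since epoch
        source: str        # e.g. the originating online application
        severity: str      # e.g. "warning", "error", "fatal"
        message: str

    def alarms_in_window(history, t_start, t_end):
        """Return all stored alarms inside [t_start, t_end] for playback or analysis."""
        return [a for a in history if t_start <= a.timestamp <= t_end]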


Journal of Physics: Conference Series | 2012

Operational experience with the CMS Data Acquisition System

G. Bauer; U Behrens; M Bowen; J. G. Branson; Sebastian Bukowiec; Sergio Cittolin; Jose Antonio Coarasa; Christian Deldicque; M. Dobson; Aymeric Dupont; S. Erhan; A Flossdorf; Dominique Gigi; F. Glege; Robert Gomez-Reino; C. Hartl; Jeroen Hegeman; A. Holzner; Yi Ling Hwong; L. Masetti; F Meijers; E. Meschi; R K Mommsen; V O'Dell; Luciano Orsini; C. Paus; Andrea Petrucci; M. Pieri; G. Polese; Attila Racz

The data-acquisition (DAQ) system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first level hardware trigger. Assembled events are made available to the high-level trigger (HLT), which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources and 10^8 electronic channels. An overview of the architecture and design of the hardware and software of the DAQ system is given. We report on the performance and operational experience of the DAQ and its Run Control System in the first two years of collider runs of the LHC, in both proton-proton and Pb-Pb collisions. We present an analysis of the current performance, its limitations, and the most common failure modes and discuss the ongoing evolution of the HLT capability needed to match the luminosity ramp-up of the LHC.
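
Dividing the quoted figures evenly across sources gives a feel for the granularity of the event-building problem: assuming a uniform distribution for this rough estimate,

\[
\frac{100\ \mathrm{GB/s}}{500\ \mathrm{sources}} \approx 200\ \mathrm{MB/s\ per\ source},
\qquad
\frac{200\ \mathrm{MB/s}}{100\ \mathrm{kHz}} \approx 2\ \mathrm{kB\ per\ event\ fragment},
\]

so each event is built from hundreds of kB-scale fragments arriving concurrently.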


Journal of Physics: Conference Series | 2012

High availability through full redundancy of the CMS detector controls system

Gerry Bauer; Ulf Behrens; M Bowen; James G Branson; Sebastian Bukowiec; Sergio Cittolin; Jose Antonio Coarasa; Christian Deldicque; M. Dobson; Aymeric Dupont; S. Erhan; Alexander Flossdorf; Dominique Gigi; F. Glege; Robert Gomez-Reino; C. Hartl; Jeroen Hegeman; A. Holzner; Yi Ling Hwong; L. Masetti; F. Meijers; E. Meschi; Remigius K. Mommsen; V. O'Dell; Luciano Orsini; Christoph Paus; Andrea Petrucci; M. Pieri; G. Polese; Attila Racz

The CMS detector control system (DCS) is responsible for controlling and monitoring the detector status and for the operation of all CMS sub-detectors and infrastructure. This is required to ensure safe and efficient data taking so that high quality physics data can be recorded. The current system architecture is composed of more than 100 servers in order to provide the required processing resources. An optimization of the system software and hardware architecture is under development to ensure redundancy of all the controlled subsystems and to reduce any downtime due to hardware or software failures. The new optimized structure is based mainly on powerful and highly reliable blade servers and makes use of a fully redundant approach, guaranteeing high availability and reliability. The analysis of the requirements, the challenges, the improvements and the optimized system architecture, as well as its specific hardware and software solutions, are presented.
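
One common way to realize a fully redundant setup of the kind described is an active/standby pair with heartbeats, where the standby takes over when the active node goes silent. The sketch below illustrates only the general pattern; it is not the DCS implementation, and every name in it is an assumption.

    import time

    HEARTBEAT_TIMEOUT = 5.0   # seconds of silence from the active node before failover

    def take_over():
        # Hypothetical: here the standby would start supervising the subsystems.
        print("standby promoted to active")

    def standby_loop(seconds_since_last_heartbeat):
        """Run on the standby server; promote it if the active node goes silent.

        `seconds_since_last_heartbeat` is a callable supplied by a (hypothetical)
        heartbeat transport, which this sketch abstracts away entirely.
        """
        while True:
            if seconds_since_last_heartbeat() > HEARTBEAT_TIMEOUT:
                take_over()
                return
            time.sleep(1.0)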


Journal of Physics: Conference Series | 2011

An Analysis of the Control Hierarchy Modelling of the CMS Detector Control System

Yi Ling Hwong; Tim A. C. Willemse; Vincent J. J. Kusters; Gerry Bauer; Barbara Beccati; Ulf Behrens; K. Biery; Olivier Bouffet; James G Branson; Sebastian Bukowiec; E. Cano; Harry Cheung; Marek Ciganek; Sergio Cittolin; Jose Antonio Coarasa; Christian Deldicque; Aymeric Dupont; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; A. Holzner; Derek Hatton; L. Masetti; F. Meijers; E. Meschi; Remigius K. Mommsen; R. Moser; V. O'Dell; Luciano Orsini

The supervisory level of the Detector Control System (DCS) of the CMS experiment is implemented using Finite State Machines (FSMs), which model the behaviours and control the operations of all the sub-detectors and support services. The FSM tree of the whole CMS experiment consists of more than 30,000 nodes. An analysis of a system of such size is a complex task but is a crucial step towards the improvement of the overall performance of the FSM system. This paper presents the analysis of the CMS FSM system using the micro Common Representation Language 2 (mCRL2) methodology. Individual mCRL2 models are obtained for the FSM systems of the CMS sub-detectors using the ASF+SDF automated translation tool. Different mCRL2 operations are applied to the mCRL2 models. An mCRL2 simulation tool is used to examine the system more closely. Visualization of a system based on the exploration of its state space is enabled with an mCRL2 tool. Requirements such as command and state propagation are expressed using the modal mu-calculus and checked using a model checking algorithm. For checking local requirements such as freedom from endless loops, the Bounded Model Checking technique is applied. This paper discusses these analysis techniques and presents the results of their application to the CMS FSM system.
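
As an example of the style of requirement involved, "every command is inevitably followed by a propagated state update" can be written in the modal mu-calculus roughly as

\[
[\,\mathit{true}^{*}\cdot \mathit{command}\,]\;
\mu X.\,\bigl(\langle \mathit{true}\rangle \mathit{true}
\;\wedge\; [\,\overline{\mathit{update}}\,]X\bigr),
\]

which states that after any command, every execution path reaches an update action within finitely many steps. The action names command and update are placeholders here, not the labels of the actual CMS models.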


Journal of Physics: Conference Series | 2011

The LHC Compact Muon Solenoid experiment Detector Control System

G. Bauer; Barbara Beccati; U Behrens; K Biery; Olivier Bouffet; J. G. Branson; Sebastian Bukowiec; E. Cano; H Cheung; Marek Ciganek; Sergio Cittolin; Jose Antonio Coarasa; Christian Deldicque; Aymeric Dupont; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; D Hatton; A Holzner; Yi Ling Hwong; L. Masetti; F Meijers; E. Meschi; R K Mommsen; R. Moser; V O'Dell; Luciano Orsini; C. Paus; Andrea Petrucci

The Compact Muon Solenoid (CMS) experiment at CERN is a multi-purpose experiment designed to exploit the physics of proton-proton collisions at the Large Hadron Collider collision energy (14 TeV centre-of-mass) over the full range of expected luminosities (up to 10^34 cm^-2 s^-1). The CMS detector control system (DCS) ensures safe, correct and efficient operation of the detector so that high quality physics data can be recorded. The system is also required to operate the detector with a small crew of experts who can take care of the maintenance of its software and hardware infrastructure. The subsystems together comprise more than a million parameters that need to be supervised by the DCS. A cluster of roughly 100 servers is used to provide the required processing resources. A scalable approach has been chosen by factorizing the DCS system as much as possible. CMS DCS makes a clear division between its computing resources and functionality through a computing framework that allows functional components to be plugged in. DCS components are developed by the subsystem expert groups, while the computing infrastructure is developed centrally. To ensure the correct operation of the detector, the DCS organizes the communication between the accelerator and the experiment systems, making sure that the detector is in a safe state during hazardous situations and is fully operational when stable conditions are present. This paper describes the current status of the CMS DCS, focusing on operational aspects and the role of the DCS in this communication.
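
The division between a centrally developed framework and subsystem-provided components suggests a simple plugin registry: the framework supplies the runtime and life-cycle handling, and each subsystem group registers its own component into it. A schematic sketch of that pattern follows; all class and function names are hypothetical.

    class DcsComponent:
        """Interface a central framework might expect from subsystem plugins."""
        def start(self): ...
        def stop(self): ...

    REGISTRY = {}

    def register(name, component_cls):
        """Called by subsystem code to plug its component into the framework."""
        REGISTRY[name] = component_cls

    def run_all():
        # The framework, not the subsystem code, drives the component life cycle.
        for name, component_cls in REGISTRY.items():
            component = component_cls()
            component.start()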
