Publication


Featured research published by Jean-Marc Andre.


Nuclear Science Symposium and Medical Imaging Conference | 2015

The CMS Timing and Control Distribution System

Jeroen Hegeman; Jean-Marc Andre; Ulf Behrens; James G Branson; Olivier Chaze; Sergio Cittolin; Georgiana-Lavinia Darlea; Christian Deldicque; Z. Demiragli; M. Dobson; S. Erhan; J. Fulcher; Dominique Gigi; F. Glege; G. Gomez-Ceballos; Magnus Hansen; A. Holzner; Raul Jimenez-Estupiñán; L. Masetti; F. Meijers; E. Meschi; Remigius K. Mommsen; S. Morovic; V. O'Dell; Luciano Orsini; Christoph Paus; M. Pieri; Attila Racz; H. Sakulin; C. Schwick

The Compact Muon Solenoid (CMS) experiment operating at the CERN (European Organization for Nuclear Research) Large Hadron Collider (LHC) is in the process of upgrading several of its detector systems. Adding more individual detector components brings the need to test and commission those components separately from existing ones so as not to compromise physics data-taking. The CMS Trigger, Timing and Control (TTC) system had reached its limits in terms of the number of separate elements (partitions) that could be supported. A new Timing and Control Distribution System (TCDS) has been designed, built and commissioned in order to overcome this limit. It also brings additional functionality to facilitate parallel commissioning of new detector elements. The new TCDS and its components will be described and results from the first operational experience with the TCDS in CMS will be shown.
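
As a rough illustration of the partitioning idea described above, the following Python sketch models detector partitions that can either follow the central run or be detached for local commissioning; all class and method names are hypothetical and the partition limit is a placeholder, not the actual TTC or TCDS figure.

    # Minimal sketch of partitioned timing/control distribution (hypothetical names).
    from enum import Enum

    class Mode(Enum):
        CENTRAL = "central"   # partition follows the global run
        LOCAL = "local"       # partition is detached for standalone commissioning

    class TimingDistributor:
        def __init__(self, max_partitions):
            self.max_partitions = max_partitions
            self.partitions = {}  # name -> Mode

        def add_partition(self, name):
            if len(self.partitions) >= self.max_partitions:
                raise RuntimeError("partition limit reached")
            self.partitions[name] = Mode.CENTRAL

        def detach_for_commissioning(self, name):
            # A detached partition can be exercised without stopping central data taking.
            self.partitions[name] = Mode.LOCAL

        def broadcast(self, command):
            # Central commands only reach partitions that follow the global run.
            return [n for n, m in self.partitions.items() if m is Mode.CENTRAL]

    # Example: commission a new component while the rest keeps taking data.
    tcds = TimingDistributor(max_partitions=64)   # placeholder limit
    for p in ["tracker", "ecal", "hcal", "new_component"]:
        tcds.add_partition(p)
    tcds.detach_for_commissioning("new_component")
    print(tcds.broadcast("resync"))  # -> ['tracker', 'ecal', 'hcal']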


Proceedings of International Symposium on Grids and Clouds (ISGC) 2016 — PoS(ISGC 2016) | 2017

Opportunistic usage of the CMS online cluster using a cloud overlay

Olivier Chaze; Jean-Marc Andre; Anastasios Andronidis; Ulf Behrens; James G Branson; Philipp Maximilian Brummer; Cristian Contescu; Sergio Cittolin; Benjamin Gordon Craigs; Georgiana-Lavinia Darlea; Christian Deldicque; Z. Demiragli; M. Dobson; Nicolas Doualot; S. Erhan; J. Fulcher; Dominique Gigi; F. Glege; Guillelmo Gomez-Ceballos; Jeroen Hegeman; A. Holzner; Raul Jimenez-Estupiñán; L. Masetti; F. Meijers; E. Meschi; Remigius K. Mommsen; S. Morovic; V. O'Dell; Luciano Orsini; Christoph Paus

After two years of maintenance and upgrade, the Large Hadron Collider (LHC), the largest and most powerful particle accelerator in the world, has started its second three-year run. Around 1500 computers make up the CMS (Compact Muon Solenoid) Online cluster. This cluster is used for Data Acquisition of the CMS experiment at CERN, selecting and sending to storage around 20 TBytes of data per day that are then analysed by the Worldwide LHC Computing Grid (WLCG) infrastructure that links hundreds of data centres worldwide. 3000 CMS physicists can access and process data, and are always seeking more computing power and data. The backbone of the CMS Online cluster is composed of 16000 cores which provide as much computing power as all CMS WLCG Tier1 sites (352K HEP-SPEC-06 score in the CMS cluster versus 300K across CMS Tier1 sites). The computing power available in the CMS cluster can significantly speed up the processing of data, so an effort has been made to allocate the resources of the CMS Online cluster to the grid when it isn’t used to its full capacity for data acquisition. This occurs during the maintenance periods when the LHC is non-operational, which corresponded to 117 days in 2015. During 2016, the aim is to increase the availability of the CMS Online cluster for data processing by making the cluster accessible during the time between two physics collisions while the LHC and beams are being prepared. This is usually the case for a few hours every day, which would vastly increase the computing power available for data processing. Work has already been undertaken to provide this functionality, as an OpenStack cloud layer has been deployed as a minimal overlay that leaves the primary role of the cluster untouched. This overlay also abstracts the different hardware and networks that the cluster is composed of. The operation of the cloud (starting and stopping the virtual machines) is another challenge that has been overcome as the cluster has only a few hours spare during the aforementioned beam preparation. By improving the virtual image deployment and integrating the OpenStack services with the core services of the Data Acquisition on the CMS Online cluster it is now possible to start a thousand virtual machines within 10 minutes and to turn them off within seconds. This document will explain the architectural choices that were made to reach a fully redundant and scalable cloud, with a minimal impact on the running cluster configuration while giving a maximal segregation between the services. It will also present how to cold start 1000 virtual machines 25 times faster, using tools commonly utilised in all data centres.
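
To make the cold-start figure concrete, the sketch below fans a batch of boot requests out over a worker pool, the same pattern that lets many virtual machines come up in parallel rather than sequentially; the boot call is simulated and every name is hypothetical, so this is not the actual OpenStack integration used on the cluster.

    # Sketch: parallel cold start of many VMs (simulated; hypothetical names).
    import time
    from concurrent.futures import ThreadPoolExecutor

    def boot_vm(vm_id, boot_seconds=0.01):
        # Stand-in for an image deployment + boot request to the cloud layer.
        time.sleep(boot_seconds)
        return vm_id

    def cold_start(n_vms, parallelism):
        start = time.time()
        with ThreadPoolExecutor(max_workers=parallelism) as pool:
            started = list(pool.map(boot_vm, range(n_vms)))
        return len(started), time.time() - start

    # With enough parallelism, the total start time approaches a few boot times
    # instead of n_vms sequential boots.
    n, elapsed = cold_start(n_vms=1000, parallelism=100)
    print(f"started {n} VMs in {elapsed:.2f} s (simulated)")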


Proceedings of Topical Workshop on Electronics for Particle Physics — PoS(TWEPP-17) | 2018

The FEROL40, a microTCA card interfacing custom point-to-point links and standard TCP/IP

Dominique Gigi; Petia Petrova; Attila Racz; Samuel Johan Orn; T. Reis; Christian Deldicque; Michail Vougioukas; Michael Lettrich; Cristian Contescu; E. Meschi; Ioannis Papakrivopoulos; M. Dobson; V. O'Dell; F. Glege; Maciej Gladki; Dainius Simelevicius; James G Branson; A. Holzner; H. Sakulin; Sergio Cittolin; Andrea Petrucci; F. Meijers; Remigius K. Mommsen; Mindaugas Janulis; M. Pieri; Ulf Behrens; Nicolas Doualot; C. Schwick; J. Fulcher; Jeroen Hegeman

In order to accommodate the new back-end electronics of upgraded CMS sub-detectors, a new FEROL40 card in the microTCA standard has been developed. The main function of the FEROL40 is to acquire event data over multiple point-to-point serial optical links, provide buffering, perform protocol conversion, and transmit multiple TCP/IP streams (4 × 10 Gbps) to the Ethernet network of the aggregation layer of the CMS DAQ (data acquisition) event builder. This contribution discusses the design of the FEROL40 and experience from its operation.
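
The sketch below illustrates the protocol-conversion step in spirit only: fragments arriving from input links are framed with a small header and distributed round-robin over four output streams, matching the 4 × 10 Gbps aggregate quoted above. The framing format and all names are invented for illustration; the real card implements this in FPGA firmware, not Python.

    # Sketch of fragment framing and round-robin distribution over 4 output streams
    # (illustrative only; the FEROL40 does this in FPGA firmware).
    import struct
    from itertools import cycle

    N_STREAMS = 4                      # 4 x 10 Gbps TCP/IP streams
    streams = [[] for _ in range(N_STREAMS)]
    rr = cycle(range(N_STREAMS))

    def frame(link_id, event_id, payload):
        # Hypothetical header: link id, event id, payload length.
        header = struct.pack("!HII", link_id, event_id, len(payload))
        return header + payload

    # Fragments from point-to-point input links are converted and spread out.
    for event_id in range(8):
        for link_id in range(2):
            fragment = frame(link_id, event_id, b"\x00" * 1024)
            streams[next(rr)].append(fragment)

    print([sum(len(f) for f in s) for s in streams])  # bytes queued per stream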


Proceedings of Topical Workshop on Electronics for Particle Physics — PoS(TWEPP-17) | 2018

CMS DAQ current and future hardware upgrades up to post Long Shutdown 3 (LS3) times

Attila Racz; Petia Petrova; Dominique Gigi; Samuel Johan Orn; T. Reis; Christian Deldicque; Michail Vougioukas; Michael Lettrich; Cristian Contescu; E. Meschi; Ioannis Papakrivopoulos; M. Dobson; V. O'Dell; F. Glege; Maciej Gladki; Dainius Simelevicius; James G Branson; A. Holzner; H. Sakulin; Sergio Cittolin; Andrea Petrucci; F. Meijers; Remigius K. Mommsen; Mindaugas Janulis; M. Pieri; Ulf Behrens; Nicolas Doualot; C. Schwick; J. Fulcher; Jeroen Hegeman

Following the first LHC collisions seen and recorded by CMS in 2009, the DAQ hardware went through a major upgrade during LS1 (2013-2014), and new detectors were connected during the 2015-2016 and 2016-2017 winter shutdowns. Now, LS2 (2019-2020) and LS3 (2024-mid 2026) are actively being prepared. This paper shows how the CMS DAQ hardware has evolved from the beginning and will continue to evolve in order to meet the future challenges posed by the High Luminosity LHC (HL-LHC) and the evolution of the CMS detector. In particular, the focus is on post-LS3 DAQ architectures.


Journal of Physics: Conference Series | 2018

DAQExpert - An expert system to increase CMS data-taking efficiency

Jean-Marc Andre; Petia Petrova; D Gigi; Attila Racz; Samuel Johan Orn; A. Holzner; T. Reis; Christian Deldicque; Michail Vougioukas; Michael Lettrich; Cristian Contescu; E. Meschi; Ioannis Papakrivopoulos; F Meijers; M. Dobson; V. O'Dell; F. Glege; Dainius Simelevicius; Georgiana Lavinia Darlea; Christoph Paus; Z. Demiragli; H. Sakulin; D. Rabady; Jeroen Hegeman; Andrea Petrucci; Remigius K. Mommsen; Mindaugas Janulis; M. Pieri; Ulf Behrens; Nicolas Doualot

The efficiency of the Data Acquisition (DAQ) of the Compact Muon Solenoid (CMS) experiment for LHC Run 2 is constantly being improved. A significant factor affecting the data taking efficiency is the experience of the DAQ operator. One of the main responsibilities of the DAQ operator is to carry out the proper recovery procedure in case of failure of datataking. At the start of Run 2, understanding the problem and finding the right remedy could take a considerable amount of time (up to many minutes). Operators heavily relied on the support of on-call experts, also outside working hours. Wrong decisions due to time pressure sometimes lead to an additional overhead in recovery time. To increase the efficiency of CMS data-taking we developed a new expert system, the DAQExpert, which provides shifters with optimal recovery suggestions instantly when a failure occurs. DAQExpert is a web application analyzing frequently updating monitoring data from all DAQ components and identifying problems based on expert knowledge expressed in small, independent logic-modules written in Java. Its results are presented in real-time in the control room via a web-based GUI and a sound-system in a form of short description of the current failure, and steps to recover.
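
The logic-module pattern lends itself to a compact sketch: each module inspects a monitoring snapshot and, if its condition holds, contributes a recovery suggestion. The real DAQExpert modules are written in Java and run on live monitoring data; the Python below only mirrors the shape of that design, with invented module names and thresholds.

    # Python analogue of small, independent logic modules (the real ones are Java).
    checks = []

    def logic_module(name):
        def register(fn):
            checks.append((name, fn))
            return fn
        return register

    @logic_module("dead-time too high")
    def high_deadtime(snapshot):
        if snapshot.get("deadtime_percent", 0) > 5:
            return "Identify the sub-system causing back-pressure and issue a resync."

    @logic_module("no rate while beams are stable")
    def no_rate(snapshot):
        if snapshot.get("l1_rate_hz", 0) == 0 and snapshot.get("stable_beams"):
            return "Check the trigger throttling state and restart the run if needed."

    def analyse(snapshot):
        # Evaluate every module on the latest monitoring data.
        return [(name, msg) for name, fn in checks if (msg := fn(snapshot))]

    print(analyse({"deadtime_percent": 8.2, "l1_rate_hz": 0, "stable_beams": True}))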


Journal of Physics: Conference Series | 2017

The CMS Data Acquisition - Architectures for the Phase-2 Upgrade

Jean-Marc Andre; Petia Petrova; Dominique Gigi; Attila Racz; A. Holzner; T. Reis; Christian Deldicque; Cristian Contescu; E. Meschi; Philipp Maximilian Brummer; F Meijers; M. Dobson; Raul Jimenez Estupinan; F. Glege; Dainius Simelevicius; James G Branson; Christoph Paus; Z. Demiragli; H. Sakulin; Jeroen Hegeman; Jonathan F Fulcher; V. O'Dell; Remigius K. Mommsen; Mindaugas Janulis; M. Pieri; Ulf Behrens; Nicolas Doualot; C. Schwick; Maciej Gladki; Luciano Orsini

The upgraded High Luminosity LHC, after the third Long Shutdown (LS3), will provide an instantaneous luminosity of 7.5 × 10³⁴ cm⁻² s⁻¹ (levelled), at the price of extreme pileup of up to 200 interactions per crossing. In LS3, the CMS detector will also undergo a major upgrade to prepare for phase-2 of the LHC physics program, starting around 2025. The upgraded detector will be read out at an unprecedented data rate of up to 50 Tb/s and an event rate of 750 kHz. Complete events will be analysed by software algorithms running on standard processing nodes, and selected events will be stored permanently at a rate of up to 10 kHz for offline processing and analysis. In this paper we discuss the baseline design of the DAQ and HLT systems for phase-2, taking into account the projected evolution of high-speed network fabrics for event building and distribution, and the anticipated performance of general-purpose CPUs. Implications on hardware and infrastructure requirements for the DAQ "data center" are analysed. Emerging technologies for data reduction are considered. Novel possible approaches to event building and online processing, inspired by trending developments in other areas of computing dealing with large masses of data, are also examined. We conclude by discussing the opportunities offered by reading out and processing parts of the detector, wherever the front-end electronics allows, at the machine clock rate (40 MHz). This idea presents interesting challenges and its physics potential should be studied. Presented at CHEP 2016, the 22nd International Conference on Computing in High Energy and Nuclear Physics.
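
The quoted rates fix the implied average event size and storage throughput; the short calculation below just spells out that arithmetic (the event size is derived here, it is not stated in the abstract).

    # Back-of-the-envelope numbers implied by the quoted phase-2 rates.
    readout_tb_per_s = 50          # detector readout, Tb/s
    l1_rate_khz = 750              # event rate into the DAQ, kHz
    hlt_output_khz = 10            # events stored permanently, kHz

    event_size_mb = readout_tb_per_s * 1e12 / (l1_rate_khz * 1e3) / 8 / 1e6
    storage_gb_per_s = event_size_mb * hlt_output_khz * 1e3 / 1e3

    print(f"average event size ~ {event_size_mb:.1f} MB")        # ~8.3 MB
    print(f"storage throughput ~ {storage_gb_per_s:.0f} GB/s")   # ~83 GB/s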


Journal of Physics: Conference Series | 2017

Performance of the CMS Event Builder

Jean-Marc Andre; Petia Petrova; Dominique Gigi; Attila Racz; A. Holzner; T. Reis; Christian Deldicque; Cristian Contescu; E. Meschi; Philipp Maximilian Brummer; F Meijers; M. Dobson; Raul Jimenez Estupinan; F. Glege; Dainius Simelevicius; James G Branson; Christoph Paus; Z. Demiragli; H. Sakulin; Jeroen Hegeman; V. O'Dell; Remigius K. Mommsen; Mindaugas Janulis; M. Pieri; Ulf Behrens; Nicolas Doualot

The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz. It transports event data at an aggregate throughput of 100 GB/s to the high-level trigger (HLT) farm. The CMS DAQ system has been completely rebuilt during the first long shutdown of the LHC in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbit/s Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbit/s Infiniband FDR Clos network has been chosen for the event builder. We report on the performance of the event-builder system and the steps taken to exploit the full potential of the network technologies. Presented at CHEP 2016, the 22nd International Conference on Computing in High Energy and Nuclear Physics.
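
A similar back-of-the-envelope check applies to the Run-2 event builder: 100 GB/s at 100 kHz implies an average event size of about 1 MB, and the 56 Gbit/s links set a naive lower bound on the number of event-builder links; the sketch below works through those numbers (the link efficiency is an assumed placeholder, not a measured figure).

    # Implied event size and a naive lower bound on event-builder links.
    throughput_gb_per_s = 100      # aggregate event-builder throughput, GB/s
    event_rate_khz = 100           # level-1 accept rate, kHz
    link_gbit_per_s = 56           # Infiniband FDR link speed, Gbit/s
    link_efficiency = 0.9          # assumed usable fraction of the raw link speed

    event_size_mb = throughput_gb_per_s * 1e3 / (event_rate_khz * 1e3)
    usable_gb_per_s_per_link = link_gbit_per_s / 8 * link_efficiency
    min_links = throughput_gb_per_s / usable_gb_per_s_per_link

    print(f"average event size ~ {event_size_mb:.1f} MB")   # ~1.0 MB
    print(f"at least ~{min_links:.0f} links needed")        # ~16 links (naive bound)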


Journal of Physics: Conference Series | 2017

New operator assistance features in the CMS Run Control System

Jean-Marc Andre; Petia Petrova; Dominique Gigi; Attila Racz; A. Holzner; T. Reis; Christian Deldicque; Cristian Contescu; E. Meschi; Philipp Maximilian Brummer; F Meijers; M. Dobson; Raul Jimenez Estupinan; F. Glege; Dainius Simelevicius; James G Branson; Christoph Paus; Z. Demiragli; H. Sakulin; Jeroen Hegeman; Jonathan F Fulcher; V. O'Dell; Remigius K. Mommsen; Mindaugas Janulis; M. Pieri; Ulf Behrens; Nicolas Doualot; C. Schwick; Maciej Gladki; Luciano Orsini

During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.
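
The "single click from any state" behaviour can be pictured as a small state machine that walks each sub-system toward RUNNING and recovers through an error path when a step fails; the sketch below is a toy model with invented state names, not the CMS Run Control code.

    # Toy model of driving sub-systems to RUNNING from any state, with recovery.
    NEXT = {"halted": "configured", "configured": "running", "error": "halted"}

    def drive_to_running(subsystems, max_steps=10):
        for name, state in list(subsystems.items()):
            steps = 0
            while state != "running" and steps < max_steps:
                state = NEXT.get(state, "error")   # unknown states recover via 'error' -> 'halted'
                steps += 1
            subsystems[name] = state
        return subsystems

    print(drive_to_running({"tracker": "halted", "ecal": "error", "daq": "configured"}))
    # -> every sub-system ends up in 'running'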


IEEE NPSS Real Time Conference | 2016

Performance of the new DAQ system of the CMS experiment for run-2

Jean-Marc Andre; Anastasios Andronidis; Ulf Behrens; James G Branson; Philipp Maximilian Brummer; Olivier Chaze; Cristian Contescu; Benjamin Gordon Craigs; Sergio Cittolin; Georgiana-Lavinia Darlea; Christian Deldicque; Z. Demiragli; M. Dobson; S. Erhan; J. Fulcher; Dominique Gigi; F. Glege; G. Gomez-Ceballos; Jeroen Hegeman; A. Holzner; Raúl Jiménez-Estupiñán; L. Masetti; F. Meijers; E. Meschi; Remigius K. Mommsen; S. Morovic; V. O'Dell; Luciano Orsini; Christoph Paus; M. Pieri

The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of more than 100 GB/s to the High-Level Trigger (HLT) farm. The HLT farm selects and classifies interesting events for storage and offline analysis at an output rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013-2014. The motivation for this upgrade was twofold. Firstly, the compute nodes, networking and storage infrastructure were reaching the end of their lifetimes. Secondly, in order to maintain physics performance with higher LHC luminosities and increasing event pileup, a number of sub-detectors are being upgraded, increasing the number of readout channels as well as the required throughput, and replacing the off-detector readout electronics with a MicroTCA-based DAQ interface. The new DAQ architecture takes advantage of the latest developments in the computing industry. For data concentration 10/40 Gbit/s Ethernet technologies are used, and a 56 Gbit/s Infiniband FDR CLOS network (total throughput ≈ 4 Tbit/s) has been chosen for the event builder. The upgraded DAQ-HLT interface is entirely file-based, essentially decoupling the DAQ and HLT systems. The fully-built events are transported to the HLT over 10/40 Gbit/s Ethernet via a network file system. The collection of events accepted by the HLT and the corresponding metadata are buffered on a global file system before being transferred off-site. The monitoring of the HLT farm and the data-taking performance is based on the Elasticsearch analytics tool. This paper presents the requirements, implementation, and performance of the system. Experience is reported on the first year of operation with LHC proton-proton runs as well as with the heavy ion lead-lead runs in 2015.
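
The file-based decoupling described above can be sketched as a writer that puts a fully built event file on shared storage and only then publishes a small metadata file, which is the cue for a reader to pick the data up. The file names and metadata fields here are hypothetical, chosen only to show the handshake, not the actual CMS file protocol.

    # Sketch of a file-based handoff between a builder (writer) and HLT (reader).
    # Hypothetical file layout; the metadata file is written last and acts as the cue.
    import json, os, tempfile

    spool = tempfile.mkdtemp()

    def write_event_file(run, lumi, index, payload):
        data_path = os.path.join(spool, f"run{run}_ls{lumi}_index{index}.raw")
        with open(data_path, "wb") as f:
            f.write(payload)
        meta = {"data_file": data_path, "events": 1, "size": len(payload)}
        with open(data_path.replace(".raw", ".jsn"), "w") as f:
            json.dump(meta, f)          # published only after the data is complete

    def pending_work():
        # The reader only trusts files that already have their metadata published.
        return [f for f in os.listdir(spool) if f.endswith(".jsn")]

    write_event_file(run=1, lumi=1, index=0, payload=b"\x00" * 2048)
    print(pending_work())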


15th Int. Conf. on Accelerator and Large Experimental Physics Control Systems (ICALEPCS'15), Melbourne, Australia, 17-23 October 2015 | 2015

Detector Controls Meets JEE on the Web

F. Glege; Jean-Marc Andre; Anastasios Andronidis; Ulf Behrens; James G Branson; Olivier Chaze; Sergio Cittolin; Georgiana-Lavinia Darlea; Christian Deldicque; M. Dobson; Aymeric Dupont; S. Erhan; Dominique Gigi; Guillelmo Gomez-Ceballos; Jeroen Hegeman; Oliver Holme; A. Holzner; Mindaugas Janulis; Raul Jimenez Estupinan; L. Masetti; F. Meijers; E. Meschi; Remigius K. Mommsen; S. Morovic; Carlos Nunez-Barranco-Fernandez; V. O'Dell; Luciano Orsini; Christoph Paus; Andrea Petrucci; M. Pieri

Remote monitoring and control has been an important aspect of physics detector controls since it became available. Due to the complexity of the systems, the 24/7 running requirements and limited human resources, remote access to perform interventions is essential. The amount of data to visualize, the required visualization types and cybersecurity standards demand a professional, complete solution. Using the example of the integration of the CMS detector controls system into our ORACLE WebCenter infrastructure, the mechanisms and tools available for integration with controls systems shall be discussed. Authentication has been delegated to WebCenter, and authorization is shared between the web server and the control system. Session handling exists in either system and has to be matched. Concurrent access by multiple users has to be handled. The underlying JEE infrastructure is specialized in visualization and information sharing. On the other hand, the structure of a JEE system resembles a distributed controls system. Therefore an outlook shall be given on tasks which could be covered by the web servers rather than the controls system.
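
The split described above, authentication in the web tier and authorization shared with the control system, can be sketched as a thin gateway that trusts the web server for identity but checks a control-system access list before forwarding a command; all names, roles and actions below are invented for illustration.

    # Sketch: web tier authenticates, control-system ACL authorizes (hypothetical names).
    CONTROL_SYSTEM_ACL = {
        "dcs_operator": {"read", "ramp_hv"},
        "guest": {"read"},
    }

    def handle_request(web_session, action):
        # Identity comes from the web tier (e.g. a WebCenter-style session object).
        if not web_session.get("authenticated"):
            return "401 not authenticated"
        role = web_session.get("role", "guest")
        # Authorization is decided against the control system's own access list.
        if action not in CONTROL_SYSTEM_ACL.get(role, set()):
            return f"403 role '{role}' may not perform '{action}'"
        return f"200 forwarded '{action}' to the control system"

    print(handle_request({"authenticated": True, "role": "guest"}, "ramp_hv"))
    print(handle_request({"authenticated": True, "role": "dcs_operator"}, "ramp_hv"))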

Collaboration


Dive into Jean-Marc Andre's collaboration.

Top Co-Authors

A. Holzner
University of California

M. Pieri
University of California