Andrea Carboni
CERN
Publications
Featured research published by Andrea Carboni.
IEEE-NPSS Real-Time Conference | 2007
Gerry Bauer; Vincent Boyer; James G Branson; Angela Brett; E. Cano; Andrea Carboni; Marek Ciganek; Sergio Cittolin; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; Michele Gulmini; Esteban Gutierrez Mlot; J. Gutleber; C. Jacobs; Jungin Kim; M. Klute; Elliot Lipeles; Juan Antonio Lopez Perez; Gaetano Maron; F. Meijers; E. Meschi; Roland Moser; S. Murray; Alexander Oh; Luciano Orsini; Christoph Paus; Andrea Petrucci; M. Pieri
The CMS Data Acquisition System is designed to build and filter events originating from 476 detector data sources at a maximum trigger rate of 100 kHz. Different architectures and switch technologies have been evaluated to accomplish this purpose. Events will be built in two stages: the first stage will be a set of event builders called FED Builders. These will be based on Myrinet technology and will pre-assemble groups of about 8 data sources. The second stage will be a set of event builders called Readout Builders. These will perform the building of full events. A single Readout Builder will build events from 72 sources of 16 kB fragments at a rate of 12.5 kHz. In this paper we present the design of a Readout Builder based on TCP/IP over Gigabit Ethernet and the optimization that was required to achieve the design throughput. This optimization includes the architecture of the Readout Builder, the setup of TCP/IP, and hardware selection.
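As a back-of-envelope check, the event size and sustained throughput per Readout Builder follow directly from the figures quoted in the abstract (an illustrative sketch; all names are ours, not from the paper):

```python
# Back-of-envelope check of the Readout Builder figures quoted above:
# 72 sources of 16 kB fragments at 12.5 kHz. All names are illustrative.

N_SOURCES = 72          # fragment sources per Readout Builder
FRAGMENT_SIZE = 16e3    # bytes per fragment (16 kB)
EVENT_RATE = 12.5e3     # events built per second (12.5 kHz)

event_size = N_SOURCES * FRAGMENT_SIZE   # ~1.15 MB per full event
throughput = event_size * EVENT_RATE     # sustained bytes per second

print(f"event size: {event_size / 1e6:.2f} MB")
print(f"throughput: {throughput / 1e9:.1f} GB/s")   # ~14.4 GB/s per builder
```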
Journal of Physics: Conference Series | 2008
Gerry Bauer; Vincent Boyer; J. Branson; Angela Brett; E. Cano; Andrea Carboni; Marek Ciganek; Sergio Cittolin; V. O'Dell; S. Erhan; Dominique Gigi; F. Glege; R. Gomez-Reino; Michele Gulmini; J. Gutleber; Jungin Kim; M. Klute; E. Lipeles; Juan Antonio Lopez Perez; G. Maron; F. Meijers; E. Meschi; R. Moser; Esteban Gutierrez Mlot; S. Murray; Alexander Oh; Luciano Orsini; C. Paus; A. Petrucci; M. Pieri
The CMS experiment at the LHC at CERN will start taking data in 2008. To configure, control, and monitor the experiment during data taking, the Run Control system was developed. This paper describes the architecture and the technology used to implement the Run Control system, as well as the deployment and commissioning strategy of this important component of the online software for the CMS experiment.
IEEE-NPSS Real-Time Conference | 2007
Gerry Bauer; Vincent Boyer; James G Branson; Angela Brett; E. Cano; Andrea Carboni; Marek Ciganek; Sergio Cittolin; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; Michele Gulmini; Esteban Gutierrez Mlot; J. Gutleber; C. Jacobs; Jungin Kim; M. Klute; Elliot Lipeles; Juan Antonio Lopez Perez; Gaetano Maron; F. Meijers; E. Meschi; Roland Moser; S. Murray; Alexander Oh; Luciano Orsini; Christoph Paus; Andrea Petrucci; M. Pieri
The Data Acquisition System of the Compact Muon Solenoid experiment at the Large Hadron Collider reads out event fragments of an average size of 2 kB from around 650 detector front-ends at a rate of up to 100 kHz. The first stage of event building is performed by the Super-Fragment Builder employing custom-built electronics and a Myrinet optical network. It reduces the number of fragments by one order of magnitude, thereby greatly decreasing the requirements for the subsequent event-assembly stage. Back-pressure from the downstream event processing or variations in the size and rate of events may give rise to buffer overflows in the subdetector front-end electronics, which would result in data corruption and would require a time-consuming re-sync procedure to recover. The Trigger-Throttling System protects against these buffer overflows. It provides fast feedback from any of the subdetector front-ends to the trigger so that the trigger can be throttled before buffers overflow. This paper reports on new performance measurements and on the recent successful integration of a scaled-down setup of the described system with the trigger and with front-ends of all major subdetectors. The ongoing commissioning of the full-scale system is discussed.
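The aggregate input bandwidth implied by these figures is easy to estimate (a back-of-envelope sketch using the average values quoted above; the factor-of-ten fragment reduction is taken literally for illustration):

```python
# Rough estimate of the aggregate DAQ input bandwidth from the averages
# quoted above (illustrative; assumes the quoted mean values throughout).

N_FRONTENDS = 650     # detector front-end data sources
AVG_FRAGMENT = 2e3    # average fragment size in bytes (2 kB)
L1_RATE = 100e3       # maximum Level-1 accept rate (100 kHz)

input_bw = N_FRONTENDS * AVG_FRAGMENT * L1_RATE   # ~130 GB/s into the DAQ
super_fragments = N_FRONTENDS // 10  # "one order of magnitude" fewer fragments

print(f"aggregate input: {input_bw / 1e9:.0f} GB/s")
print(f"~{super_fragments} super-fragments enter the event-assembly stage")
```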
Journal of Physics: Conference Series | 2008
Gerry Bauer; U. Behrens; Vincent Boyer; J. G. Branson; Angela Brett; E. Cano; Andrea Carboni; Marek Ciganek; Sergio Cittolin; V. O'Dell; S. Erhan; Dominique Gigi; F. Glege; R. Gomez-Reino; Michele Gulmini; J. Gutleber; J. Hollar; David J. Lange; Jungin Kim; M. Klute; Elliot Lipeles; Juan Antonio Lopez Perez; G. Maron; F. Meijers; E. Meschi; R. Moser; Esteban Gutierrez Mlot; S. Murray; Alexander Oh; Luciano Orsini
The CMS experiment at the CERN Large Hadron Collider is currently being commissioned and is scheduled to collect the first pp collision data in 2008. CMS features a two-level trigger system. The Level-1 trigger, based on custom hardware, is designed to reduce the collision rate of 40 MHz to approximately 100 kHz. Data for events accepted by the Level-1 trigger are read out and assembled by an Event Builder. The High Level Trigger (HLT) employs a set of sophisticated software algorithms to analyze the complete event information and further reduce the accepted event rate for permanent storage and analysis. This paper describes the design and implementation of the HLT Configuration Management system. First experiences with commissioning of the HLT system are also reported.
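The rejection factors implied by these rates can be computed directly (illustrative arithmetic; the ~100 Hz HLT output figure is taken from the related abstract below, not stated here):

```python
# Rejection factors implied by the trigger rates quoted above. The ~100 Hz
# HLT output rate is taken from the related abstract below (an assumption
# here, not stated in this abstract).

COLLISION_RATE = 40e6    # LHC bunch-crossing rate (40 MHz)
L1_ACCEPT_RATE = 100e3   # Level-1 output (~100 kHz)
HLT_OUTPUT_RATE = 100.0  # HLT output to storage (~100 Hz, assumed)

print(f"Level-1 rejection: {COLLISION_RATE / L1_ACCEPT_RATE:.0f}x")   # 400x
print(f"HLT rejection:     {L1_ACCEPT_RATE / HLT_OUTPUT_RATE:.0f}x")  # 1000x
```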
IEEE Transactions on Nuclear Science | 2008
Anzar Afaq; W. Badgett; Gerry Bauer; K. Biery; Vincent Boyer; James G Branson; Angela Brett; E. Cano; Andrea Carboni; Harry Cheung; Marek Ciganek; Sergio Cittolin; William Dagenhart; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; Michele Gulmini; J. Gutleber; C. Jacobs; Jin Cheol Kim; M. Klute; Jim Kowalkowski; Elliot Lipeles; Juan Antonio Lopez Perez; Gaetano Maron; F. Meijers; E. Meschi; Roland Moser; Esteban Gutierrez Mlot
The CMS data acquisition (DAQ) system relies on a purely software-driven high level trigger (HLT) to reduce the full Level-1 accept rate of 100 kHz to approximately 100 Hz for archiving and later offline analysis. The HLT operates on the full information of events assembled by an event builder collecting detector data from the CMS front-end systems. The HLT software consists of a sequence of reconstruction and filtering modules executed on a farm of O(1000) CPUs built from commodity hardware. This paper presents the architecture of the CMS HLT, which integrates the CMS reconstruction framework in the online environment. The mechanisms to configure, control, and monitor the filter farm and the procedures to validate the filtering code within the DAQ environment are described.
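These figures imply an average per-node time budget that the filter code must meet (illustrative arithmetic only):

```python
# Average per-CPU budget implied by the HLT farm figures quoted above
# (illustrative arithmetic only; ignores event-building overheads).

L1_ACCEPT_RATE = 100e3   # events/s entering the HLT (100 kHz)
N_CPUS = 1000            # order of magnitude of the filter farm

rate_per_cpu = L1_ACCEPT_RATE / N_CPUS   # ~100 events/s on each CPU
time_budget = 1.0 / rate_per_cpu         # ~10 ms of filtering per event

print(f"each CPU filters ~{rate_per_cpu:.0f} events/s")
print(f"average time budget: ~{time_budget * 1e3:.0f} ms/event")
```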
IEEE-NPSS Real-Time Conference | 2007
Gerry Bauer; Vincent Boyer; James G Branson; Angela Brett; E. Cano; Andrea Carboni; Marek Ciganek; Sergio Cittolin; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; Michele Gulmini; Esteban Gutierrez Mlot; J. Gutleber; C. Jacobs; Jin Cheol Kim; M. Klute; Elliot Lipeles; Juan Antonio Lopez Perez; Gaetano Maron; F. Meijers; E. Meschi; Roland Moser; S. Murray; Alexander Oh; Luciano Orsini; Christoph Paus; Andrea Petrucci; M. Pieri
The data acquisition system of the CMS experiment at the Large Hadron Collider features a two-stage event builder, which combines data from about 500 sources into full events at an aggregate throughput of 100 GB/s. To meet the requirements, several architectures and interconnect technologies have been quantitatively evaluated. Myrinet will be used for the communication from the underground front-end devices to the surface event-building system. Gigabit Ethernet is deployed in the surface event-building system. Nearly full bisection throughput can be obtained using a custom software driver for Myrinet based on barrel-shifter traffic shaping. This paper discusses the use of Myrinet dual-port network interface cards supporting channel bonding to achieve virtual 5 Gbit/s links with adaptive routing to alleviate the throughput limitations associated with wormhole routing. Adaptive routing is not expected to be suitable for high-throughput event-builder applications in high-energy physics. To corroborate this claim, results from the CMS event builder preseries installation at CERN are presented and the problems of wormhole routing networks are discussed.
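A minimal sketch of the barrel-shifter idea mentioned above: in each time slot every source sends to a distinct destination, and the assignment rotates so no switch output is ever contended (purely illustrative Python; not the CMS Myrinet driver):

```python
# Minimal sketch of barrel-shifter traffic shaping (purely illustrative;
# not the CMS Myrinet driver). In time slot t, source i sends to
# destination (i + t) mod N, so no two sources ever target the same
# destination in the same slot and no switch output is contended.

N = 8  # sources = destinations in one N x N event-builder slice

for t in range(N):  # one full rotation of the barrel
    slot = {src: (src + t) % N for src in range(N)}
    # each destination appears exactly once per slot: contention-free
    assert sorted(slot.values()) == list(range(N))
    print(f"slot {t}: {slot}")
```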
Prepared for International Conference on Computing in High Energy and Nuclear Physics (CHEP 07), Victoria, BC, Canada, 2-7 Sep 2007 | 2007
Gerry Bauer; Vincent Boyer; James G Branson; Angela Brett; E. Cano; Andrea Carboni; Marek Ciganek; Sergio Cittolin; V. O'Dell; S. Erhan; Dominique Gigi
Presented at: International Conference on Computing in High Energy and Nuclear Physics (CHEP 07), Victoria, BC, Canada, 2-7 Sep 2007 | 2009
Gerry Bauer; U. Behrens; Vincent Boyer; J. G. Branson; Angela Brett; E. Cano; Andrea Carboni; Marek Ciganek; Sergio Cittolin; V. O'Dell; S. Erhan; Dominique Gigi; F. Glege; R. Gomez-Reino; M. Gulmini; J. Gutleber; Jonathan Hollar; David J. Lange; Jungin Kim; M. Klute; Elliot Lipeles; Juan Antonio Lopez Perez; G. Maron; F. Meijers; E. Meschi; R. Moser; Esteban Gutierrez Mlot; S. Murray; Alexander Oh; Luciano Orsini
Nuclear Physics B - Proceedings Supplements | 2007
R. Arcidiacono; Gerry Bauer; Vincent Boyer; Angela Brett; E. Cano; Andrea Carboni; Marek Ciganek; Sergio Cittolin; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino Garrido; Michele Gulmini; J. Gutleber; C. Jacobs; Gaetano Maron; F. Meijers; E. Meschi; S. Murray; A. Oh; Luciano Orsini; Christoph Paus; Andrea Petrucci; Jonatan Piedra Gomez; M. Pieri; Lucien Pollet; Attila Racz; H. Sakulin; C. Schwick; K. Sumorok
Archive | 2007
Gerry Bauer; Vincent Boyer; James G Branson; Angela Brett; E. Cano; Andrea Carboni; Marek Ciganek; Sergio Cittolin; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; Michele Gulmini; E. Gutierrez Mlot; J. Gutleber; C. Jacobs; Jin Cheol Kim; M. Klute; Elliot Lipeles; J. A. LopezPerez; Gaetano Maron; F. Meijers; E. Meschi; Roland Moser; S. Murray; Alexander Oh; Luciano Orsini; Christoph Paus; Andrea Petrucci; M. Pieri