Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where L. Masetti is active.

Publication


Featured research published by L. Masetti.


Journal of Instrumentation | 2013

10 Gbps TCP/IP streams from the FPGA for the CMS DAQ eventbuilder network

G. Bauer; Tomasz Bawej; Ulf Behrens; J. G. Branson; Olivier Chaze; Sergio Cittolin; Jose Antonio Coarasa; G-L Darlea; Christian Deldicque; M. Dobson; Aymeric Dupont; S. Erhan; Dominique Gigi; F. Glege; G. Gomez-Ceballos; Robert Gomez-Reino; C. Hartl; Jeroen Hegeman; A. Holzner; L. Masetti; F. Meijers; E. Meschi; R. Mommsen; S. Morovic; Carlos Nunez-Barranco-Fernandez; V. O'Dell; Luciano Orsini; Wojciech Ozga; C. Paus; Andrea Petrucci

For the upgrade of the DAQ of the CMS experiment in 2013/2014, an interface between the custom detector Front End Drivers (FEDs) and the new DAQ event-builder network has to be designed. For lossless data collection from more than 600 FEDs, a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. We present the hardware challenges and the protocol modifications made to TCP in order to simplify its FPGA implementation, together with a set of performance measurements carried out with the current prototype.
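As a rough illustration of why a unidirectional TCP engine fits in an FPGA, the sketch below models a send-only sender with only a handful of states and three sequence-number registers. The state set and all names are illustrative assumptions, not the actual firmware design.

```cpp
#include <cstdint>

// Hypothetical sketch (not the actual firmware): a send-only TCP engine
// needs only a small subset of the RFC 793 state machine, which is what
// makes a hardware implementation tractable.
enum class TcpState { Closed, SynSent, Established, FinWait };

struct TcpSender {
    TcpState state    = TcpState::Closed;
    uint32_t snd_una  = 0;  // oldest unacknowledged sequence number
    uint32_t snd_nxt  = 0;  // next sequence number to send
    uint32_t peer_wnd = 0;  // window advertised by the receiver

    // Data may leave only while it fits in the advertised window; all
    // receive-side complexity stays in the PC's standard TCP/IP stack.
    bool may_send(uint32_t len) const {
        return state == TcpState::Established &&
               (snd_nxt - snd_una) + len <= peer_wnd;
    }

    // An ACK from the receiver frees window space (wrap-safe comparison).
    void on_ack(uint32_t ack, uint32_t wnd) {
        if (static_cast<int32_t>(ack - snd_una) > 0) snd_una = ack;
        peer_wnd = wnd;
    }
};
```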


Journal of Physics: Conference Series | 2011

The data-acquisition system of the CMS experiment at the LHC

G. Bauer; Barbara Beccati; U Behrens; K Biery; Olivier Bouffet; J. G. Branson; Sebastian Bukowiec; E. Cano; H Cheung; Marek Ciganek; Sergio Cittolin; Jose Antonio Coarasa; Christian Deldicque; Aymeric Dupont; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; D Hatton; A. Holzner; Yi Ling Hwong; L. Masetti; F Meijers; E. Meschi; R K Mommsen; R. Moser; V O'Dell; Luciano Orsini; C. Paus; Andrea Petrucci

The data-acquisition system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first level hardware trigger. Assembled events are made available to the high-level trigger which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources. An overview of the architecture and design of the hardware and software of the DAQ system is given. We discuss the performance and operational experience from the first months of LHC physics data taking.
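As a quick sanity check, the quoted design figures imply an average event size of about 1 MB and an average fragment size of roughly 2 kB per source:

```cpp
#include <cstdio>

int main() {
    // Design figures quoted in the abstract.
    const double input_rate_hz  = 100e3;  // first-level trigger accept rate
    const double throughput_Bps = 100e9;  // aggregate DAQ throughput, bytes/s
    const double n_sources      = 500;    // approximate number of read-out sources

    const double event_size_B = throughput_Bps / input_rate_hz; // ~1 MB per event
    const double frag_size_B  = event_size_B / n_sources;       // ~2 kB per fragment

    std::printf("average event size:    %.1f MB\n", event_size_B / 1e6);
    std::printf("average fragment size: %.1f kB\n", frag_size_B / 1e3);
    return 0;
}
```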


Journal of Physics: Conference Series | 2012

The CMS High Level Trigger System: Experience and Future Development

G. Bauer; U Behrens; M Bowen; J. G. Branson; Sebastian Bukowiec; Sergio Cittolin; Jose Antonio Coarasa; Christian Deldicque; M. Dobson; Aymeric Dupont; S. Erhan; A Flossdorf; Dominique Gigi; F. Glege; Robert Gomez-Reino; C. Hartl; Jeroen Hegeman; A. Holzner; Yi Ling Hwong; L. Masetti; F Meijers; E. Meschi; R K Mommsen; V O'Dell; Luciano Orsini; C. Paus; Andrea Petrucci; M. Pieri; G. Polese; Attila Racz

The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ) and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of the order of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the 2010/2011 collider run is reported. The current architecture of the CMS HLT and its integration with the CMS reconstruction framework and the CMS DAQ are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, is discussed.
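The sequence-of-modules structure implies a simple early-rejection loop: an event leaves a trigger path at the first filter it fails, so cheap selections run before costly reconstruction. A minimal sketch, with a hypothetical Event type and FilterModule alias (illustrative names, not the CMSSW API):

```cpp
#include <functional>
#include <vector>

// Hypothetical event record; the real CMSSW event is far richer.
struct Event { /* reconstructed objects, trigger candidates, ... */ };

// One reconstruction-plus-filter step: returns false to reject the event.
using FilterModule = std::function<bool(const Event&)>;

// A trigger path runs its modules in order and stops at the first failure,
// which is why inexpensive filters are scheduled early in the sequence.
bool run_path(const std::vector<FilterModule>& path, const Event& event) {
    for (const auto& module : path)
        if (!module(event)) return false;  // early rejection
    return true;                           // accepted for offline storage
}
```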


Journal of Physics: Conference Series | 2014

The new CMS DAQ system for LHC operation after 2014 (DAQ2)

Gerry Bauer; Tomasz Bawej; Ulf Behrens; James G Branson; Olivier Chaze; Sergio Cittolin; Jose Antonio Coarasa; Georgiana-Lavinia Darlea; Christian Deldicque; M. Dobson; Aymeric Dupont; S. Erhan; Dominique Gigi; F. Glege; G. Gomez-Ceballos; Robert Gomez-Reino; C. Hartl; Jeroen Hegeman; A. Holzner; L. Masetti; F. Meijers; E. Meschi; Remigius K. Mommsen; S. Morovic; Carlos Nunez-Barranco-Fernandez; V. O'Dell; Luciano Orsini; Wojciech Ozga; Christoph Paus; Andrea Petrucci

The Data Acquisition system of the Compact Muon Solenoid experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GByte/s. We present the design of the 2nd generation DAQ system, including studies of the event builder based on advanced networking technologies such as 10 and 40 Gbit/s Ethernet and 56 Gbit/s FDR Infiniband, and the exploitation of multicore CPU architectures. By the time the LHC restarts after the 2013/14 shutdown, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime. In order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The second generation DAQ system, foreseen for 2014, will need to accommodate the readout of both existing and new off-detector electronics and provide an increased throughput capacity. Advances in storage technology could make it feasible to write the output of the event builder to (RAM or SSD) disks and implement the HLT processing entirely file-based.
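The file-based handoff mentioned at the end could, under loose assumptions, look like the sketch below: the event builder writes assembled data to a RAM disk and publishes each file atomically via rename, so a polling HLT process never reads a half-written file. Paths and the naming scheme are illustrative, not the actual DAQ2 design.

```cpp
#include <filesystem>
#include <fstream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Write one chunk of built events to a RAM-disk directory. Writing to a
// hidden temporary name and then renaming publishes the file atomically.
void publish_events(const std::vector<char>& data, const fs::path& ramdisk,
                    int lumisection, int index) {
    const fs::path tmp = ramdisk / (".tmp_" + std::to_string(index));
    const fs::path dst = ramdisk / ("ls" + std::to_string(lumisection) +
                                    "_" + std::to_string(index) + ".raw");
    std::ofstream(tmp, std::ios::binary)
        .write(data.data(), static_cast<std::streamsize>(data.size()));
    fs::rename(tmp, dst);  // atomic within the same filesystem
}
```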


Journal of Physics: Conference Series | 2014

10 Gbps TCP/IP streams from the FPGA for high energy physics

Gerry Bauer; Tomasz Bawej; Ulf Behrens; James G Branson; Olivier Chaze; Sergio Cittolin; Jose Antonio Coarasa; Georgiana-Lavinia Darlea; Christian Deldicque; M. Dobson; Aymeric Dupont; S. Erhan; Dominique Gigi; F. Glege; G. Gomez-Ceballos; Robert Gomez-Reino; C. Hartl; Jeroen Hegeman; A. Holzner; L. Masetti; F. Meijers; E. Meschi; Remigius K. Mommsen; S. Morovic; Carlos Nunez-Barranco-Fernandez; V. O'Dell; Luciano Orsini; Wojciech Ozga; Christoph Paus; Andrea Petrucci

The DAQ system of the CMS experiment at CERN collects data from more than 600 custom detector Front-End Drivers (FEDs). During 2013 and 2014 the CMS DAQ system will undergo a major upgrade to address the obsolescence of current hardware and the requirements posed by the upgrade of the LHC accelerator and various detector components. For lossless data collection from the FEDs, a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. To limit the complexity of the TCP hardware implementation, the DAQ group developed a simplified, unidirectional, but RFC 793 compliant version of the TCP protocol, which allows a PC with the standard Linux TCP/IP stack to be used as a receiver. We present the challenges and the protocol modifications made to TCP in order to simplify its FPGA implementation. We also describe the interaction between the simplified TCP and the Linux TCP/IP stack, including performance measurements.
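Since the FPGA end stays RFC 793 compliant, the receiving PC needs nothing beyond ordinary sockets. A minimal sketch of such a receiver follows; the port number, and the assumption that the FPGA initiates the connection as the client, are illustrative.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <vector>

int main() {
    // Plain TCP server on an illustrative port; the FPGA card is assumed
    // to connect as the client and then stream data in one direction only.
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port        = htons(10000);
    bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(srv, 1);

    int conn = accept(srv, nullptr, nullptr);
    std::vector<char> buf(1 << 20);  // 1 MB read buffer
    ssize_t n;
    while ((n = recv(conn, buf.data(), buf.size(), 0)) > 0) {
        // hand the received bytes to the event-building stage ...
    }
    close(conn);
    close(srv);
    return 0;
}
```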


Journal of Physics: Conference Series | 2008

The CMS Tracker Control System

A Dierlamm; G H Dirkes; Manuel Fahrer; M Frey; F Hartmann; L. Masetti; O Militaru; S Y Shah; R Stringer; A. Tsirou

The Tracker Control System (TCS) is a distributed control software system used to operate about 2000 power supplies for the silicon modules of the CMS Tracker and to monitor its environmental sensors. TCS must thus be able to handle about 10^4 power supply parameters, about 10^3 environmental probes from the Programmable Logic Controllers of the Tracker Safety System (TSS), and about 10^5 parameters read via DAQ from the DCUs in all front-end hybrids and from the CCUs in all control groups. TCS is built on top of an industrial SCADA program (PVSS) extended with a framework developed at CERN (JCOP) and used by all LHC experiments. The logical partitioning of the detector is reflected in the hierarchical structure of the TCS, where commands move down to the individual hardware devices, while states are reported up to the root, which is interfaced to the broader CMS control system. The system computes and continuously monitors the mean and maximum values of critical parameters and updates the percentage of currently operating hardware. Automatic procedures switch off selected parts of the detector with fine granularity, avoiding widespread TSS intervention.
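The command-down/state-up hierarchy described above can be modeled as a tree in which commands fan out to the leaves and each parent summarizes its children by the worst state found below it. This is an illustrative C++ sketch only; the real TCS is built in PVSS/JCOP.

```cpp
#include <algorithm>
#include <memory>
#include <string>
#include <vector>

// Illustrative model of the command-down / state-up pattern.
enum class State { Ok = 0, Warning = 1, Error = 2 };  // ordered by severity

struct Node {
    State state = State::Ok;
    std::vector<std::unique_ptr<Node>> children;

    // Commands fan out from the root down to the individual devices.
    void command(const std::string& cmd) {
        apply(cmd);
        for (auto& child : children) child->command(cmd);
    }

    // Each parent reports the worst state found below it, so a single
    // faulty power-supply channel is visible at the root.
    State summary() const {
        State worst = state;
        for (const auto& child : children)
            worst = std::max(worst, child->summary());
        return worst;
    }

    virtual void apply(const std::string&) {}  // leaves drive real hardware
    virtual ~Node() = default;
};
```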


nuclear science symposium and medical imaging conference | 2015

The CMS Timing and Control Distribution System

Jeroen Hegeman; Jean-Marc Andre; Ulf Behrens; James G Branson; Olivier Chaze; Sergio Cittolin; Georgiana-Lavinia Darlea; Christian Deldicque; Z. Demiragli; M. Dobson; S. Erhan; J. Fulcher; Dominique Gigi; F. Glege; G. Gomez-Ceballos; Magnus Hansen; A. Holzner; Raul Jimenez-Estupiñán; L. Masetti; F. Meijers; E. Meschi; Remigius K. Mommsen; S. Morovic; V. O'Dell; Luciano Orsini; Christoph Paus; M. Pieri; Attila Racz; H. Sakulin; C. Schwick

The Compact Muon Solenoid (CMS) experiment operating at the CERN (European Organization for Nuclear Research) Large Hadron Collider (LHC) is in the process of upgrading several of its detector systems. Adding more individual detector components brings the need to test and commission those components separately from existing ones, so as not to compromise physics data-taking. The CMS Trigger, Timing and Control (TTC) system had reached its limits in terms of the number of separate elements (partitions) that could be supported. A new Timing and Control Distribution System (TCDS) has been designed, built, and commissioned in order to overcome this limit. It also brings additional functionality to facilitate parallel commissioning of new detector elements. We describe the new TCDS system and its components and present results from the first operational experience with the TCDS in CMS.


IEEE Transactions on Nuclear Science | 2012

First Operational Experience With a High-Energy Physics Run Control System Based on Web Technologies

Gerry Bauer; Barbara Beccati; Ulf Behrens; K. Biery; James G Branson; Sebastian Bukowiec; E. Cano; Harry Cheung; Marek Ciganek; Sergio Cittolin; Jose Antonio Coarasa Perez; Christian Deldicque; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; Michele Gulmini; Derek Hatton; Yi Ling Hwong; Constantin Loizides; Frank Ma; L. Masetti; F. Meijers; E. Meschi; Andreas Meyer; Remigius K. Mommsen; R. Moser; V. O'Dell; A. Oh; Luciano Orsini

Run control systems of modern high-energy particle physics experiments have requirements similar to those of today's Internet applications. The Compact Muon Solenoid (CMS) collaboration at CERN's Large Hadron Collider (LHC) therefore decided to build the run control system for its detector based on web technologies. The system is composed of Java web applications distributed over a set of Apache Tomcat servlet containers that connect to a database back-end. Users interact with the system through a web browser. The present paper reports on the successful scaling of the system from a small test setup to the production data acquisition system, which comprises around 10,000 applications running on a cluster of about 1600 hosts. We report on operational aspects during the first phase of operation with colliding beams, including performance, stability, integration with the CMS Detector Control System, and tools to guide the operator.


Journal of Physics: Conference Series | 2012

Distributed error and alarm processing in the CMS data acquisition system

G. Bauer; U Behrens; M Bowen; J. G. Branson; Sebastian Bukowiec; Sergio Cittolin; Jose Antonio Coarasa; Christian Deldicque; M. Dobson; Aymeric Dupont; S. Erhan; A Flossdorf; Dominique Gigi; F. Glege; Robert Gomez-Reino; C. Hartl; Jeroen Hegeman; A. Holzner; Yi Ling Hwong; L. Masetti; F Meijers; E. Meschi; R K Mommsen; V O'Dell; Luciano Orsini; C. Paus; Andrea Petrucci; M. Pieri; G. Polese; Attila Racz

The error and alarm system for the data acquisition of the Compact Muon Solenoid (CMS) at CERN was successfully used for the physics runs at the Large Hadron Collider (LHC) during the first three years of operation. Error and alarm processing entails the notification, collection, storage, and visualization of all exceptional conditions occurring in the highly distributed CMS online system, using a uniform scheme. Alerts and reports are shown online by web application facilities that map them onto graphical models of the system as defined by the user. A persistency service keeps a history of all exceptions that occurred, allowing subsequent retrieval of user-defined time windows of events for later playback or analysis. This paper describes the architecture and the technologies used and deals with operational aspects during the first years of LHC operation. In particular, we focus on performance, stability, and integration with the CMS sub-detectors.
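As an illustration of the time-window retrieval offered by such a persistency service, the sketch below queries a timestamp-sorted alarm log. The Alarm fields and the sorted-log assumption are illustrative, not the actual CMS schema.

```cpp
#include <algorithm>
#include <chrono>
#include <string>
#include <vector>

// Illustrative alarm record kept by the history service.
struct Alarm {
    std::chrono::system_clock::time_point when;
    std::string source;    // which online application raised it
    int         severity;  // e.g. 0 = info, 1 = warning, 2 = error
    std::string message;
};

// Retrieve all alarms inside a user-defined time window. The history is
// assumed sorted by timestamp, as an append-only log naturally would be.
std::vector<Alarm> window(const std::vector<Alarm>& history,
                          std::chrono::system_clock::time_point from,
                          std::chrono::system_clock::time_point to) {
    auto lo = std::lower_bound(history.begin(), history.end(), from,
        [](const Alarm& a, auto t) { return a.when < t; });
    auto hi = std::upper_bound(lo, history.end(), to,
        [](auto t, const Alarm& a) { return t < a.when; });
    return {lo, hi};  // events for playback or analysis
}
```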


Journal of Physics: Conference Series | 2012

Operational experience with the CMS Data Acquisition System

G. Bauer; U Behrens; M Bowen; J. G. Branson; Sebastian Bukowiec; Sergio Cittolin; Jose Antonio Coarasa; Christian Deldicque; M. Dobson; Aymeric Dupont; S. Erhan; A Flossdorf; Dominique Gigi; F. Glege; Robert Gomez-Reino; C. Hartl; Jeroen Hegeman; A. Holzner; Yi Ling Hwong; L. Masetti; F Meijers; E. Meschi; R K Mommsen; V O'Dell; Luciano Orsini; C. Paus; Andrea Petrucci; M. Pieri; G. Polese; Attila Racz

The data-acquisition (DAQ) system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first level hardware trigger. Assembled events are made available to the high-level trigger (HLT), which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources and 10^8 electronic channels. An overview of the architecture and design of the hardware and software of the DAQ system is given. We report on the performance and operational experience of the DAQ and its Run Control System in the first two years of collider runs of the LHC, both in proton-proton and Pb-Pb collisions. We present an analysis of the current performance, its limitations, and the most common failure modes and discuss the ongoing evolution of the HLT capability needed to match the luminosity ramp-up of the LHC.

Collaboration


Dive into L. Masetti's collaborations.

Top Co-Authors

S. Erhan

University of California

A. Holzner

University of California