
Publications


Featured research published by Marek Ciganek.


IEEE-NPSS Real-Time Conference | 2007

CMS DAQ Event Builder Based on Gigabit Ethernet

Gerry Bauer; Vincent Boyer; James G Branson; Angela Brett; E. Cano; Andrea Carboni; Marek Ciganek; Sergio Cittolin; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; Michele Gulmini; Esteban Gutierrez Mlot; J. Gutleber; C. Jacobs; Jungin Kim; M. Klute; Elliot Lipeles; Juan Antonio Lopez Perez; Gaetano Maron; F. Meijers; E. Meschi; Roland Moser; S. Murray; Alexander Oh; Luciano Orsini; Christoph Paus; Andrea Petrucci; M. Pieri

The CMS Data Acquisition System is designed to build and filter events originating from 476 detector data sources at a maximum trigger rate of 100 kHz. Different architectures and switch technologies have been evaluated to accomplish this purpose. Events will be built in two stages: the first stage will be a set of event builders called FED Builders. These will be based on Myrinet technology and will pre-assemble groups of about 8 data sources. The second stage will be a set of event builders called Readout Builders. These will perform the building of full events. A single Readout Builder will build events from 72 sources of 16 kB fragments at a rate of 12.5 kHz. In this paper we present the design of a Readout Builder based on TCP/IP over Gigabit Ethernet and the optimization that was required to achieve the design throughput. This optimization includes the architecture of the Readout Builder, the setup of TCP/IP, and the hardware selection.
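
As a back-of-the-envelope check on the figures quoted in this abstract, the sketch below recomputes the per-slice throughput; note that the slice count of eight is inferred from the 100 kHz and 12.5 kHz figures rather than stated explicitly.

```python
# Back-of-the-envelope check of the Readout Builder figures quoted above.
# Inputs come from the abstract; the slice count is inferred, not stated.
sources_per_builder = 72        # super-fragment sources per Readout Builder
fragment_size_kb = 16           # kB per super-fragment
builder_rate_khz = 12.5         # build rate of one Readout Builder (kHz)
total_trigger_rate_khz = 100    # design Level-1 accept rate (kHz)

# kB * kHz = MB/s, so divide by 1e3 to get GB/s.
per_slice_gb_s = sources_per_builder * fragment_size_kb * builder_rate_khz / 1e3
slices_needed = total_trigger_rate_khz / builder_rate_khz

print(f"per-slice throughput ~ {per_slice_gb_s:.1f} GB/s")   # ~ 14.4 GB/s
print(f"independent slices needed ~ {slices_needed:.0f}")    # 8 (inferred)
```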


Journal of Physics: Conference Series | 2010

The CMS data acquisition system software

Gerry Bauer; U Behrens; K. Biery; James G Branson; E. Cano; Harry Cheung; Marek Ciganek; Sergio Cittolin; J A Coarasa; C Deldicque; E Dusinberre; S. Erhan; F Fortes Rodrigues; Dominique Gigi; F. Glege; Robert Gomez-Reino; J. Gutleber; D Hatton; J F Laurens; J A Lopez Perez; F. Meijers; E. Meschi; A Meyer; R Mommsen; R Moser; V O'Dell; Alexander Oh; Luciano Orsini; V Patras; Christoph Paus

The CMS data acquisition system is made of two major subsystems: event building and event filter. This paper describes the architecture and design of the software that processes the data flow in the currently operating experiment. The central DAQ system relies on industry-standard networks and processing equipment. Adopting a single software infrastructure in all subsystems of the experiment, however, imposes a number of different requirements; high efficiency and configuration flexibility are among the most important ones. The XDAQ software infrastructure has matured over an eight-year development and testing period and has been shown to cope well with the requirements of the CMS experiment.


Journal of Instrumentation | 2009

Commissioning of the CMS High Level Trigger

L Agostino; Gerry Bauer; Barbara Beccati; Ulf Behrens; J Berryhil; K. Biery; T. Bose; Angela Brett; James G Branson; E. Cano; H.W.K. Cheung; Marek Ciganek; Sergio Cittolin; Jose Antonio Coarasa; B. Dahmes; Christian Deldicque; E Dusinberre; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; J. Gutleber; D. Hatton; J Laurens; C. Loizides; F. Meijers; E. Meschi; A. Meyer; R. Mommsen; R. Moser

The CMS experiment will collect data from the proton-proton collisions delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy of up to 14 TeV. The CMS trigger system is designed to cope with unprecedented luminosities and LHC bunch-crossing rates of up to 40 MHz. The unique CMS trigger architecture employs only two trigger levels. The Level-1 trigger is implemented using custom electronics, while the High Level Trigger (HLT) is based on software algorithms running on a large cluster of commercial processors, the Event Filter Farm. We present the major functionalities of the CMS High Level Trigger system as of the start of LHC beam operations in September 2008. The validation of the HLT system in the online environment with Monte Carlo simulated data and its commissioning during cosmic-ray data-taking campaigns are discussed in detail. We conclude with a description of the HLT operations with the first circulating LHC beams before the incident that occurred on 19 September 2008.
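
The two-level selection described here can be illustrated with a toy pipeline; the selection predicates and event variables below are purely hypothetical placeholders, not CMS trigger algorithms.

```python
import random

# Toy two-level trigger chain: a cheap "Level-1" cut followed by a more
# expensive software "HLT" step that runs only on Level-1 accepts.
# Both predicates are hypothetical placeholders, not CMS algorithms.

def level1_accept(event):
    # Stand-in for the fast custom-electronics decision.
    return event["et_sum"] > 0.99

def hlt_accept(event):
    # Stand-in for the software algorithms running on the Event Filter Farm.
    return event["n_hits"] > 50

events = [{"et_sum": random.random(), "n_hits": random.randint(0, 100)}
          for _ in range(100_000)]

l1_accepted = [e for e in events if level1_accept(e)]
hlt_accepted = [e for e in l1_accepted if hlt_accept(e)]

print(f"generated: {len(events)}, after Level-1: {len(l1_accepted)}, "
      f"after HLT: {len(hlt_accepted)}")
```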


Journal of Physics: Conference Series | 2011

The data-acquisition system of the CMS experiment at the LHC

G. Bauer; Barbara Beccati; U Behrens; K Biery; Olivier Bouffet; J. G. Branson; Sebastian Bukowiec; E. Cano; H Cheung; Marek Ciganek; Sergio Cittolin; Jose Antonio Coarasa; Christian Deldicque; Aymeric Dupont; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; D Hatton; A. Holzner; Yi Ling Hwong; L. Masetti; F Meijers; E. Meschi; R K Mommsen; R. Moser; V O'Dell; Luciano Orsini; C. Paus; Andrea Petrucci

The data-acquisition system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first level hardware trigger. Assembled events are made available to the high-level trigger, which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources. An overview of the architecture and design of the hardware and software of the DAQ system is given. We discuss the performance and operational experience from the first months of LHC physics data taking.
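
A quick derivation from the quoted design figures; the per-fragment value is computed here rather than stated, and it is consistent with the roughly 2 kB average fragment size quoted in the Super-Fragment Builder paper listed below.

```python
# Implied average event and fragment sizes from the design figures above.
aggregate_throughput_gb_s = 100   # GB/s
level1_rate_khz = 100             # kHz
n_sources = 500                   # approximate number of read-out sources

event_size_mb = aggregate_throughput_gb_s * 1e3 / (level1_rate_khz * 1e3)
fragment_size_kb = event_size_mb * 1e3 / n_sources

print(f"average event size ~ {event_size_mb:.1f} MB")        # ~ 1 MB
print(f"average fragment size ~ {fragment_size_kb:.1f} kB")  # ~ 2 kB
```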


Journal of Physics: Conference Series | 2010

Monitoring the CMS data acquisition system

Gerry Bauer; U Behrens; K. Biery; James G Branson; E. Cano; Harry Cheung; Marek Ciganek; Sergio Cittolin; J A Coarasa; C Deldicque; E Dusinberre; S. Erhan; F Fortes Rodrigues; Dominique Gigi; F. Glege; Robert Gomez-Reino; J. Gutleber; D Hatton; J F Laurens; J A Lopez Perez; F. Meijers; E. Meschi; A Meyer; R Mommsen; R Moser; V O'Dell; Alexander Oh; Luciano Orsini; V Patras; Christoph Paus

The CMS data acquisition system comprises O(20000) interdependent services that need to be monitored in near real-time. The ability to monitor a large number of distributed applications accurately and effectively is of paramount importance for robust operations. Application monitoring entails the collection of a large number of simple and composed values made available by the software components and hardware devices. A key aspect is that detection of deviations from a specified behaviour is supported in a timely manner, which is a prerequisite for taking corrective actions efficiently. Given the size and time constraints of the CMS data acquisition system, efficient application monitoring is an interesting research problem. We propose an approach that uses the emerging paradigm of Web-service based eventing systems in combination with hierarchical data collection and load balancing. Scalability and efficiency are achieved by a decentralized architecture, splitting up data collection into regions of collections. An implementation following this scheme is deployed as the monitoring infrastructure of the CMS experiment at the Large Hadron Collider. All services in this distributed data acquisition system provide standard web service interfaces via XML, SOAP and HTTP [15,22]. Continuing on this path, we adopted WS-* standards and implemented a monitoring system layered on top of the W3C standards stack. We designed a load-balanced publisher/subscriber system with the ability to include high-speed protocols [10,12] for efficient data transmission [11,13,14] and to serve data in multiple formats.
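
A deliberately simplified, single-process sketch of the hierarchical publish/subscribe collection described above; the region and metric names and the alarm threshold are hypothetical, and the real system uses WS-* eventing over SOAP/HTTP rather than Python callbacks.

```python
from collections import defaultdict

class Collector:
    """Aggregates metric updates and republishes them to subscribers."""
    def __init__(self, name):
        self.name = name
        self.latest = defaultdict(dict)   # service -> {metric: value}
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, service, metric, value):
        self.latest[service][metric] = value
        for callback in self.subscribers:
            callback(self.name, service, metric, value)

# Regional collectors gather values from the services in their region and
# forward them to a central collector (hierarchical collection).
central = Collector("central")
regions = {name: Collector(name) for name in ("region-a", "region-b")}  # hypothetical regions
for region in regions.values():
    region.subscribe(lambda reg, svc, metric, value:
                     central.publish(f"{reg}/{svc}", metric, value))

# A top-level subscriber flags deviations from expected behaviour promptly,
# e.g. an event rate below a (hypothetical) threshold.
def alarm_on_low_rate(collector, service, metric, value):
    if metric == "event_rate_khz" and value < 90:
        print(f"ALARM via {collector}: {service} reports {value} kHz")

central.subscribe(alarm_on_low_rate)

# Services publish simple values into their regional collector.
regions["region-a"].publish("ru-builder-07", "event_rate_khz", 99.5)
regions["region-b"].publish("ru-builder-23", "event_rate_khz", 42.0)   # raises the alarm
```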


Journal of Physics: Conference Series | 2010

The CMS online cluster: IT for a large data acquisition and control cluster

Gerry Bauer; B Beccati; U Behrens; K. Biery; Angela Brett; James G Branson; E. Cano; Harry Cheung; Marek Ciganek; Sergio Cittolin; J A Coarasa; C Deldicque; E Dusinberre; S. Erhan; F Fortes Rodrigues; Dominique Gigi; F. Glege; Robert Gomez-Reino; J. Gutleber; D Hatton; J F Laurens; C Loizides; J A Lopez Perez; F. Meijers; E. Meschi; A Meyer; R Mommsen; R Moser; V O'Dell; Alexander Oh

The CMS online cluster consists of more than 2000 computers running about 10000 application instances. These applications implement the control of the experiment, the event building, the high-level trigger, the online database and the control of the buffering and transfer of data to the Central Data Recording at CERN. In this paper the IT solutions employed to fulfil the requirements of such a large cluster are reviewed. Details are given on the chosen network structure, the configuration management system, the monitoring infrastructure and the implementation of high availability for the services and infrastructure.


Journal of Physics: Conference Series | 2010

The CMS event builder and storage system

Gerry Bauer; B Beccati; U Behrens; K. Biery; Angela Brett; James G Branson; E. Cano; Harry Cheung; Marek Ciganek; Sergio Cittolin; J A Coarasa; C Deldicque; E Dusinberre; S. Erhan; F Fortes Rodrigues; Dominique Gigi; F. Glege; Robert Gomez-Reino; J. Gutleber; D Hatton; M. Klute; J-F Laurens; C Loizides; J A Lopez Perez; F. Meijers; E. Meschi; A Meyer; R K Mommsen; R Moser; V O'Dell

The CMS event builder assembles events accepted by the first-level trigger and makes them available to the high-level trigger. The event builder needs to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources. This paper presents the chosen hardware and software architecture. The system consists of two stages: an initial pre-assembly reducing the number of fragments by one order of magnitude, and a final assembly by several independent readout builder (RU-builder) slices. The RU-builder is based on three separate services: the buffering of event fragments during the assembly, the event assembly, and the data flow manager. A further component is responsible for handling events accepted by the high-level trigger: the storage manager (SM) temporarily stores the events on disk at a peak rate of 2 GB/s until they are permanently archived offline. In addition, events and data-quality histograms are served by the SM to online monitoring clients. We discuss the operational experience from the first months of reading out cosmic-ray data with the complete CMS detector.
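
A heavily simplified sketch of the two-stage assembly described in this and the preceding abstracts; the source count and group size are taken from the abstracts, while the function names and data handling are illustrative only.

```python
import os

N_SOURCES = 500     # approximate number of front-end data sources
GROUP_SIZE = 8      # sources pre-assembled into one super-fragment (first stage)

def read_fragments(event_id):
    # Stand-in for reading one ~2 kB fragment per source for a given event.
    return [os.urandom(2048) for _ in range(N_SOURCES)]

def pre_assemble(fragments):
    # Stage 1: concatenate groups of GROUP_SIZE fragments into super-fragments,
    # reducing the number of pieces by roughly an order of magnitude.
    return [b"".join(fragments[i:i + GROUP_SIZE])
            for i in range(0, len(fragments), GROUP_SIZE)]

def assemble_event(super_fragments):
    # Stage 2: an RU-builder slice concatenates super-fragments into a full event.
    return b"".join(super_fragments)

fragments = read_fragments(event_id=1)
super_fragments = pre_assemble(fragments)
event = assemble_event(super_fragments)

print(f"{len(fragments)} fragments -> {len(super_fragments)} super-fragments "
      f"-> one event of {len(event) / 1e6:.1f} MB")
```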


Journal of Physics: Conference Series | 2008

The run control system of the CMS experiment

Gerry Bauer; Vincent Boyer; J Branson; Angela Brett; E. Cano; Andrea Carboni; Marek Ciganek; Sergio Cittolin; V O'dell; S. Erhan; Dominique Gigi; F. Glege; R G-Reino; Michele Gulmini; J. Gutleber; Jungin Kim; M. Klute; E Lipeles; Juan Antonio Lopez Perez; G Maron; F. Meijers; E. Meschi; R. Moser; Esteban Gutierrez Mlot; S Murray; Alexander Oh; Luciano Orsini; C. Paus; A Petrucci; M Pieri

The CMS experiment at the LHC at CERN will start taking data in 2008. To configure, control and monitor the experiment during data taking, the Run Control system was developed. This paper describes the architecture and the technology used to implement the Run Control system, as well as the deployment and commissioning strategy of this important component of the online software for the CMS experiment.


IEEE-NPSS Real-Time Conference | 2007

The Terabit/s Super-Fragment Builder and Trigger Throttling System for the Compact Muon Solenoid Experiment at CERN

Gerry Bauer; Vincent Boyer; James G Branson; Angela Brett; E. Cano; Andrea Carboni; Marek Ciganek; Sergio Cittolin; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; Michele Gulmini; Esteban Gutierrez Mlot; J. Gutleber; C. Jacobs; Jungin Kim; M. Klute; Elliot Lipeles; Juan Antonio Lopez Perez; Gaetano Maron; F. Meijers; E. Meschi; Roland Moser; S. Murray; Alexander Oh; Luciano Orsini; Christoph Paus; Andrea Petrucci; M. Pieri

The Data Acquisition System of the Compact Muon Solenoid experiment at the Large Hadron Collider reads out event fragments with an average size of 2 kB from around 650 detector front-ends at a rate of up to 100 kHz. The first stage of event building is performed by the Super-Fragment Builder, employing custom-built electronics and a Myrinet optical network. It reduces the number of fragments by one order of magnitude, thereby greatly decreasing the requirements for the subsequent event-assembly stage. Back-pressure from the downstream event processing or variations in the size and rate of events may give rise to buffer overflows in the subdetector front-end electronics, which would result in data corruption and would require a time-consuming re-sync procedure to recover. The Trigger-Throttling System protects against these buffer overflows. It provides fast feedback from any of the subdetector front-ends to the trigger so that the trigger can be throttled before buffers overflow. This paper reports on new performance measurements and on the recent successful integration of a scaled-down setup of the described system with the trigger and with front-ends of all major subdetectors. The ongoing commissioning of the full-scale system is discussed.
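
The "Terabit/s" in the title follows directly from the figures quoted in this abstract, as the short check below shows (decimal units assumed).

```python
# Aggregate read-out bandwidth implied by the figures in the abstract.
n_frontends = 650          # detector front-ends
fragment_size_kb = 2       # average fragment size (kB)
trigger_rate_khz = 100     # maximum trigger rate (kHz)

aggregate_gb_s = n_frontends * fragment_size_kb * trigger_rate_khz / 1e3   # GB/s
aggregate_tb_s = aggregate_gb_s * 8 / 1e3                                  # Tb/s

print(f"aggregate throughput ~ {aggregate_gb_s:.0f} GB/s ~ {aggregate_tb_s:.2f} Tb/s")
```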


IEEE Transactions on Nuclear Science | 2012

First Operational Experience With a High-Energy Physics Run Control System Based on Web Technologies

Gerry Bauer; Barbara Beccati; Ulf Behrens; K. Biery; James G Branson; Sebastian Bukowiec; E. Cano; Harry Cheung; Marek Ciganek; Sergio Cittolin; Jose Antonio Coarasa Perez; Christian Deldicque; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; Michele Gulmini; Derek Hatton; Yi Ling Hwong; Constantin Loizides; Frank Ma; L. Masetti; F. Meijers; E. Meschi; Andreas Meyer; Remigius K. Mommsen; R. Moser; V. O'Dell; A. Oh; Luciano Orsini

Run control systems of modern high-energy particle physics experiments have requirements similar to those of today's Internet applications. The Compact Muon Solenoid (CMS) collaboration at CERN's Large Hadron Collider (LHC) therefore decided to build the run control system for its detector based on web technologies. The system is composed of Java Web Applications distributed over a set of Apache Tomcat servlet containers that connect to a database back-end. Users interact with the system through a web browser. The present paper reports on the successful scaling of the system from a small test setup to the production data acquisition system, which comprises around 10,000 applications running on a cluster of about 1600 hosts. We report on operational aspects during the first phase of operation with colliding beams, including performance, stability, integration with the CMS Detector Control System and tools to guide the operator.

Collaboration


Dive into Marek Ciganek's collaborations.

Top Co-Authors


S. Erhan

University of California


Gerry Bauer

Massachusetts Institute of Technology
