

Publication


Featured research published by J. Gutleber.


Cluster Computing | 2002

Software Architecture for Processing Clusters Based on I2O

J. Gutleber; Luciano Orsini

Mainstream computing equipment and the advent of affordable multi-Gigabit communication technology permit us to address data acquisition and processing problems with clusters of COTS machinery. Such networks typically contain heterogeneous platforms, real-time partitions and even custom devices. Vital overall system requirements are high efficiency and flexibility. In preceding projects we experienced the difficulty of meeting both requirements at once. Intelligent I/O (I2O) is an industry specification that defines a uniform messaging format and execution environment for hardware and operating system independent device drivers in systems with processor based communication equipment. Mapping this concept to a distributed computing environment and encapsulating the details of the specification into an application-programming framework allow us to provide architectural support for (i) efficient and (ii) extensible cluster operation. This paper portrays our view of applying I2O to high-performance clusters. We demonstrate the feasibility of this approach and report on the efficiency of our XDAQ software framework for distributed data acquisition systems.
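The core idea the abstract describes — a uniform message format dispatched to handlers independently of transport or platform — can be sketched as follows. This is an illustrative sketch only, not the XDAQ or I2O API; the `Frame` and `Dispatcher` names are hypothetical.

```python
# Minimal sketch of a uniform messaging layer in the spirit of I2O:
# every message is a frame with a fixed header (target address, function
# code) plus an opaque payload; a dispatcher routes frames to registered
# callbacks regardless of how the frame arrived. Names are illustrative.
from dataclasses import dataclass

@dataclass
class Frame:
    target: int      # logical address of the receiving application
    function: int    # message type / function code
    payload: bytes   # opaque data, interpreted only by the handler

class Dispatcher:
    def __init__(self):
        self._handlers = {}  # (target, function) -> callable

    def bind(self, target, function, handler):
        """Register a callback for a (target, function) pair."""
        self._handlers[(target, function)] = handler

    def deliver(self, frame):
        """Route an incoming frame to its handler; ignore unknown frames."""
        handler = self._handlers.get((frame.target, frame.function))
        return handler(frame.payload) if handler is not None else None

# Usage: a data-acquisition application binds a readout callback.
d = Dispatcher()
d.bind(target=1, function=0x10, handler=lambda payload: len(payload))
assert d.deliver(Frame(1, 0x10, b"\x00" * 64)) == 64
```

Decoupling the frame format from the transport is what lets the same application code run over heterogeneous interconnects, which is the flexibility-plus-efficiency balance the paper argues for.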


Computer Physics Communications | 2003

Towards a homogeneous architecture for high-energy physics data acquisition systems

J. Gutleber; S. Murray; Luciano Orsini

Data acquisition systems are mission-critical components in high-energy physics experiments. They are embedded in an environment of custom electronics, and are frequently characterized by high performance requirements. With the advent of powerful mainstream computing platforms and affordable high-speed networking equipment, system cost and time to completion can be significantly reduced. Building these systems and making them run efficiently, however, still requires a considerable custom software development effort. Therefore we strive for a software architecture flexible and robust enough to be usable in different system configurations and deployment cases. The software should cover the largest possible application domain and provide a practical balance between efficiency and flexibility. This article pinpoints the requirements imposed on such an on-line software infrastructure and sheds light on a viable design approach. As such, this article aims at laying out the foundations for a broader understanding of the importance of fostering a homogeneous architecture for high-energy physics data acquisition.


IEEE-NPSS Real-Time Conference | 2007

CMS DAQ Event Builder Based on Gigabit Ethernet

Gerry Bauer; Vincent Boyer; James G Branson; Angela Brett; E. Cano; Andrea Carboni; Marek Ciganek; Sergio Cittolin; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; Michele Gulmini; Esteban Gutierrez Mlot; J. Gutleber; C. Jacobs; Jungin Kim; M. Klute; Elliot Lipeles; Juan Antonio Lopez Perez; Gaetano Maron; F. Meijers; E. Meschi; Roland Moser; S. Murray; Alexander Oh; Luciano Orsini; Christoph Paus; Andrea Petrucci; M. Pieri

The CMS Data Acquisition System is designed to build and filter events originating from 476 detector data sources at a maximum trigger rate of 100 kHz. Different architectures and switch technologies have been evaluated to accomplish this purpose. Events will be built in two stages: the first stage will be a set of event builders called FED Builders. These will be based on Myrinet technology and will pre-assemble groups of about 8 data sources. The second stage will be a set of event builders called Readout Builders. These will perform the building of full events. A single Readout Builder will build events from 72 sources of 16 kB fragments at a rate of 12.5 kHz. In this paper we present the design of a Readout Builder based on TCP/IP over Gigabit Ethernet and the optimization that was required to achieve the design throughput. This optimization covers the architecture of the Readout Builder, the TCP/IP configuration and the hardware selection.
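The central operation described here — assembling fragments from many independent sources into complete events — can be sketched in a few lines. This is a hedged illustration of generic event building, not CMS production code; the `EventBuilder` class and its interface are hypothetical.

```python
# Sketch of event building: fragments arriving from independent sources
# are grouped by event number; an event is complete once every expected
# source has contributed, at which point the fragments are concatenated
# in source order to form the full event.
from collections import defaultdict

class EventBuilder:
    def __init__(self, n_sources):
        self.n_sources = n_sources
        self._partial = defaultdict(dict)  # event_id -> {source_id: fragment}

    def add_fragment(self, event_id, source_id, fragment):
        """Store a fragment; return the assembled event once complete."""
        self._partial[event_id][source_id] = fragment
        if len(self._partial[event_id]) == self.n_sources:
            frags = self._partial.pop(event_id)
            return b"".join(frags[s] for s in sorted(frags))
        return None  # event still incomplete

# Usage: three sources contribute fragments of event 42 in any order.
eb = EventBuilder(n_sources=3)
assert eb.add_fragment(42, 0, b"aa") is None
assert eb.add_fragment(42, 2, b"cc") is None
assert eb.add_fragment(42, 1, b"bb") == b"aabbcc"
```

In the two-stage scheme above, the FED Builders would play this role over groups of ~8 sources, and the Readout Builders would repeat it over the 72 pre-assembled super-fragments.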


Journal of Physics: Conference Series | 2010

The CMS data acquisition system software

Gerry Bauer; U Behrens; K. Biery; James G Branson; E. Cano; Harry Cheung; Marek Ciganek; Sergio Cittolin; J A Coarasa; C Deldicque; E Dusinberre; S. Erhan; F Fortes Rodrigues; Dominique Gigi; F. Glege; Robert Gomez-Reino; J. Gutleber; D Hatton; J F Laurens; J A Lopez Perez; F. Meijers; E. Meschi; A Meyer; R Mommsen; R Moser; V O'Dell; Alexander Oh; Luciano Orsini; V Patras; Christoph Paus

The CMS data acquisition system is made of two major subsystems: event building and event filter. This paper describes the architecture and design of the software that processes the data flow in the currently operating experiment. The central DAQ system relies on industry standard networks and processing equipment. Adopting a single software infrastructure in all subsystems of the experiment imposes, however, a number of different requirements. High efficiency and configuration flexibility are among the most important ones. The XDAQ software infrastructure has matured over an eight-year development and testing period and has been shown to cope well with the requirements of the CMS experiment.


Journal of Instrumentation | 2009

Commissioning of the CMS High Level Trigger

L Agostino; Gerry Bauer; Barbara Beccati; Ulf Behrens; J Berryhil; K. Biery; T. Bose; Angela Brett; James G Branson; E. Cano; H.W.K. Cheung; Marek Ciganek; Sergio Cittolin; Jose Antonio Coarasa; B. Dahmes; Christian Deldicque; E Dusinberre; S. Erhan; Dominique Gigi; F. Glege; Robert Gomez-Reino; J. Gutleber; D. Hatton; J Laurens; C. Loizides; F. Meijers; E. Meschi; A. Meyer; R. Mommsen; R. Moser

The CMS experiment will collect data from the proton-proton collisions delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy up to 14 TeV. The CMS trigger system is designed to cope with unprecedented luminosities and LHC bunch-crossing rates up to 40 MHz. The unique CMS trigger architecture employs only two trigger levels. The Level-1 trigger is implemented using custom electronics, while the High Level Trigger (HLT) is based on software algorithms running on a large cluster of commercial processors, the Event Filter Farm. We present the major functionalities of the CMS High Level Trigger system as of the start of LHC beam operations in September 2008. The validation of the HLT system in the online environment with Monte Carlo simulated data and its commissioning during cosmic-ray data-taking campaigns are discussed in detail. We conclude with a description of HLT operations with the first circulating LHC beams, before the incident that occurred on 19 September 2008.
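The two-level trigger logic described above can be sketched as a short-circuiting filter chain: a fast Level-1 decision prunes events first, and only survivors run through the sequence of software HLT filters. The predicates and event fields below are hypothetical examples, not actual CMS selections.

```python
# Sketch of a two-level trigger chain (illustrative): a fast level-1
# predicate rejects most events; surviving events pass through a
# sequence of HLT software filters, each of which may reject the event.
def run_trigger(event, level1, hlt_filters):
    """Return True only if the event passes level-1 and every HLT filter."""
    if not level1(event):
        return False  # rejected before any expensive HLT processing
    return all(f(event) for f in hlt_filters)

# Hypothetical selections; events are dicts of reconstructed quantities.
level1 = lambda e: e["et_sum"] > 20.0
hlt = [lambda e: e["muon_pt"] > 5.0, lambda e: e["n_tracks"] >= 2]

assert run_trigger({"et_sum": 30.0, "muon_pt": 7.0, "n_tracks": 4}, level1, hlt)
assert not run_trigger({"et_sum": 10.0, "muon_pt": 7.0, "n_tracks": 4}, level1, hlt)
```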


Computer Physics Communications | 2001

The CMS event builder demonstrator and results with Myrinet

G. Antchev; E. Cano; Sergio Cittolin; S. Erhan; B. Faure; Dominique Gigi; J. Gutleber; C. Jacobs; F. Meijers; E. Meschi; A. Ninane; Luciano Orsini; Lucien Pollet; Attila Racz; D. Samyn; N. Sinanis; W. Schleifer; P. Sphicas

The data acquisition system for the CMS experiment at the Large Hadron Collider (LHC) will require a large and high-performance event building network. Several switch technologies are currently being evaluated in order to compare different architectures for the event builder. One candidate is Myrinet. This paper describes the demonstrator which has been set up to study a small-scale (16×16) event builder based on PCs running Linux connected to Myrinet and Ethernet switches. A detailed study of the Myrinet switch performance has been performed for various traffic conditions, including the behaviour of composite switches. Results from event building studies are presented, including measurements on throughput, overhead and scaling. Traffic shaping techniques have been implemented and their effect on the event building performance has been investigated. The paper reports on the performance and the maximum event rate obtainable using custom software (not described here) for the Myrinet control program and for the low-level communication layer, implemented in a Linux driver. A high-performance sender is emulated by creating a dummy buffer that remains resident in the network interface and moving only the first 64 bytes used by the event building protocol from the host. An approximate scaling in N is presented, assuming a balanced system where each source sends data on average to all destinations at the same rate.
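A common traffic-shaping scheme for N×N event builders is the barrel shifter: in each time slot, source i sends to destination (i + slot) mod N, so no two sources ever target the same output at once. The sketch below illustrates that idea; it is a generic illustration under that assumption, not the shaping technique the paper measured.

```python
# Sketch of barrel-shifter traffic shaping (illustrative): per time slot,
# source i is paired with destination (i + slot) mod N, so the set of
# destinations in every slot is a permutation -- no output contention.
def schedule(n_sources, n_slots):
    """Yield, per slot, the list of (source, destination) pairs."""
    for slot in range(n_slots):
        yield [(i, (i + slot) % n_sources) for i in range(n_sources)]

# With 4 sources, every slot's destinations cover all 4 outputs exactly once:
for slot_pairs in schedule(4, 4):
    dests = [d for _, d in slot_pairs]
    assert sorted(dests) == [0, 1, 2, 3]
```

Avoiding simultaneous sends to one output is what prevents head-of-line blocking in the switch, which is why shaping improves event building throughput under concentrated traffic.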


IEEE NPSS Real-Time Conference | 1999

The CMS event builder demonstrator based on Myrinet

G. Antchev; E. Cano; S. Chatelier; Sergio Cittolin; S. Erhan; Dominique Gigi; J. Gutleber; C. Jacobs; F. Meijers; R. Nicolau; Luciano Orsini; Lucien Pollet; Attila Racz; D. Samyn; N. Sinanis; P. Sphicas

The data acquisition system for the CMS experiment at the Large Hadron Collider (LHC) will require a large and high performance event building network. Several switch technologies are currently being evaluated in order to compare different architectures for the event builder. One candidate is Myrinet. This paper describes the demonstrator which has been set up to study a small-scale (8×8) event builder based on a Myrinet switch. Measurements are presented on throughput, overhead and scaling for various traffic conditions. Results are shown on event building with a push architecture.


Journal of Physics: Conference Series | 2010

Monitoring the CMS data acquisition system

Gerry Bauer; U Behrens; K. Biery; James G Branson; E. Cano; Harry Cheung; Marek Ciganek; Sergio Cittolin; J A Coarasa; C Deldicque; E Dusinberre; S. Erhan; F Fortes Rodrigues; Dominique Gigi; F. Glege; Robert Gomez-Reino; J. Gutleber; D Hatton; J F Laurens; J A Lopez Perez; F. Meijers; E. Meschi; A Meyer; R Mommsen; R Moser; V O'Dell; Alexander Oh; Luciano Orsini; V Patras; Christoph Paus

The CMS data acquisition system comprises O(20000) interdependent services that need to be monitored in near real-time. The ability to monitor a large number of distributed applications accurately and effectively is of paramount importance for robust operations. Application monitoring entails the collection of a large number of simple and composed values made available by the software components and hardware devices. A key aspect is that detection of deviations from a specified behaviour is supported in a timely manner, which is a prerequisite in order to take corrective actions efficiently. Given the size and time constraints of the CMS data acquisition system, efficient application monitoring is an interesting research problem. We propose an approach that uses the emerging paradigm of Web-service based eventing systems in combination with hierarchical data collection and load balancing. Scalability and efficiency are achieved by a decentralized architecture, splitting data collection into regional collections. An implementation following this scheme is deployed as the monitoring infrastructure of the CMS experiment at the Large Hadron Collider. All services in this distributed data acquisition system provide standard web service interfaces via XML, SOAP and HTTP [15,22]. Continuing on this path, we adopted WS-* standards to implement a monitoring system layered on top of the W3C standards stack. We designed a load-balanced publisher/subscriber system with the ability to include high-speed protocols [10,12] for efficient data transmission [11,13,14], serving data in multiple data formats.
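The publisher/subscriber pattern with regional collectors can be sketched minimally: services publish metric updates to the collector for their region, which fans them out to subscribers, so no single node handles all O(20000) services. This is a generic illustration, not the CMS WS-* implementation; the `Collector` class and metric names are hypothetical.

```python
# Sketch of hierarchical publish/subscribe monitoring (illustrative):
# each region runs a collector; services publish metric updates to
# their regional collector, which records the latest value and fans
# the update out to its subscribers. Splitting collection by region
# distributes the monitoring load across collectors.
class Collector:
    def __init__(self, region):
        self.region = region
        self.latest = {}          # service -> last reported value
        self._subscribers = []    # callbacks: (region, service, value)

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, service, value):
        """Record an update and fan it out to all subscribers."""
        self.latest[service] = value
        for cb in self._subscribers:
            cb(self.region, service, value)

# Usage: a dashboard subscribes to one region's metric stream.
seen = []
c = Collector("region-A")
c.subscribe(lambda region, svc, val: seen.append((region, svc, val)))
c.publish("readout-unit-7", {"rate_hz": 12500})
assert seen == [("region-A", "readout-unit-7", {"rate_hz": 12500})]
```

Keeping the last value per service in the collector also gives a natural hook for timely deviation detection: a checker can compare `latest` against a specified behaviour without querying every service directly.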


Archive | 2003

FEDkit: a design reference for CMS data acquisition inputs

V. Brigljevic; G. Bruno; E. Cano; Sergio Cittolin; S. Erhan; Dominique Gigi; F. Glege; R. Gomez-Reino Garrido; Michele Gulmini; J. Gutleber; C. Jacobs; M. Kozlovszky; H. Larsen; F. Meijers; E. Meschi; S. Murray; Alexander Oh; Luciano Orsini; Lucien Pollet; Attila Racz; D. Samyn; P. Scharff-Hansen; C. Schwick; P. Sphicas; Joao Varela

CMS has adopted S-LINK64 [1] as the standard interface between the detector front end readout and the central Data Acquisition (DAQ) system. The S-LINK64 is a specification of a FIFO-like interface. This includes mechanical descriptions of connector and daughter board format and electrical signal definition. The hardware/software package described in this paper (FEDkit) emulates the central DAQ side of this interface at the data rate required by the final DAQ system. The performance, integration with the CMS DAQ software framework, and plans for future developments for the DAQ input interface are also presented.


Journal of Physics: Conference Series | 2010

The CMS online cluster: IT for a large data acquisition and control cluster

Gerry Bauer; B Beccati; U Behrens; K. Biery; Angela Brett; James G Branson; E. Cano; Harry Cheung; Marek Ciganek; Sergio Cittolin; J A Coarasa; C Deldicque; E Dusinberre; S. Erhan; F Fortes Rodrigues; Dominique Gigi; F. Glege; Robert Gomez-Reino; J. Gutleber; D Hatton; J F Laurens; C Loizides; J A Lopez Perez; F. Meijers; E. Meschi; A Meyer; R Mommsen; R Moser; V O'Dell; Alexander Oh

The CMS online cluster consists of more than 2000 computers running about 10000 application instances. These applications implement the control of the experiment, the event building, the high level trigger, the online database and the control of the buffering and transferring of data to the Central Data Recording at CERN. In this paper the IT solutions employed to fulfil the requirements of such a large cluster are reviewed. Details are given on the chosen network structure, the configuration management system, the monitoring infrastructure and the implementation of high availability for the services and infrastructure.

Collaboration


An overview of J. Gutleber's collaborations.

Top Co-Authors


S. Erhan

University of California
