Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where G. Lehmann is active.

Publication


Featured research published by G. Lehmann.


IEEE-NPSS Real-Time Conference | 2005

ATLAS DataFlow: the read-out subsystem, results from trigger and data-acquisition system testbed studies and from modeling

J. C. Vermeulen; M. Abolins; I. Alexandrov; A. Amorim; A. Dos Anjos; E. Badescu; N. Barros; H. P. Beck; R. E. Blair; D. Burckhart-Chromek; M. Caprini; M. D. Ciobotaru; A. Corso-Radu; R. Cranfield; G. Crone; J. W. Dawson; R. Dobinson; M. Dobson; Gary Drake; Y. Ermoline; Roberto Ferrari; M. L. Ferrer; David Francis; S. Gadomski; S. Gameiro; B. Gorini; B. Green; M. Gruwe; S. Haas; W. Haberichter

In the ATLAS experiment at the LHC, the output of readout hardware specific to each subdetector will be transmitted to buffers, located on custom-made PCI cards (ROBINs). The data consist of fragments of events accepted by the first-level trigger at a maximum rate of 100 kHz. Groups of four ROBINs will be hosted in about 150 read-out subsystem (ROS) PCs. Event data are forwarded on request via Gigabit Ethernet links and switches to the second-level trigger or to the event builder. In this paper a discussion of the functionality and real-time properties of the ROS is combined with a presentation of measurement and modeling results for a testbed with a size of about 20% of the final DAQ system. Experimental results on strategies for optimizing the system performance, such as utilization of different network architectures and network transfer protocols, are presented for the testbed, together with extrapolations to the full system.
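
The abstract above describes a pull model: fragments sit in the ROS buffers until the second-level trigger or the event builder requests them, and are cleared afterwards. The C++ sketch below illustrates that request/clear pattern; the class and field names are illustrative, not taken from the ATLAS software.

```cpp
// Minimal sketch (not the ATLAS code) of the pull-style read-out subsystem:
// fragments are buffered per level-1 event ID and only forwarded when the
// level-2 trigger or the event builder asks for them.
#include <cstdint>
#include <iostream>
#include <map>
#include <optional>
#include <vector>

// Hypothetical fragment type: one read-out link's data for one event.
struct RobFragment {
    uint32_t l1Id;             // level-1 event identifier
    uint32_t robId;            // source read-out buffer (ROBIN channel)
    std::vector<uint8_t> data;
};

// Toy ROS: stores fragments and answers data requests / clear commands.
class RosBuffer {
public:
    void insert(const RobFragment& f) { store_[f.l1Id].push_back(f); }

    // A level-2 processor or the event builder requests the fragments of a
    // given event; in the real system this request arrives over Gigabit Ethernet.
    std::optional<std::vector<RobFragment>> request(uint32_t l1Id) const {
        auto it = store_.find(l1Id);
        if (it == store_.end()) return std::nullopt;
        return it->second;
    }

    // Events rejected at level 2 (or fully built) are cleared from the buffer.
    void clear(uint32_t l1Id) { store_.erase(l1Id); }

private:
    std::map<uint32_t, std::vector<RobFragment>> store_;
};

int main() {
    RosBuffer ros;
    ros.insert({42, 0x11, {1, 2, 3}});
    ros.insert({42, 0x12, {4, 5}});

    if (auto frags = ros.request(42)) {
        std::cout << "event 42: " << frags->size() << " fragments\n";
    }
    ros.clear(42);  // event built or rejected, buffer space reclaimed
    return 0;
}
```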


IEEE-NPSS Real-Time Conference | 2005

The ROD Crate DAQ of the ATLAS data acquisition system

S. Gameiro; G. Crone; Roberto Ferrari; D. Francis; B. Gorini; M. Gruwe; M. Joos; G. Lehmann; L. Mapelli; A. Misiejuk; E. Pasqualucci; J. Petersen; R. Spiwoks; L. Tremblet; G. Unel; W. Vandelli; Y. Yasu

In the ATLAS experiment at the LHC, the ROD Crate DAQ provides a complete framework to implement data acquisition functionality at the boundary between the detector specific electronics and the common part of the data acquisition system. Based on a plugin mechanism, it allows selecting and using common services (like data output and data monitoring channels) and developing simple libraries to control, monitor, acquire and/or emulate detector specific electronics. Providing also event building functionality, the ROD Crate DAQ is intended to be the main data acquisition tool for the first phase of detector commissioning. This paper presents the design, functionality and performance of the ROD Crate DAQ and its usage in the ATLAS DAQ and during detector tests.
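
The plugin mechanism mentioned above can be pictured as a small registry mapping configuration names to factories for detector-specific modules. The sketch below illustrates the idea under assumed names (ReadoutModule, EmulatedModule); it is not the actual ROD Crate DAQ interface.

```cpp
// Illustrative sketch of a plugin registry: detector-specific code implements
// a small interface, and the module to load is selected by name at
// configuration time. Names are hypothetical.
#include <cstdint>
#include <functional>
#include <iostream>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Interface a detector-specific read-out module plugin would implement.
class ReadoutModule {
public:
    virtual ~ReadoutModule() = default;
    virtual void configure() = 0;
    virtual std::vector<uint8_t> readFragment() = 0;  // or emulate one
};

// Hypothetical emulator plugin, useful before real electronics are available.
class EmulatedModule : public ReadoutModule {
public:
    void configure() override { std::cout << "emulator configured\n"; }
    std::vector<uint8_t> readFragment() override { return {0xCA, 0xFE}; }
};

// Very small plugin registry: maps a name from the configuration to a factory.
using Factory = std::function<std::unique_ptr<ReadoutModule>()>;

std::map<std::string, Factory>& registry() {
    static std::map<std::string, Factory> r;
    return r;
}

int main() {
    registry()["EmulatedModule"] =
        [] { return std::make_unique<EmulatedModule>(); };

    // The crate controller would read this name from its configuration.
    auto module = registry().at("EmulatedModule")();
    module->configure();
    std::cout << "fragment size: " << module->readFragment().size() << "\n";
    return 0;
}
```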


IEEE-NPSS Real-Time Conference | 2004

The base-line DataFlow system of the ATLAS trigger and DAQ

H. Beck; M. Abolins; A. Dos Anjos; M. Barisonzi; M. Beretta; R. E. Blair; J. A. Bogaerts; H. Boterenbrood; D. Botterill; M. D. Ciobotaru; E.P. Cortezon; R. Cranfield; G. Crone; J. Dawson; R. Dobinson; Y. Ermoline; M. L. Ferrer; D. Francis; S. Gadomski; S. Gameiro; P. Golonka; B. Gorini; B. Green; M. Gruwe; S. Haas; C. Haeberli; Y. Hasegawa; R. Hauser; Christian Hinkelbein; R. E. Hughes-Jones

The base-line design and implementation of the ATLAS DAQ DataFlow system is described. The main components of the DataFlow system, their interactions, bandwidths, and rates are discussed and performance measurements on a 10% scale prototype for the final ATLAS TDAQ DataFlow system are presented. This prototype is a combination of custom design components and of multithreaded software applications implemented in C++ and running in a Linux environment on commercially available PCs interconnected by a fully switched gigabit Ethernet network.
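
The abstract notes that the DataFlow applications are multi-threaded C++ programs running on Linux PCs. The fragment below sketches the kind of worker-pool pattern such applications rely on (a queue of requests drained by several threads); it is a generic illustration, not code from the DataFlow system.

```cpp
// Generic multi-threaded request-handling pattern: a pool of worker threads
// drains a shared queue of requests (e.g. event IDs to serve).
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

int main() {
    std::queue<int> requests;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    auto worker = [&](int id) {
        for (;;) {
            int req;
            {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [&] { return done || !requests.empty(); });
                if (requests.empty()) return;  // shutting down, nothing left
                req = requests.front();
                requests.pop();
            }
            // A real application would fetch and send an event fragment here.
            std::cout << "worker " << id << " served request " << req << "\n";
        }
    };

    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i) pool.emplace_back(worker, i);

    {
        std::lock_guard<std::mutex> lk(m);
        for (int r = 0; r < 16; ++r) requests.push(r);
        done = true;  // no more requests will arrive after this batch
    }
    cv.notify_all();
    for (auto& t : pool) t.join();
    return 0;
}
```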


IEEE-NPSS Real-Time Conference | 2007

Performance of the final Event Builder for the ATLAS Experiment

H. P. Beck; M. Abolins; A. Battaglia; R. E. Blair; A. Bogaerts; M. Bosman; M. D. Ciobotaru; R. Cranfield; G. Crone; J. W. Dawson; R. Dobinson; M. Dobson; A. Dos Anjos; G. Drake; Y. Ermoline; R. Ferrari; M. L. Ferrer; D. Francis; S. Gadomski; S. Gameiro; B. Gorini; B. Green; W. Haberichter; C. Haberli; R. Hauser; Christian Hinkelbein; R. E. Hughes-Jones; M. Joos; G. Kieft; S. Klous

Event data from proton-proton collisions at the LHC will be selected by the ATLAS experiment in a three-level trigger system, which reduces the initial bunch crossing rate of 40 MHz at its first two trigger levels (LVL1+LVL2) to ~3 kHz. At this rate the Event-Builder collects the data from all read-out system PCs (ROSs) and provides fully assembled events to the event-filter (EF), which is the third level trigger, to achieve a further rate reduction to ~200 Hz for permanent storage. The event-builder is based on a farm of O(100) PCs, interconnected via gigabit Ethernet to O(150) ROSs. These PCs run Linux and multi-threaded software applications implemented in C++. All the ROSs and one third of the event-builder PCs are already installed and commissioned. We report on performance tests on this initial system, which show promising results to reach the final data throughput required for the ATLAS experiment.
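
Combining the quoted rates with an assumed mean event size gives a feel for the data volumes involved; the ~1.5 MB event size used below is an assumption made here for illustration and is not stated in the abstract.

```cpp
// Back-of-the-envelope check of the data volumes implied by the quoted rates.
#include <iostream>

int main() {
    const double lvl2_rate_hz  = 3e3;   // input rate to the Event Builder
    const double ef_rate_hz    = 200;   // output rate to permanent storage
    const double event_size_mb = 1.5;   // ASSUMED mean event size
    const int    n_sfi         = 100;   // O(100) event-building PCs

    const double eb_bandwidth = lvl2_rate_hz * event_size_mb;  // MB/s in
    const double per_sfi      = eb_bandwidth / n_sfi;          // MB/s per PC
    const double storage      = ef_rate_hz * event_size_mb;    // MB/s out

    std::cout << "aggregate event-building input : " << eb_bandwidth << " MB/s\n"
              << "per event-builder PC (approx.) : " << per_sfi      << " MB/s\n"
              << "rate to permanent storage      : " << storage      << " MB/s\n";
    return 0;
}
```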


IEEE Transactions on Nuclear Science | 2006

The ROD crate DAQ software framework of the ATLAS data acquisition system

S. Gameiro; G. Crone; R. Ferrari; D. Francis; B. Gorini; M. Gruwe; M. Joos; G. Lehmann; L. Mapelli; A. Misiejuk; E. Pasqualucci; J. Petersen; R. Spiwoks; L. Tremblet; G. Unel; W. Vandelli; Y. Yasu

In the ATLAS experiment at the LHC, the ROD Crate DAQ provides a complete software framework to implement data acquisition functionality at the boundary between the detector specific electronics and the common part of the data acquisition system. Based on a plugin mechanism, it allows selecting and using common services (like data output and data monitoring channels) and developing software to control and acquire data from detector specific modules providing the infrastructure for control, monitoring and calibration. Including also event building functionality, the ROD Crate DAQ is intended to be the main data acquisition tool for the first phase of detector commissioning. This paper presents the design, functionality and performance of the ROD Crate DAQ and its usage in the ATLAS data acquisition system and during detector tests.


IEEE-NPSS Real-Time Conference | 2005

Deployment and use of the ATLAS DAQ in the combined test beam

S. Gadomski; M. Abolins; I. Alexandrov; A. Amorim; C. Padilla-Aranda; E. Badescu; N. Barros; H. P. Beck; R. E. Blair; D. Burckhart-Chromek; M. Caprini; M. Ciobotaru; P. Conde-Muíño; A. Corso-Radu; M. Diaz-Gomez; R. Dobinson; M. Dobson; Roberto Ferrari; M. L. Ferrer; David Francis; S. Gameiro; B. Gorini; M. Gruwe; S. Haas; C. Haeberli; R. Hauser; R. E. Hughes-Jones; M. Joos; A. Kazarov; D. Klose

The ATLAS collaboration at CERN operated a combined test beam (CTB) from May until November 2004. The prototype of the ATLAS data acquisition system (DAQ) was used to integrate the other subsystems into a common CTB setup. Data were collected synchronously from all the ATLAS detectors, which represented nine different detector technologies. Electronics and software of the first-level trigger were used to trigger the setup. Event selection algorithms of the high-level trigger were integrated with the system and were tested with real detector data. The possibility of operating a remote event filter farm synchronized with the ATLAS TDAQ was also tested. Event data, as well as detector conditions data, were made available for offline analysis.


IEEE Transactions on Nuclear Science | 2008

The ATLAS Event Builder

W. Vandelli; M. Abolins; A. Battaglia; H. P. Beck; R. E. Blair; A. Bogaerts; M. Bosman; M. D. Ciobotaru; R. Cranfield; G. Crone; J. W. Dawson; R. Dobinson; M. Dobson; A. Dos Anjos; G. Drake; Y. Ermoline; R. Ferrari; M. L. Ferrer; D. Francis; S. Gadomski; S. Gameiro; B. Gorini; B. Green; W. Haberichter; C. Haberli; R. Hauser; Christian Hinkelbein; R. E. Hughes-Jones; M. Joos; G. Kieft

Event data from proton-proton collisions at the LHC will be selected by the ATLAS experiment in a three-level trigger system, which, at its first two trigger levels (LVL1+LVL2), reduces the initial bunch crossing rate of 40 MHz to ~3 kHz. At this rate, the Event Builder collects the data from the readout system PCs (ROSs) and provides fully assembled events to the Event Filter (EF). The EF is the third trigger level and its aim is to achieve a further rate reduction to ~200 Hz for permanent storage. The Event Builder is based on a farm of O(100) PCs, interconnected via Gigabit Ethernet to O(150) ROSs. These PCs run Linux and multi-threaded software applications implemented in C++. All the ROSs, and substantial fractions of the Event Builder and Event Filter PCs, have been installed and commissioned. We report on performance tests on this initial system, which is capable of going beyond the required data rates and bandwidths for event building for the ATLAS experiment.
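
At its core, event building means waiting until a fragment has arrived from every read-out source before handing the event to the Event Filter. The sketch below shows that completion check with hypothetical names; the real Event Builder performs it concurrently over Gigabit Ethernet.

```cpp
// Illustrative event-assembly step: an event is complete once a fragment has
// arrived from every expected read-out source.
#include <cstdint>
#include <iostream>
#include <map>
#include <set>
#include <utility>
#include <vector>

struct Fragment {
    uint32_t l1Id;
    uint32_t rosId;
    std::vector<uint8_t> payload;
};

class EventAssembler {
public:
    explicit EventAssembler(std::set<uint32_t> expectedRos)
        : expected_(std::move(expectedRos)) {}

    // Returns true when the given event has a fragment from every ROS.
    bool add(const Fragment& f) {
        auto& ev = pending_[f.l1Id];
        ev[f.rosId] = f.payload;
        return ev.size() == expected_.size();
    }

private:
    std::set<uint32_t> expected_;
    std::map<uint32_t, std::map<uint32_t, std::vector<uint8_t>>> pending_;
};

int main() {
    EventAssembler assembler({1, 2, 3});   // three read-out sources
    assembler.add({7, 1, {0xAA}});
    assembler.add({7, 2, {0xBB}});
    if (assembler.add({7, 3, {0xCC}})) {
        std::cout << "event 7 fully built, ready for the Event Filter\n";
    }
    return 0;
}
```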


IEEE Transactions on Nuclear Science | 2008

Performance of the Final Event Builder for the ATLAS Experiment

H. P. Beck; M. Abolins; A. Battaglia; R. E. Blair; A. Bogaerts; M. Bosman; M. D. Ciobotaru; R. Cranfield; G. Crone; J. W. Dawson; R. Dobinson; M. Dobson; A. Dos Anjos; G. Drake; Y. Ermoline; R. Ferrari; M. L. Ferrer; D. Francis; S. Gadomski; S. Gameiro; B. Gorini; B. Green; W. Haberichter; C. Haberli; R. Hauser; Christian Hinkelbein; R. E. Hughes-Jones; M. Joos; G. Kieft; S. Klous

Event data from proton-proton collisions at the LHC will be selected by the ATLAS experiment in a three-level trigger system, which reduces the initial bunch crossing rate of 40 MHz at its first two trigger levels (LVL1+LVL2) to ~3 kHz. At this rate the Event-Builder collects the data from all Read-Out system PCs (ROSs) and provides fully assembled events to the Event-Filter (EF), which is the third level trigger, to achieve a further rate reduction to ~200 Hz for permanent storage. The Event-Builder is based on a farm of O(100) PCs, interconnected via Gigabit Ethernet to O(150) ROSs. These PCs run Linux and multi-threaded software applications implemented in C++. All the ROSs and one third of the Event-Builder PCs are already installed and commissioned. Performance measurements have been carried out on this initial system, with promising results indicating that the required final data rates and bandwidth for the ATLAS event builder are within reach.


Archive | 2004

Performance of the ATLAS DAQ DataFlow system

G. Unel; E. Pasqualucci; M. Gruwe; H. Beck; H. Zobernig; R. Ferrari; M. Abolins; D. Prigent; K. Nakayoshi; Pérez-Réale; R. Hauser; G. Crone; A. J. Lankford; A. Kaczmarska; D. Botterill; Fred Wickens; Y. Nagasaka; L. Tremblet; R. Spiwoks; E Palencia-Cortezon; S. Gameiro; P. Golonka; R. E. Blair; G. Kieft; J. L. Schlereth; J. Petersen; J. A. Bogaerts; A. Misiejuk; Y. Hasegawa; M. Le Vine

The baseline DAQ architecture of the ATLAS experiment at the LHC is introduced, and its present implementation and the performance of the DAQ components as measured in a laboratory environment are summarized. It is shown that the discrete event simulation model of the DAQ system, tuned using these measurements, predicts the behaviour of the prototype configurations well, after which predictions for the final ATLAS system are presented. With the currently available hardware and software, a system using ~140 ROSs with a single 3 GHz CPU, ~100 SFIs with dual 2.4 GHz CPUs and ~500 L2PUs with dual 3.06 GHz CPUs can achieve the dataflow for a 100 kHz Level 1 rate, with 97% reduction at Level 2 and a 3 kHz event building rate.

ATLAS DATAFLOW SYSTEM: The 40 MHz collision rate at the LHC produces about 25 interactions per bunch crossing, resulting in terabytes of data per second, which have to be handled by the detector electronics and the trigger and DAQ system [1]. A Level 1 (L1) trigger system based on custom electronics will reduce the event rate to 75 kHz (upgradeable to 100 kHz; this paper uses the more demanding 100 kHz). The DAQ system is responsible for: the readout of the detector-specific electronics via 1630 point-to-point read-out links (ROLs) hosted by Readout Subsystems (ROSs); the collection and provision of "Region of Interest data" (ROI) to the Level 2 (L2) trigger; and the building of events accepted by the L2 trigger and their subsequent input to the Event Filter (EF) system, where they are subject to further selection criteria. The DAQ also provides the functionality for the configuration, control, information exchange and monitoring of the whole ATLAS detector readout [2]. The applications in the DAQ software dealing with the flow of event and monitoring data as well as the trigger information are called "DataFlow" applications. The DataFlow applications up to the EF input and their interactions are shown in Figure 1.

[Figure 1: ATLAS DAQ-DataFlow applications and their interactions (up to the Event Filter).]
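
The quoted figures are internally consistent, as the small check below shows: a 100 kHz Level 1 rate with 97% rejection at Level 2 leaves a 3 kHz event-building rate, roughly 30 Hz per SFI for the ~100 SFIs mentioned.

```cpp
// Quick arithmetic check of the rates quoted above.
#include <iostream>

int main() {
    const double l1_rate_hz   = 100e3;  // Level 1 accept rate
    const double l2_rejection = 0.97;   // fraction rejected at Level 2
    const int    n_sfi        = 100;    // event-building applications (SFIs)

    const double eb_rate = l1_rate_hz * (1.0 - l2_rejection);  // ~3 kHz
    std::cout << "event-building rate : " << eb_rate << " Hz\n"
              << "per SFI             : " << eb_rate / n_sfi << " Hz\n";
    return 0;
}
```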


IEEE Transactions on Nuclear Science | 2006

Access management in the ATLAS TDAQ

John Erik Sloper; M. Leahu; M. Dobson; G. Lehmann

In the trigger and data acquisition (TDAQ) system for the ATLAS project, authorization of users will be an important task. The main goal of the authorization will be to reduce the chance of potentially dangerous actions being taken by mistake. An access management (AM) component is being developed within the TDAQ to handle these issues. This paper presents the design and implementation of the component. It also describes the authorization model used and how authorization data is stored and administered for the system.
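
The abstract does not spell out the authorization model, so the sketch below should be read as one plausible shape of such a component: a role-based check that only permits an action if one of the user's roles grants it. All names are hypothetical.

```cpp
// Minimal role-based authorization check (illustrative only).
#include <iostream>
#include <map>
#include <set>
#include <string>

class AccessManager {
public:
    void grant(const std::string& role, const std::string& action) {
        permissions_[role].insert(action);
    }
    void assign(const std::string& user, const std::string& role) {
        roles_[user].insert(role);
    }
    // A potentially dangerous action is allowed only if one of the user's
    // roles explicitly permits it.
    bool isAllowed(const std::string& user, const std::string& action) const {
        auto it = roles_.find(user);
        if (it == roles_.end()) return false;
        for (const auto& role : it->second) {
            auto p = permissions_.find(role);
            if (p != permissions_.end() && p->second.count(action)) return true;
        }
        return false;
    }

private:
    std::map<std::string, std::set<std::string>> permissions_;  // role -> actions
    std::map<std::string, std::set<std::string>> roles_;        // user -> roles
};

int main() {
    AccessManager am;
    am.grant("shifter", "start_run");
    am.assign("lehmann", "shifter");

    std::cout << std::boolalpha
              << am.isAllowed("lehmann", "start_run")  << "\n"   // true
              << am.isAllowed("lehmann", "reboot_ros") << "\n";  // false
    return 0;
}
```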

Collaboration


Dive into G. Lehmann's collaborations.

Top Co-Authors

M. Abolins
Michigan State University

R. E. Blair
Argonne National Laboratory

G. Crone
University College London

R. Hauser
Michigan State University