G. Lehmann Miotto
CERN
Publications
Featured research published by G. Lehmann Miotto.
Journal of Physics: Conference Series | 2015
J. Anderson; A. Borga; H. Boterenbrood; H. Chen; K. Chen; G. Drake; D. Francis; B. Gorini; Francesco Lanni; G. Lehmann Miotto; L. J. Levinson; J. Narevicius; Christian Plessl; A. Roich; S. Ryu; F. Schreuder; J. Schumacher; W. Vandelli; J. C. Vermeulen; J. Zhang
The ATLAS experiment at CERN is planning full deployment of a new unified optical link technology for connecting detector front-end electronics on the timescale of the LHC Run 4 (2025). It is estimated that roughly 8000 GBT (GigaBit Transceiver) links, with transfer rates up to 10.24 Gbps, will replace existing links used for readout, detector control and distribution of timing and trigger information. A new class of devices will be needed to interface many GBT links to the rest of the trigger, data-acquisition and detector-control systems. This paper presents FELIX (Front End LInk eXchange), a PC-based device to route data from and to multiple GBT links via a high-performance general-purpose network capable of a total throughput up to O(20 Tbps). FELIX implies architectural changes to the ATLAS data acquisition system, such as the use of industry-standard COTS components early in the DAQ chain. Additionally, the design and implementation of a FELIX demonstration platform are presented, and hardware and software aspects are discussed.
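A back-of-envelope check of the figures quoted above (our arithmetic, not a number from the paper):

\[
8000~\text{links} \times 10.24~\text{Gb/s} \approx 82~\text{Tb/s},
\]

an upper bound on the aggregate front-end bandwidth if every link ran at full rate; the O(20 Tbps) total network throughput quoted for FELIX therefore presumes average link occupancy well below the maximum.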
Journal of Instrumentation | 2016
J. Anderson; K. Bauer; A. Borga; H. Boterenbrood; H. Chen; K. Chen; G. Drake; M. Dönszelmann; D. Francis; D. Guest; B. Gorini; M. Joos; Francesco Lanni; G. Lehmann Miotto; L. J. Levinson; J. Narevicius; W. Panduro Vazquez; A. Roich; S. Ryu; F. Schreuder; J. Schumacher; W. Vandelli; J. C. Vermeulen; Daniel Whiteson; Weihao Wu; J. Zhang
The ATLAS Phase-I upgrade (2019) requires a Trigger and Data Acquisition (TDAQ) system able to trigger and record data from up to three times the nominal LHC instantaneous luminosity. The Front-End LInk eXchange (FELIX) system provides an infrastructure to achieve this in a scalable, detector-agnostic and easily upgradeable way. It is a PC-based gateway, interfacing custom radiation-tolerant optical links from front-end electronics, via PCIe Gen3 cards, to a commodity switched Ethernet or InfiniBand network. FELIX enables reducing custom electronics in favour of software running on commercial servers. The FELIX system, the design of the PCIe prototype card and the integration test results are presented in this paper.
Journal of Physics: Conference Series | 2012
A. Kazarov; G. Lehmann Miotto; L. Magnoni
The Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment at CERN is the infrastructure responsible for collecting and transferring ATLAS experimental data from detectors to the mass storage system. It relies on a large, distributed computing environment, including thousands of computing nodes with thousands of applications running concurrently. In such a complex environment, information analysis is fundamental for controlling application behaviour, error reporting and operational monitoring. During data-taking runs, streams of messages sent by applications via the message reporting system, together with data published by applications via information services, are the main sources of knowledge about the correctness of running operations. The flow of data produced (with an average rate of O(1-10 kHz)) is constantly monitored by experts to detect problems or misbehaviour. This requires strong competence and experience in understanding and discovering problems and root causes, and often the meaningful information is not in a single message or update but in the aggregated behaviour over a certain time-line. The AAL project aims to reduce manpower needs and to assure a constantly high quality of problem detection by automating most of the monitoring tasks and providing real-time correlation of data-taking and system metrics. The project combines technologies from different disciplines: it leverages an event-driven architecture to unify the flow of data from the ATLAS infrastructure, a Complex Event Processing (CEP) engine for the correlation of events, and a message-oriented architecture for component integration. The project is composed of two main components: a core processing engine, responsible for the correlation of events through expert-defined queries, and a web-based front-end to present real-time information and interact with the system. All components work in a loosely coupled, event-based architecture, with a message broker centralizing all communication between modules. The result is an intelligent system able to extract and compute relevant information from the flow of operational data and provide real-time feedback to human experts, who can promptly react when needed. The paper presents the design and implementation of the AAL project, together with the results of its usage as an automated monitoring assistant for the ATLAS data-taking infrastructure.
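To make the correlation idea concrete, here is a minimal, self-contained sketch (in plain Java, illustrative only, not the AAL code) of the kind of rule a CEP engine evaluates: raise an alert when the number of error messages from an application exceeds a threshold within a sliding time window.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy version of a CEP-style sliding-window rule: "alert if N or more
// ERROR messages arrive within W milliseconds". All names are illustrative.
public class SlidingWindowAlert {
    private final Deque<Long> timestamps = new ArrayDeque<>(); // ms since epoch
    private final long windowMs;
    private final int threshold;

    public SlidingWindowAlert(long windowMs, int threshold) {
        this.windowMs = windowMs;
        this.threshold = threshold;
    }

    /** Feed one ERROR message timestamp; returns true if the rate rule fires. */
    public boolean onErrorMessage(long nowMs) {
        timestamps.addLast(nowMs);
        // Evict events that have fallen out of the sliding window.
        while (!timestamps.isEmpty() && nowMs - timestamps.peekFirst() > windowMs) {
            timestamps.removeFirst();
        }
        return timestamps.size() >= threshold;
    }

    public static void main(String[] args) {
        SlidingWindowAlert rule = new SlidingWindowAlert(10_000, 3); // 3 errors / 10 s
        long t0 = System.currentTimeMillis();
        System.out.println(rule.onErrorMessage(t0));          // false
        System.out.println(rule.onErrorMessage(t0 + 1_000));  // false
        System.out.println(rule.onErrorMessage(t0 + 2_000));  // true: threshold reached
    }
}
```

A real CEP engine expresses such rules declaratively as queries over event streams; the value is that the expert writes the rule once and the engine evaluates it continuously against the operational data flow.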
Journal of Instrumentation | 2016
J. Anderson; A. Borga; H. Boterenbrood; H. Chen; K. Chen; G. Drake; M. Dönszelmann; D. Francis; B. Gorini; Francesco Lanni; G. Lehmann Miotto; L. J. Levinson; J. Narevicius; A. Roich; Soo Ryu; F. Schreuder; J. Schumacher; W. Vandelli; J. C. Vermeulen; Weihao Wu; Jinlong Zhang
For new detector and trigger systems to be installed in the ATLAS experiment after LHC Run 2, a new approach will be followed for front-end electronics interfacing. The FELIX (Front-End LInk eXchange) system will function as a gateway connecting, on one side, detector and trigger electronics links, while also providing timing and trigger information, and, on the other side, a commodity switched network built using standard technology (either Ethernet or InfiniBand). The new approach is described in this paper, and results achieved so far are presented.
Journal of Physics: Conference Series | 2011
M. L. Valsan; M. Dobson; G. Lehmann Miotto; D. A. Scannicchio; S. Schlenker; V. Filimonov; V. Khomoutnikov; I. Dumitru; A. S. Zaytsev; A. A. Korol; A. Bogdantchikov; G. Avolio; C. Caramarcu; S. Ballestrero; G. L. Darlea; M. Twomey; F. Bujor
The complexity of the ATLAS experiment motivated the deployment of an integrated Access Control System in order to guarantee safe and optimal access for a large number of users to the various software and hardware resources. Such an integrated system was foreseen since the design of the infrastructure and is now central to the operations model. In order to cope with the ever-growing needs of restricting access to all resources used within the experiment, the Role-Based Access Control (RBAC) system previously developed has been extended and improved. The paper starts with a short presentation of the RBAC design and implementation, and of the changes made to the system to allow the management and usage of roles to control access to the vast and diverse set of resources. The RBAC implementation uses a directory service based on the Lightweight Directory Access Protocol to store the users (~3000), roles (~320), groups (~80) and access policies. The information is kept in sync with various other databases and directory services: human resources, central CERN IT, the CERN Active Directory and the Access Control Database used by DCS. The paper concludes with a detailed description of the integration across all areas of the system.
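As an illustration of how roles can be resolved from an LDAP directory, here is a minimal Java (JNDI) sketch; the server URL, base DNs and attribute names are placeholders, not the actual ATLAS directory schema.

```java
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import java.util.Hashtable;

// Look up which roles list a given user as a member, RBAC-style.
// Placeholder host and DNs throughout.
public class RoleLookup {
    public static void main(String[] args) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.org:389"); // placeholder host

        DirContext ctx = new InitialDirContext(env);
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);

        // Find all role entries whose 'member' attribute contains the user's DN.
        NamingEnumeration<SearchResult> results = ctx.search(
                "ou=roles,dc=example,dc=org",
                "(member=uid=jdoe,ou=users,dc=example,dc=org)",
                controls);
        while (results.hasMore()) {
            System.out.println(results.next().getNameInNamespace());
        }
        ctx.close();
    }
}
```

Keeping roles and policies in a directory service makes it straightforward to synchronize them with external sources (human resources, central IT databases), as the abstract describes.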
Journal of Instrumentation | 2014
Pietro Maoddi; A. Mapelli; P. Bagiacchi; B. Gorini; M. Haguenauer; G. Lehmann Miotto; R. Murillo Garcia; F. Safai Tehrani; S. Veneziano; Philippe Renaud
Microfluidic channels obtained by SU-8 photolithography and filled with liquid scintillators were recently demonstrated to be an interesting technology for the implementation of novel particle detectors. The main advantages of this approach are the intrinsic radiation resistance resulting from the simple microfluidic circulation of the active medium and the possibility to manufacture devices with high spatial resolution and low material budget using microfabrication techniques. Here we explore a different technological implementation of this concept, reporting on scintillating detectors based on silicon microfluidic channels. A process for manufacturing microfluidic devices on silicon substrates, featuring microchannel arrays suitable for light guiding, was developed. Such a process can in principle be combined with standard CMOS processing, leading in the future to tight integration with the readout photodetectors and electronics. Several devices were manufactured, featuring microchannel geometries differing in depth, width and pitch. A preliminary characterization of the prototypes was performed by means of a photomultiplier tube coupled to the microchannel ends, in order to detect the scintillation light produced upon irradiation with beta particles from a 90Sr source. The photoelectron spectra thus obtained were fitted with the expected output function in order to extract the light yield.
Journal of Physics: Conference Series | 2012
Aaron Harwood; G. Lehmann Miotto; L. Magnoni; W. Vandelli; D Savu
This paper describes a new approach to the visualization of information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general-purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks that are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data sets individually is already in place, there is currently no way to view them together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple providers that have different structures. It is capable of aggregating and correlating the data according to user-defined criteria. Finally, it visualizes the collected data using a flexible and interactive front-end web system. Structurally, the project comprises three main levels of the data collection cycle. Level 0 represents the information sources within ATLAS. These providers do not store information in a uniform fashion, so the first step of the project was to define a common interface through which to expose stored data. The interface designed for the project originates from the Google Data Protocol API: the idea is to allow read-only access to data providers through HTTP requests similar in format to the SQL query structure, providing a standardized way to access the different information sources within ATLAS. Level 1 can be considered the engine of the system: its primary task is to gather data from multiple data sources via the common interface, to correlate this data together or over a defined time series, and to expose the combined data as a whole to the Level 2 web interface. Level 2 is designed to present the data in a similar style and aesthetic, despite the different data sources. Pages can be constructed, edited and personalized by users to suit the specific data being shown, and can show a collection of graphs displaying data potentially coming from multiple sources. The project as a whole has broad scope thanks to the uniform approach chosen for exposing data and the flexibility of Level 2 in presenting results. The paper describes in detail the design and implementation of this new tool; in particular we go through the project architecture, the implementation choices and examples of usage of the system in place within the ATLAS TDAQ infrastructure.
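To give a flavour of the Level 0 common interface, here is a hypothetical read-only query in the spirit described above (SQL-like selection and filtering carried in HTTP query parameters, following the Google Data Protocol style); the host, path and parameter names are invented for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical ADAM Level-0-style query: select two fields from the "hosts"
// provider, filter on a condition, and bound the time range. The URL scheme
// is invented for illustration; only the read-only GET pattern is the point.
public class ProviderQuery {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(
                "http://adam.example.org/provider/hosts"
                + "?fields=hostname,temperature"
                + "&where=temperature%3E70"            // URL-encoded "temperature>70"
                + "&from=2012-05-01T00:00:00&to=2012-05-02T00:00:00"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // uniform result set, e.g. JSON
    }
}
```

Because every provider answers the same query shape, the Level 1 engine can correlate data across sources without provider-specific code.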
Journal of Physics: Conference Series | 2012
A. Corso Radu; G. Lehmann Miotto; L. Magnoni
A large experiment like ATLAS at the LHC (CERN), with over three thousand members and a shift crew of 15 people running the experiment 24/7, needs an easy and reliable tool to gather all the information concerning the experiment's development, installation, deployment and exploitation over its lifetime. With the increasing number of users and the accumulation of stored information since the experiment's start-up, the electronic logbook currently in use, ATLOG, started to show its limitations in terms of speed and usability. Its monolithic architecture makes maintenance and the implementation of new functionality a hard, almost impossible process. A new tool, ELisA, has been developed to replace the existing ATLOG. It is based on modern web technologies: the Spring framework with a Model-View-Controller architecture was chosen, helping to build flexible and easy-to-maintain applications. The new tool implements all features of the old electronic logbook with increased performance and better graphics; it uses the same database back-end for portability reasons. In addition, several new requirements have been accommodated which could not be implemented in ATLOG. This paper describes the architecture, implementation and performance of ELisA, with particular emphasis on the choices that made the system scalable and very fast, and on the aspects that could be reused in different contexts to build a similar application.
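As a sketch of the Model-View-Controller pattern named above, here is a minimal Spring MVC controller of the kind such a stack supports; the entry entity, URL and view names are illustrative, not ELisA's actual code.

```java
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

// Minimal MVC sketch: the controller fetches the model (a logbook entry)
// and delegates rendering to a view template, keeping the three concerns
// separate and independently maintainable.
@Controller
public class LogbookController {

    @RequestMapping(value = "/entries/{id}", method = RequestMethod.GET)
    public String showEntry(@PathVariable("id") long id, Model model) {
        model.addAttribute("entry", findEntry(id)); // model: data for the view
        return "entry"; // logical view name, resolved by a ViewResolver
    }

    private Object findEntry(long id) {
        // Placeholder for a lookup against the logbook database back-end.
        return "entry #" + id;
    }
}
```

The separation is what makes such an application easy to maintain: the database back-end, the controller logic and the page templates can each evolve independently, in contrast to a monolithic design.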
Journal of Physics: Conference Series | 2012
G. Avolio; A. Corso Radu; A. Kazarov; G. Lehmann Miotto; L. Magnoni
The Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment is a very complex distributed computing system, composed of more than 20000 applications running on more than 2000 computers. The TDAQ Controls system has to guarantee the smooth and synchronous operation of all the TDAQ components and has to provide the means to minimize the downtime of the system caused by runtime failures. During data-taking runs, streams of information messages sent or published by running applications are the main sources of knowledge about the correctness of running operations. The huge flow of operational monitoring data produced is constantly monitored by experts in order to detect problems or misbehaviours. Given the scale of the system and the rates of data to be analyzed, the automation of the system functionality in the areas of operational monitoring, system verification, error detection and recovery is a strong requirement. To accomplish its objective, the Controls system includes some high-level components based on advanced software technologies, namely a rule-based Expert System and Complex Event Processing engines. The chosen techniques make it possible to formalize, store and reuse the knowledge of experts, and thus to assist the shifters in the ATLAS control room during data-taking activities.
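As a toy illustration of the rule-based approach (not the actual TDAQ Controls implementation), the sketch below encodes expert knowledge as condition/action pairs that are evaluated automatically against application status updates.

```java
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Expert knowledge captured once as declarative rules, then applied
// automatically: the essence of a rule-based recovery system.
// All names and states are invented for illustration.
public class RecoveryRules {
    record AppStatus(String name, String state, int crashCount) {}
    record Rule(String description,
                Predicate<AppStatus> condition,
                Consumer<AppStatus> action) {}

    public static void main(String[] args) {
        List<Rule> rules = List.of(
            new Rule("restart a crashed application",
                     s -> s.state().equals("DEAD") && s.crashCount() < 3,
                     s -> System.out.println("restarting " + s.name())),
            new Rule("escalate repeated crashes to the shifter",
                     s -> s.crashCount() >= 3,
                     s -> System.out.println("alert shifter about " + s.name())));

        AppStatus status = new AppStatus("readout-app-12", "DEAD", 3);
        for (Rule r : rules) {
            if (r.condition().test(status)) {
                r.action().accept(status); // fire the first matching rule
                break;
            }
        }
    }
}
```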
Journal of Physics: Conference Series | 2011
G. Avolio; M. Caprini; G. Lehmann Miotto
The ATLAS experiment at the Large Hadron Collider at CERN relies on a complex and highly distributed Trigger and Data Acquisition system to gather and select particle collision data at unprecedented energy and rates. The main interaction point between the operator in charge of data taking and the Trigger and Data Acquisition (TDAQ) system is the Integrated Graphical User Interface (IGUI). The tasks of the IGUI can be coarsely grouped into three categories: system status monitoring, control and configuration. Status monitoring implies the presentation of the global status of the TDAQ system and of the ATLAS run, as well as the visualization of errors and other messages generated by the system; control includes the functionality to interact with the TDAQ Run Control and Expert System; configuration implies the possibility to view the current status of the TDAQ system configuration and to modify some of its parameters. This paper describes the IGUI design and implementation, with particular emphasis on the design choices taken to address the main performance and functionality requirements.