S. Kolos
University of California, Irvine
Publications
Featured research published by S. Kolos.
IEEE Transactions on Nuclear Science | 2004
I. Alexandrov; A. Amorim; E. Badescu; M. Barczyk; D. Burckhart-Chromek; M. Caprini; J.D.S. Conceicao; J. Flammer; M. Dobson; R. Hart; R. W. L. Jones; A. Kazarov; S. Kolos; V. M. Kotov; D. Klose; D. Liko; J. G. R. Lima; Levi Lúcio; L. Mapelli; M. Mineev; Luis G. Pedro; Y. F. Ryabov; I. Soloviev; H. Wolters
The Online Software is the global system software of the ATLAS data acquisition (DAQ) system, responsible for the configuration, control and information sharing of the ATLAS DAQ system. A test beam facility offers the ATLAS detectors the possibility to study important performance aspects as well as to proceed on the way to the final ATLAS DAQ system. Last year, three ATLAS subdetectors, separately and combined, successfully used the Online Software to control their data taking. In this paper, we describe the different components of the Online Software together with their usage at the ATLAS test beam.
Computer Physics Communications | 1998
G. Ambrosini; D. Burckhart; M. Caprini; M. Cobal; P.-Y. Duval; F. Etienne; Roberto Ferrari; David Francis; R. W. L. Jones; M. Joos; S. Kolos; A. Lacourt; A. Le Van Suu; A. Mailov; L. Mapelli; M. Michelotto; G. Mornacchi; R. Nacasch; M. Niculescu; K. Nurdan; C. Ottavi; A. Patel; Frédéric Pennerath; J. Petersen; G. Polesello; D. Prigent; Z. Qian; J. Rochez; F. Scuri; M. Skiadelli
A project has been approved by the ATLAS Collaboration for the design and implementation of a Data Acquisition and Event Filter prototype, based on the functional architecture described in the ATLAS Technical Proposal. The prototype consists of a full “vertical” slice of the ATLAS Data Acquisition and Event Filter architecture, including all the hardware and software elements of the data flow, its control and monitoring, as well as all the elements of a complete on-line system. This paper outlines the project, its goals, structure, schedule and current status, and describes details of the system architecture and its components.
IEEE Transactions on Nuclear Science | 2007
W. Vandelli; P. Adragna; D. Burckhart; M. Bosman; M. Caprini; A. Corso-Radu; M. J. Costa; M. Della Pietra; J. Von Der Schmitt; A. Dotti; I. Eschrich; M. L. Ferrer; R Ferrari; Gabriella Gaudio; Haleh Khani Hadavand; S. J. Hillier; M. Hauschild; B. Kehoe; S. Kolos; K. Kordas; R. A. McPherson; M. Mineev; C. Padilla; T. Pauly; I. Riu; C. Roda; D. Salvatore; Ingo Scholtes; S. Sushkov; H. G. Wilkens
ATLAS is one of the four experiments under construction along the Large Hadron Collider (LHC) ring at CERN. The LHC will produce interactions at a center-of-mass energy of √s = 14 TeV with a frequency of 40 MHz. The detector consists of more than 140 million electronic channels. The challenging experimental environment and the extreme detector complexity impose the necessity of a common, scalable, distributed monitoring framework, which can be tuned for optimal use by the different ATLAS sub-detectors at the various levels of the ATLAS data flow. This paper presents the architecture of this monitoring software framework and describes its current implementation, which has already been used at the ATLAS beam test activity in 2004. Preliminary performance results, obtained on a computer cluster of 700 nodes, are also presented, showing that the performance of the current implementation is within the range of the final ATLAS requirements.
ieee npss real time conference | 1999
R. W. L. Jones; S. Kolos; L. Mapelli; Y. F. Ryabov
This paper presents the experience of using the Common Object Request Broker Architecture (CORBA) in the ATLAS prototype DAQ project. Many communication links in the DAQ system have been designed and implemented using the CORBA standard. A public domain package, called Inter-Language Unification (ILU), has been used to implement CORBA-based communications between DAQ components in a local area network (LAN) of heterogeneous computers. The CORBA Naming Service provides the principal mechanism through which most clients of an ORB-based system locate the objects they intend to use. In our project, special conventions are employed that meaningfully partition the name space of the Naming Service according to divisions in the DAQ system itself. We describe the Inter Process Communication (IPC) package, implemented in C++ on top of CORBA/ILU, which incorporates this facility and hides the details of the naming schema. The development procedure and environment for remote database access using IPC are described. Various end-user interfaces have been implemented in the Java language that communicate with C++ servers via CORBA/ILU. To support such interfaces, a second implementation of IPC, in Java, has been developed. The design and implementation of these connections are described. An alternative CORBA implementation, ORBacus, has been evaluated and compared with ILU.
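To make the naming convention concrete, the C++ sketch below builds and resolves a structured name against the standard CORBA C++ mapping. The three-level partition/type/object convention and the function names are illustrative assumptions, not the actual ATLAS IPC schema.

```cpp
// Minimal sketch against the standard CORBA C++ mapping; the ORB header is
// implementation-specific (e.g. omniORB or ORBacus). The partition/type/object
// convention below is an illustrative guess at how a name space could mirror
// the divisions of the DAQ system, not the actual IPC naming schema.
#include <string>

CosNaming::Name makeName(const std::string& partition,
                         const std::string& type,
                         const std::string& object)
{
    // Build a three-level structured name inside the Naming Service graph.
    CosNaming::Name name;
    name.length(3);
    name[0].id = CORBA::string_dup(partition.c_str());
    name[1].id = CORBA::string_dup(type.c_str());
    name[2].id = CORBA::string_dup(object.c_str());
    return name;
}

CORBA::Object_ptr lookup(CosNaming::NamingContext_ptr root,
                         const std::string& partition,
                         const std::string& type,
                         const std::string& object)
{
    // Resolve the structured name to an object reference; the caller narrows
    // the result to the expected interface. resolve() throws
    // CosNaming::NamingContext::NotFound if no such binding exists.
    return root->resolve(makeName(partition, type, object));
}
```

A wrapper of this kind is what lets the IPC package hide the naming schema: clients ask for a component by its place in the DAQ structure rather than by a raw Naming Service path.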
IEEE Transactions on Nuclear Science | 1998
A. Amorim; E. Badescu; D. Burckhart; M. Caprini; L. Cohen; P.-Y. Duval; B. Jones; S. Kolos; L. Mapelli; M. Michelotto; R. Nacasch; Z. Qian; A. Radu; Y. F. Ryabov; I. Soloviev; S. Wheeler; T. Wildish; H. Wolters
This paper presents the experience of using a CORBA-based communication package for inter-component communication of control and status information in the ATLAS prototype DAQ project. A public domain package, called Inter-Language Unification (ILU), has been used to implement CORBA-based communication between DAQ components in a local area network (LAN) of heterogeneous computers. The selection of the CORBA standard and the ILU implementation is judged against the requirements of the DAQ system. An overview of ILU is included. Several components of the DAQ system have been designed and implemented using CORBA/ILU, for which the development procedure and environment are described.
Journal of Physics: Conference Series | 2008
S. Kolos; A. Corso-Radu; Haleh Khani Hadavand; M. Hauschild; R Kehoe
Data Quality Monitoring (DQM) is an important and integral part of the data taking and data reconstruction of HEP experiments. In an online environment, DQM provides the shift crew with live information beyond basic monitoring, which is used to overcome problems promptly and helps avoid taking faulty data. During offline reconstruction, DQM is used for more complex analysis of physics quantities, and its results are used to assess the quality of the reconstructed data. The Data Quality Monitoring Framework (DQMF) provided for the ATLAS experiment performs analysis of monitoring data through user-defined algorithms and relays the summary of the analysis results to a configurable Data Quality output stream. From this stream the results can be stored in a database, displayed on a GUI, or used to trigger other actions relevant to the operational environment, e.g. sending alarms or stopping the run. This paper describes the implementation of the DQMF and discusses experience with its usage and performance during ATLAS commissioning.
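As an illustration of the user-defined algorithm concept, the following C++ sketch shows the shape a minimal DQ algorithm could take: an object is analyzed and a summary flag is returned for the output stream. The class names, result enumeration and histogram type are all invented for this example; the actual DQMF interface differs.

```cpp
#include <cmath>

// Hypothetical sketch in the spirit of DQMF; all names are invented here.
enum class DQResult { Good, Warning, Bad };

// Minimal stand-in for a monitored histogram and its reference value.
struct Histogram { double mean; double reference; };

class DQAlgorithm {
public:
    virtual ~DQAlgorithm() = default;
    // Analyse one monitored object and return a summary flag that the
    // framework would relay to the Data Quality output stream.
    virtual DQResult execute(const Histogram& h) const = 0;
};

// Example algorithm: flag a histogram whose mean drifts from a reference.
class MeanWithinLimits : public DQAlgorithm {
public:
    MeanWithinLimits(double warn, double error) : warn_(warn), error_(error) {}
    DQResult execute(const Histogram& h) const override {
        const double delta = std::fabs(h.mean - h.reference);
        if (delta > error_) return DQResult::Bad;
        if (delta > warn_)  return DQResult::Warning;
        return DQResult::Good;
    }
private:
    double warn_;
    double error_;
};
```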
ieee-npss real-time conference | 2005
S. Gadomski; M. Abolins; I. Alexandrov; A. Amorim; C. Padilla-Aranda; E. Badescu; N. Barros; H. P. Beck; R. E. Blair; D. Burckhart-Chromek; M. Caprini; M. Ciobotaru; P. Conde-Muíño; A. Corso-Radu; M. Diaz-Gomez; R. Dobinson; M. Dobson; Roberto Ferrari; M. L. Ferrer; David Francis; S. Gameiro; B. Gorini; M. Gruwe; S. Haas; C. Haeberli; R. Hauser; R. E. Hughes-Jones; M. Joos; A. Kazarov; D. Klose
The ATLAS collaboration at CERN operated a combined test beam (CTB) from May until November 2004. The prototype of the ATLAS data acquisition (DAQ) system was used to integrate the other subsystems into a common CTB setup. Data were collected synchronously from all the ATLAS detectors, which represented nine different detector technologies. The electronics and software of the first-level trigger were used to trigger the setup. Event selection algorithms of the high-level trigger were integrated with the system and were tested with real detector data. The possibility of operating a remote event filter farm synchronized with the ATLAS TDAQ was also tested. Event data, as well as detector conditions data, were made available for offline analysis.
ieee nuclear science symposium | 2001
I. Alexandrov; A. Amorim; E. Badescu; D. Burckhart-Chromek; M. Caprini; M. Dobson; P.-Y. Duval; R. Hart; R. W. L. Jones; A. Kazarov; S. Kolos; V. M. Kotov; D. Liko; Levi Lúcio; L. Mapelli; M. Mineev; L. Moneta; M. Nassiakou; Luis G. Pedro; A. Ribeiro; V. Roumiantsev; Y. F. Ryabov; D. Schweiger; I. Soloviev; H. Wolters
The Process Management component of the online software of the future ATLAS experiment data acquisition system is presented. The purpose of the Process Manager is to perform basic job control of the software components of the data acquisition system. It is capable of starting, stopping and monitoring the status of those components on the data acquisition processors, independent of the underlying operating system. Its architecture is designed on the basis of a client-server model using CORBA-based communication. The server part relies on C++ software agent objects acting as an interface between the local operating system and client applications. Major design challenges for the software agents included achieving the maximum possible degree of autonomy and creating processes that are aware of dynamic conditions in their environment and able to determine corresponding actions. Issues such as the performance of the agents in terms of the time needed for process creation and destruction, the scalability of the system with respect to the final ATLAS configuration, and minimizing the use of hardware resources were also of critical importance. Besides the details given on the architecture and the implementation, we also present scalability and performance test results for the Process Manager system.
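A hypothetical C++ sketch of the kind of job-control interface such an agent might expose is shown below. The method names and types are assumptions made for illustration, not the actual ATLAS Process Manager API; in the real system this interface would be defined in IDL and invoked remotely through CORBA.

```cpp
#include <string>
#include <vector>

// Invented sketch of a per-node process-control agent; the real ATLAS
// interface and its CORBA IDL definition are not given in the abstract.
enum class ProcessState { Created, Running, Exited, Failed };

class ProcessAgent {
public:
    virtual ~ProcessAgent() = default;
    // Launch an executable on the node this agent controls and return an
    // opaque handle that is independent of the underlying operating system.
    virtual int start(const std::string& executable,
                      const std::vector<std::string>& args) = 0;
    // Terminate a previously started process identified by its handle.
    virtual void stop(int handle) = 0;
    // Report the current state so clients can react to failures promptly.
    virtual ProcessState status(int handle) const = 0;
};
```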
Journal of Physics: Conference Series | 2012
S. Kolos; G. Boutsioukis; R. Hauser
The ATLAS[1] experiment is operated by a highly distributed computing system which constantly produces a large amount of status information, used both to monitor the operational conditions of the experiment and to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on an online computing farm of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. To handle such non-physics data, the Information Service (IS) facility has been developed in the scope of the ATLAS Trigger and Data Acquisition (TDAQ)[2] project. The IS provides a high-performance, scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session, the IS handles about a hundred gigabytes of information, which is constantly updated with update intervals varying from a second to a few tens of seconds. The IS provides access to any information item on request and distributes notifications to all information subscribers; in the latter case, subscribers receive information within a few milliseconds of its update. The IS can handle arbitrary types of information, including histograms produced by the HLT applications, and provides C++, Java and Python APIs. The Information Service is a unique source of information for the majority of the online monitoring, analysis and GUI applications used to control and monitor the ATLAS experiment. The IS also provides streaming functionality allowing efficient replication of all or part of the managed information. This functionality is used to duplicate a subset of the ATLAS monitoring data to the CERN public network with a latency of a few milliseconds, allowing efficient real-time monitoring of the data taking from outside the protected ATLAS network. Each information item in the IS has an associated URL which can be used to access that item online via the HTTP protocol. This functionality is used by many online monitoring applications that run in a web browser, providing real-time monitoring information about the ATLAS experiment around the globe. This paper describes the design and implementation of the IS and presents performance results obtained in the ATLAS operational environment.
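To make the publish/subscribe model concrete, the self-contained C++ sketch below mimics the IS interaction pattern within a single process. All class, method and item names are invented for illustration; the real IS distributes these calls across the network through its middleware and supports arbitrary information types rather than a single value field.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Invented in-process sketch of the publish/subscribe pattern the IS
// provides across the network; names and types are illustrative only.
struct InfoItem {
    std::string name;   // e.g. "HLT.Node17.RejectedEvents" (made-up item)
    double value;
};

class InfoDictionary {
public:
    using Callback = std::function<void(const InfoItem&)>;

    // Publish or update a named item and notify its subscribers, mimicking
    // the few-millisecond update propagation described in the paper.
    void update(const InfoItem& item) {
        items_[item.name] = item;
        for (const auto& cb : subscribers_[item.name]) cb(item);
    }

    // Register a callback invoked on every update of the named item.
    void subscribe(const std::string& name, Callback cb) {
        subscribers_[name].push_back(std::move(cb));
    }

private:
    std::map<std::string, InfoItem> items_;
    std::map<std::string, std::vector<Callback>> subscribers_;
};
```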
Journal of Physics: Conference Series | 2012
Alexandru Sicoe; Giovanna Lehmann Miotto; L. Magnoni; S. Kolos; Igor Soloviev
This paper describes P-BEAST, a highly scalable, highly available and durable system for archiving monitoring information of the trigger and data acquisition (TDAQ) system of the ATLAS experiment at CERN. The TDAQ system currently consists of 20,000 applications running on 2,400 interconnected computers, and it is foreseen to grow further in the near future. P-BEAST stores considerable amounts of monitoring information which would otherwise be lost. Making this data accessible facilitates long-term analysis and faster debugging. The novelty of this research consists in using a modern key-value storage technology (Cassandra) to satisfy the massive time-series data rates and the flexibility and scalability requirements entailed by the project. The loose schema allows the stored data to evolve seamlessly with the information flowing within the Information Service. An architectural overview of P-BEAST is presented alongside a discussion of the technologies considered as candidates for storing the data, and the arguments which ultimately led to choosing Cassandra are explained. Measurements taken during operation in the production environment illustrate the data volume absorbed by the system and the techniques for reducing the Cassandra storage space overhead.
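For illustration, the C++ sketch below shows one common way to lay out time-series data in a key-value store such as Cassandra: bucketing the samples of one attribute into bounded rows keyed by attribute name and day, with timestamps as column keys so that interval queries become ordered range scans. This layout is an assumption made for the example; the abstract does not specify the actual P-BEAST schema.

```cpp
#include <cstdint>
#include <string>

// Hypothetical time-series row-key layout for a key-value store like
// Cassandra; the real P-BEAST schema is not described in the abstract.
struct SeriesKey {
    std::string attribute;   // e.g. "TDAQ.HLT.Node17.cpuLoad" (made-up name)
    uint32_t    dayBucket;   // days since epoch: bounds per-row growth
};

// Compose the partition key so that one row holds at most a day of samples,
// keeping rows bounded while consecutive samples stay physically adjacent.
inline std::string makeRowKey(const SeriesKey& k) {
    return k.attribute + ":" + std::to_string(k.dayBucket);
}

// Within a row, each sample would be stored under its timestamp as the
// column key, giving ordered range scans for time-interval queries.
struct Sample {
    uint64_t timestampMs;    // column key: milliseconds since epoch
    double   value;          // the archived monitoring value
};
```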