I. Soloviev
Petersburg Nuclear Physics Institute
Publications
Featured research published by I. Soloviev.
IEEE Transactions on Nuclear Science | 1998
R. W. L. Jones; L. Mapelli; Yu. Ryabov; I. Soloviev
The OKS (Object Kernel Support) is a library that supports a simple, active, persistent in-memory object manager. It is suitable for applications which need to create persistent structured information with fast access but do not require full database functionality. It can be used as the framework for configuration databases and real-time object managers for Data Acquisition and Detector Control Systems in such areas as setup, diagnostics and general configuration description. OKS is based on an object model that supports objects, classes, associations, methods, inheritance, polymorphism, object identifiers, composite objects, integrity constraints, schema evolution, data migration and active notification. OKS stores the class definitions and their instances in portable ASCII files. It provides query facilities, including support for indices. OKS has a C++ API (Application Program Interface) and includes Motif-based GUI applications to design class schemata and to manipulate objects. OKS has been developed on top of the Rogue Wave Tools.h++ C++ class library.
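To make the object-manager idea concrete, here is a minimal sketch of an active in-memory store with a simple equality query, in the spirit of OKS. All names here (`ObjectStore`, `put`, `query_eq`) are illustrative inventions, not the real OKS C++ API, and the real library adds persistence, schema support and indices on top of this pattern.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// An object is identified by a unique id and carries named attributes.
struct Object {
    std::string id;                              // unique object identifier
    std::map<std::string, std::string> attrs;    // attribute name -> value
};

class ObjectStore {
public:
    // Create or replace an object under its identifier.
    void put(const Object& obj) { objects_[obj.id] = obj; }

    // Simple query facility: return the ids of objects whose attribute
    // `name` equals `value` (a real manager would use indices here).
    std::vector<std::string> query_eq(const std::string& name,
                                      const std::string& value) const {
        std::vector<std::string> result;
        for (const auto& [id, obj] : objects_) {
            auto it = obj.attrs.find(name);
            if (it != obj.attrs.end() && it->second == value)
                result.push_back(id);
        }
        return result;
    }

    std::size_t size() const { return objects_.size(); }

private:
    std::map<std::string, Object> objects_;
};
```

A configuration database built this way holds the hardware and software description in memory for fast access, while persistence is handled separately (in OKS, via portable ASCII files).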
IEEE Transactions on Nuclear Science | 2004
I. Alexandrov; A. Amorim; E. Badescu; M. Barczyk; D. Burckhart-Chromek; M. Caprini; J.D.S. Conceicao; J. Flammer; M. Dobson; R. Hart; R. W. L. Jones; A. Kazarov; S. Kolos; V. M. Kotov; D. Klose; D. Liko; J. G. R. Lima; Levi Lúcio; L. Mapelli; M. Mineev; Luis G. Pedro; Y. F. Ryabov; I. Soloviev; H. Wolters
The Online Software is the global system software of the ATLAS data acquisition (DAQ) system, responsible for the configuration, control and information sharing of the ATLAS DAQ system. A test beam facility offers the ATLAS detectors the possibility to study important performance aspects as well as to proceed on the way towards the final ATLAS DAQ system. Last year, three ATLAS subdetectors, separately and combined, successfully used the Online Software to control their data taking. In this paper, we describe the different components of the Online Software together with their usage at the ATLAS test beam.
Computer Physics Communications | 1998
G. Ambrosini; D. Burckhart; M. Caprini; M. Cobal; P.-Y. Duval; F. Etienne; Roberto Ferrari; David Francis; R. W. L. Jones; M. Joos; S. Kolos; A. Lacourt; A. Le Van Suu; A. Mailov; L. Mapelli; M. Michelotto; G. Mornacchi; R. Nacasch; M. Niculescu; K. Nurdan; C. Ottavi; A. Patel; Frédéric Pennerath; J. Petersen; G. Polesello; D. Prigent; Z. Qian; J. Rochez; F. Scuri; M. Skiadelli
A project has been approved by the ATLAS Collaboration for the design and implementation of a Data Acquisition and Event Filter prototype, based on the functional architecture described in the ATLAS Technical Proposal. The prototype consists of a full “vertical” slice of the ATLAS Data Acquisition and Event Filter architecture, including all the hardware and software elements of the data flow, its control and monitoring, as well as all the elements of a complete on-line system. This paper outlines the project, its goals, structure, schedule and current status, and describes details of the system architecture and its components.
IEEE Transactions on Nuclear Science | 1998
A. Amorim; E. Badescu; D. Burckhart; M. Caprini; L. Cohen; P.-Y. Duval; B. Jones; S. Kolos; L. Mapelli; M. Michelotto; R. Nacasch; Z. Qian; A. Radu; Y. F. Ryabov; I. Soloviev; S. Wheeler; T. Wildish; H. Wolters
This paper presents the experience of using a CORBA-based communication package for inter-component communication of control and status information in the ATLAS prototype DAQ project. A public-domain package called Inter-Language Unification (ILU) has been used to implement CORBA-based communication between DAQ components in a local area network (LAN) of heterogeneous computers. The selection of the CORBA standard and the ILU implementation are judged against the requirements of the DAQ system. An overview of ILU is included. Several components of the DAQ system have been designed and implemented using CORBA/ILU, for which the development procedure and environment are described.
Archive | 2004
D. Liko; I. Soloviev; R. W. L. Jones; S. Kolos; J. Flammer; Yu. Ryabov; A. Kazarov; M. Mineev; L. Mapelli; I. Alexandrov; S. Korobov; D. Burckhart-Chromek; Kotov; M. Caprini; E. Badescu; N. Fiuza de Barros; A. Amorim; D. Klose; Luis G. Pedro; M. Dobson
The unprecedented size and complexity of the ATLAS TDAQ system requires a comprehensive and flexible control system. Its role ranges from the so-called run control, e.g. starting and stopping the data taking, to error handling and fault tolerance. It also includes initialization and verification of the overall system. Following the traditional approach, a hierarchical system of customizable controllers has been proposed. In the final system all functionality will therefore be available in a distributed manner, with the possibility of local customization. After a technology survey, the open-source expert system CLIPS was chosen as the basis for the implementation of the supervision and verification system. The CLIPS interpreter has been extended to provide a general control framework. Other ATLAS Online Software components have been integrated as plug-ins and provide the mechanisms for configuration and communication. Several components have been implemented sharing this technology. The dynamic behavior of each individual component is fully described by its rules, while the framework is based on a common implementation. During this year these components have been the subject of scalability tests up to the full system size. The encouraging results presented here validate the technology choice.
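The rule-driven style of supervision described above can be sketched with a naive forward-chaining loop over a fact base. This is only an illustration of the concept in C++; the actual controllers express their behaviour as CLIPS rules interpreted by the extended CLIPS framework, and the fact and rule names below are invented.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Facts are simple name/value pairs, e.g. {"state", "idle"}.
using Facts = std::map<std::string, std::string>;

struct Rule {
    std::function<bool(const Facts&)> condition;  // when the rule fires
    std::function<void(Facts&)> action;           // what it changes/asserts
};

// Naive forward chaining: keep firing rules until no rule changes
// the fact base any more (a real engine uses an agenda, e.g. Rete).
void run_rules(Facts& facts, const std::vector<Rule>& rules) {
    bool changed = true;
    while (changed) {
        changed = false;
        for (const auto& r : rules) {
            if (r.condition(facts)) {
                Facts before = facts;
                r.action(facts);
                if (facts != before) changed = true;
            }
        }
    }
}
```

With rules such as “if command is start and state is idle, configure” and “if configured, run”, a single call to `run_rules` drives the fact base from the idle state to the running state, which mirrors how a rule set can encode run-control transitions declaratively.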
ieee-npss real-time conference | 2005
S. Gadomski; M. Abolins; I. Alexandrov; A. Amorim; C. Padilla-Aranda; E. Badescu; N. Barros; H. P. Beck; R. E. Blair; D. Burckhart-Chromek; M. Caprini; M. Ciobotaru; P. Conde-Muíño; A. Corso-Radu; M. Diaz-Gomez; R. Dobinson; M. Dobson; Roberto Ferrari; M. L. Ferrer; David Francis; S. Gameiro; B. Gorini; M. Gruwe; S. Haas; C. Haeberli; R. Hauser; R. E. Hughes-Jones; M. Joos; A. Kazarov; D. Klose
The ATLAS collaboration at CERN operated a combined test beam (CTB) from May until November 2004. The prototype of the ATLAS data acquisition (DAQ) system was used to integrate the other subsystems into a common CTB setup. Data were collected synchronously from all the ATLAS detectors, which represented nine different detector technologies. The electronics and software of the first-level trigger were used to trigger the setup. Event selection algorithms of the high-level trigger were integrated with the system and were tested with real detector data. The possibility of operating a remote event filter farm synchronized with the ATLAS TDAQ was also tested. Event data, as well as detector conditions data, were made available for offline analysis.
Computer Physics Communications | 1998
D. Burckhart; R. W. L. Jones; L. Mapelli; M. Michelotto; A. Patel; M. Skiadelli; I. Soloviev; P.-Y. Duval; A. Le Van Suu; R. Nacasch; Z. Qian; F. Touchard; M. Caprini; S. Kolos; K. Nurdan; S. Wheeler
The ATLAS collaboration has defined a set of user requirements for the back-end software subsystem within the context of the data acquisition and event filter prototype “−1” project. Based on these requirements, a number of evaluations have been performed on candidate technologies and techniques in the areas of configuration data storage (Objectivity ODBMS; Rogue Wave Tools.h++ for C++ object persistence), inter-process communication (CORBA; MPI), dynamic object behaviour (Harel StateChart generator), graphical user interfaces (cross-platform GUI builder; Java AWT) and software integration (ACE operating-system interface). This paper describes the important requirements which led to the selection of these technologies, the results obtained from the evaluations, and how we intend to apply them to the design and implementation phases of the project.
ieee-npss real-time conference | 2014
I. Soloviev; Alexandru Sicoe
The Trigger and Data Acquisition (TDAQ) and detector systems of the ATLAS experiment deploy more than 3000 computers, running more than 15000 concurrent processes, to perform the selection, recording and monitoring of proton collision data in ATLAS. Most of these processes produce and share operational monitoring data used for inter-process communication and analysis of the systems. Only a few of these data are archived by dedicated applications into conditions and histogram databases; the rest remained transient and were lost at the end of a data-taking session. To save these data for later offline analysis of data-taking quality and to help experts investigate the behavior of the system, the first prototype of a new Persistent Back-End for the ATLAS Information System of TDAQ (P-BEAST) was developed and deployed in the second half of 2012. The modern, distributed, Java-based Cassandra database was used as the storage technology, with CERN EOS for long-term storage. This paper provides details of the architecture and the experience with this system during the last months of the first LHC Run. It explains why that very promising prototype failed and how it was reimplemented using the Google C++ protobuf technology. It finally presents the new architecture and details of the service that will be used during the second LHC Run.
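Archiving operational monitoring data at this scale usually means turning a stream of samples into compact per-attribute time series. The sketch below shows one plausible compaction step, storing a new point only when a value actually changes; the scheme and all names (`SeriesArchive`, `record`) are assumptions made for illustration, not a description of P-BEAST internals.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// One sample of an operational monitoring attribute.
struct Point {
    std::int64_t ts;   // timestamp, e.g. microseconds since epoch
    double value;
};

class SeriesArchive {
public:
    // Record a sample for the named attribute; identical consecutive
    // values are collapsed so that long constant runs cost one point.
    void record(const std::string& attr, std::int64_t ts, double value) {
        auto& series = data_[attr];
        if (!series.empty() && series.back().value == value)
            return;  // unchanged: keep only the first point of the run
        series.push_back({ts, value});
    }

    const std::vector<Point>& series(const std::string& attr) const {
        return data_.at(attr);
    }

private:
    std::map<std::string, std::vector<Point>> data_;
};
```

In a real service the in-memory buffer would be flushed periodically to a storage back-end (Cassandra in the first prototype, protobuf-encoded files on EOS in the reimplementation).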
ieee-npss real-time conference | 2007
J. Almeida; M. Dobson; A. Kazarov; Giovanna Lehmann Miotto; J. Sloper; I. Soloviev; Ret Torres
This paper describes the challenging requirements on the configuration service for the ATLAS experiment at CERN. It presents the status of the implementation and testing one year before the start of data taking, providing details of: 1. the capabilities of the underlying OKS object manager to store and archive configuration descriptions, and its user and programming interfaces; 2. the organization of configuration descriptions for different types of data-taking runs and combinations of participating sub-detectors; 3. the scalable architecture supporting simultaneous access to the service by thousands of processes during the online configuration stage of ATLAS; 4. the experience with the usage of the configuration service during large-scale tests, test beam, commissioning and technical runs. The paper also presents the pros and cons of the chosen object-oriented implementation compared with solutions based on purely relational database technologies, and explains why after several years of usage we continue with our approach.
ieee-npss real-time conference | 2014
A. Kazarov; M. Caprini; S. Kolos; Giovanna Lehmann Miotto; I. Soloviev
The ATLAS Trigger and Data Acquisition (TDAQ) system is a large distributed computing system composed of several thousand interconnected computers and tens of thousands of applications. During a run, TDAQ applications produce large volumes of control and information messages at variable rates, addressed to TDAQ operators or to other applications. Reliable, fast and accurate delivery of the messages is important for the functioning of the whole TDAQ system. The Message Transport Service (MTS) provides facilities for the reliable transport, filtering and routing of the messages, based on the publish-subscribe-notify communication pattern with content-based message filtering. During the ongoing LHC shutdown, MTS was re-implemented, taking into account important requirements such as reliability, scalability and performance, handling of the slow-subscriber case, and simplicity of design and implementation. MTS uses CORBA middleware, a common layer for the TDAQ infrastructure, and provides sending/subscribing APIs in the Java and C++ programming languages. The paper presents the design and implementation details of MTS, as well as the results of performance and scalability tests executed on a computing farm with a number of workers and working conditions that reproduced a realistic TDAQ environment during ATLAS operations.
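The publish-subscribe-notify pattern with content-based filtering can be sketched in a few lines: each subscriber registers a predicate over message content, and the broker delivers a published message only to the subscribers whose predicate accepts it. The names below (`Message`, `Broker`, `subscribe`) are illustrative; the real MTS distributes this over CORBA with C++ and Java APIs rather than in-process callbacks.

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

// A TDAQ-style message with a few content fields to filter on.
struct Message {
    std::string severity;   // e.g. "INFO", "ERROR"
    std::string origin;     // producing application
    std::string text;
};

class Broker {
public:
    using Filter = std::function<bool(const Message&)>;
    using Callback = std::function<void(const Message&)>;

    // A subscriber registers a content-based filter together with
    // the callback to be notified for matching messages.
    void subscribe(Filter filter, Callback cb) {
        subs_.push_back({std::move(filter), std::move(cb)});
    }

    // Publishing routes the message only to subscribers whose
    // filter accepts its content.
    void publish(const Message& msg) {
        for (auto& [filter, cb] : subs_)
            if (filter(msg)) cb(msg);
    }

private:
    std::vector<std::pair<Filter, Callback>> subs_;
};
```

For example, an operator console might subscribe with `severity == "ERROR"` and never see the routine informational traffic, which is the essence of content-based routing.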