
Publication


Featured research published by R. Hart.


IEEE Transactions on Nuclear Science | 2004

Online software for the ATLAS test beam data acquisition system

I. Alexandrov; A. Amorim; E. Badescu; M. Barczyk; D. Burckhart-Chromek; M. Caprini; J.D.S. Conceicao; J. Flammer; M. Dobson; R. Hart; R. W. L. Jones; A. Kazarov; S. Kolos; V. M. Kotov; D. Klose; D. Liko; J. G. R. Lima; Levi Lúcio; L. Mapelli; M. Mineev; Luis G. Pedro; Y. F. Ryabov; I. Soloviev; H. Wolters

The Online Software is the global system software of the ATLAS data acquisition (DAQ) system, responsible for the configuration, control and information sharing of the ATLAS DAQ system. A test beam facility offers the ATLAS detectors the possibility to study important performance aspects as well as to proceed on the way to the final ATLAS DAQ system. Last year, three ATLAS subdetectors, separately and combined, successfully used the Online Software to control their data taking. In this paper, we describe the different components of the Online Software together with their usage at the ATLAS test beam.
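The control role described in the abstract can be illustrated by a minimal run-control state machine that only accepts defined transitions. This is a hypothetical sketch; the state and command names below are assumptions for illustration, not the actual Online Software states:

```python
# Hypothetical sketch of a run-control state machine: components move
# through a fixed sequence of states via defined transitions, and any
# other command is rejected. State/command names are illustrative only.
TRANSITIONS = {
    ("initial", "configure"): "configured",
    ("configured", "start"): "running",
    ("running", "stop"): "configured",
    ("configured", "unconfigure"): "initial",
}

class RunController:
    def __init__(self):
        self.state = "initial"

    def command(self, cmd):
        # Look up the transition; unknown (state, command) pairs are errors.
        try:
            self.state = TRANSITIONS[(self.state, cmd)]
        except KeyError:
            raise ValueError(f"'{cmd}' not allowed in state '{self.state}'")
        return self.state

rc = RunController()
rc.command("configure")
print(rc.command("start"))  # running
```

Rejecting undefined transitions centrally is what lets many independently written components be driven through data taking in lockstep.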


IEEE Nuclear Science Symposium | 2001

Process management inside ATLAS DAQ

I. Alexandrov; A. Amorim; E. Badescu; D. Burckhart-Chromek; M. Caprini; M. Dobson; P.-Y. Duval; R. Hart; R. W. L. Jones; A. Kazarov; S. Kolos; V. M. Kotov; D. Liko; Levi Lúcio; L. Mapelli; M. Mineev; L. Moneta; M. Nassiakou; Luis G. Pedro; A. Ribeiro; V. Roumiantsev; Y. F. Ryabov; D. Schweiger; I. Soloviev; H. Wolters

The Process Management component of the online software of the future ATLAS experiment data acquisition system is presented. The purpose of the Process Manager is to perform basic job control of the software components of the data acquisition system. It is capable of starting, stopping and monitoring the status of those components on the data acquisition processors, independent of the underlying operating system. Its architecture is designed on the basis of a client-server model using CORBA-based communication. The server part relies on C++ software agent objects acting as an interface between the local operating system and client applications. Among the major design challenges for the software agents were achieving the maximum possible degree of autonomy and creating processes aware of dynamic conditions in their environment, with the ability to determine corresponding actions. Issues such as the performance of the agents in terms of the time needed for process creation and destruction, the scalability of the system with respect to the final ATLAS configuration, and minimizing the use of hardware resources were also of critical importance. Besides details of the architecture and the implementation, we also present scalability and performance test results for the Process Manager system.
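The job control the abstract describes (start, stop, monitor status) can be sketched in a few lines. This is a hypothetical Python sketch using `subprocess`, not the CORBA-based C++ agent implementation the paper presents, and all names are illustrative:

```python
import subprocess
import sys

class ProcessAgent:
    """Hypothetical sketch of a process-manager agent: basic job control
    (start, stop, status) for software components, hiding the underlying
    OS details behind a small uniform interface."""

    def __init__(self):
        self._procs = {}  # handle -> subprocess.Popen

    def start(self, handle, argv):
        # Launch the component and remember it under a symbolic handle.
        self._procs[handle] = subprocess.Popen(argv)

    def status(self, handle):
        # 'running' while the process is alive, otherwise its exit code.
        proc = self._procs[handle]
        return "running" if proc.poll() is None else proc.returncode

    def stop(self, handle):
        # Terminate the component and reap it.
        proc = self._procs.pop(handle)
        proc.terminate()
        proc.wait()

agent = ProcessAgent()
agent.start("worker", [sys.executable, "-c", "import time; time.sleep(60)"])
print(agent.status("worker"))  # running
agent.stop("worker")
```

In the real system the agents additionally had to act autonomously on dynamic conditions; this sketch only shows the basic start/status/stop contract.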


Journal of Physics: Conference Series | 2012

The ATLAS Detector Control System

K. Lantzsch; S. Arfaoui; S. Franz; O. Gutzwiller; S. Schlenker; C A Tsarouchas; B. Mindur; J. Hartert; S. Zimmermann; A. A. Talyshev; D. Oliveira Damazio; A. Poblaguev; H. M. Braun; D. Hirschbuehl; S. Kersten; T. A. Martin; P. D. Thompson; D. Caforio; C. Sbarra; D. Hoffmann; S. Nemecek; A. Robichaud-Veronneau; B. M. Wynne; E. Banas; Z. Hajduk; J. Olszowska; E. Stanecka; M. Bindi; A. Polini; M. Deliyergiyev

The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC) at CERN, constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different subdetectors as well as the common experimental infrastructure are controlled and monitored by the Detector Control System (DCS) using a highly distributed system of 140 server machines running the industrial SCADA product PVSS. Higher-level control system layers allow for automatic control procedures and efficient error recognition and handling, manage the communication with external systems such as the LHC controls, and provide a synchronization mechanism with the ATLAS data acquisition system. Different databases are used to store the online parameters of the experiment, replicate a subset used for physics reconstruction, and store the configuration parameters of the systems. This contribution describes the computing architecture and software tools used to handle this complex and highly interconnected control system.
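One way the "efficient error recognition" mentioned above can work in a highly distributed control tree is state summarizing: each node reports the worst state among its children, so problems surface at the top automatically. A minimal sketch, assuming a simple worst-of-children rule; the node names and state values are illustrative, not the actual DCS model:

```python
# Hypothetical sketch of hierarchical state propagation in a control tree:
# leaves report their own state, branches report the worst child state.
SEVERITY = {"OK": 0, "WARNING": 1, "ERROR": 2}

class ControlNode:
    def __init__(self, name, state="OK"):
        self.name = name
        self._state = state
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    @property
    def state(self):
        # Leaf nodes report their own state; branch nodes summarize children.
        if not self.children:
            return self._state
        return max((c.state for c in self.children), key=SEVERITY.__getitem__)

atlas = ControlNode("ATLAS")
pixel = atlas.add(ControlNode("Pixel"))
pixel.add(ControlNode("HV", "OK"))
pixel.add(ControlNode("Cooling", "WARNING"))
print(atlas.state)  # WARNING
```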


Very Large Data Bases | 2002

OBK: an online high energy physics' meta-data repository

I. Alexandrov; A. Amorim; E. Badescu; M. Barczyk; D. Burckhart-Chromek; M. Caprini; M. Dobson; J. Flammer; R. Hart; R. Jones; A. Kazarov; S. Kolos; V. Kotov; D. Liko; Levi Lúcio

ATLAS will be one of the four detectors for the LHC (Large Hadron Collider) particle accelerator currently being built at CERN, Geneva. The project is expected to start production in 2006 and during its lifetime (15-20 years) to generate roughly one petabyte per year of particle physics data. This vast amount of information will require several meta-data repositories to ease the manipulation and understanding of the physics data by the final users (physicists doing analysis). Meta-data repositories and tools at ATLAS may address such problems as the logical organization of the physics data according to data-taking sessions, errors and faults during data gathering, data quality, and tertiary-storage meta-information.

The OBK (Online Book-Keeper) is a component of the ATLAS Online Software - the system which provides configuration, control and monitoring services to the DAQ (Data AcQuisition) system. In this paper we explain the role of the OBK as one of the main collectors and managers of meta-data produced online, how that data is stored, and the interfaces that are provided to access it - merging the physics data with the collected meta-data will play an essential role in the future analysis and interpretation of the physics events observed at ATLAS. We also provide a historical background to the OBK by analysing the several prototypes implemented in the context of our software development process and the results and experience obtained with the various DBMS technologies used.
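The bookkeeping role described above - recording per-run metadata online so that analysts can later select usable data - can be sketched with an in-memory SQLite store. This is a hypothetical sketch; the schema, helper names, and example data are assumptions, not the OBK's actual design:

```python
import sqlite3

# Hypothetical sketch of an online bookkeeper: per-run metadata (start time,
# quality flag) and error messages are recorded during data taking, so that
# offline analysis can later select runs by quality.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE runs (run INTEGER PRIMARY KEY, started TEXT, quality TEXT)")
conn.execute("CREATE TABLE messages (run INTEGER, severity TEXT, text TEXT)")

def record_run(run, started, quality):
    conn.execute("INSERT INTO runs VALUES (?, ?, ?)", (run, started, quality))

def log_message(run, severity, text):
    conn.execute("INSERT INTO messages VALUES (?, ?, ?)", (run, severity, text))

def good_runs():
    # Offline analysis asks the repository which runs are usable.
    return [r for (r,) in conn.execute("SELECT run FROM runs WHERE quality = 'good'")]

record_run(1001, "2002-05-01T10:00", "good")
record_run(1002, "2002-05-01T12:30", "bad")
log_message(1002, "ERROR", "readout fault during run")
print(good_runs())  # [1001]
```

The point of such a repository is exactly the merge the abstract mentions: the physics data stream carries only event payloads, while queries like `good_runs()` supply the context needed to interpret them.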


IEEE NPSS Real Time Conference | 1999

Performance and scalability of the back-end sub-system in the ATLAS DAQ/EF prototype

I. Alexandrov; A. Amorim; E. Badescu; D. Burckhart; M. Caprini; L. Cohen; P.-Y. Duval; R. Hart; R. W. L. Jones; A. Kazarov; S. Kolos; V. M. Kotov; D. Laugier; L. Mapelli; L. Moneta; Z. Qian; A. Radu; C.A. Ribeiro; V. Roumiantsev; Y. F. Ryabov; D. Schweiger; I. Soloviev

The DAQ group of the future ATLAS experiment has developed a prototype system based on the Trigger/DAQ architecture described in the ATLAS Technical Proposal, to support studies of the full system functionality and architecture as well as of available hardware and software technologies. One sub-system of this prototype is the back-end, which encompasses the software needed to configure, control and monitor the DAQ, but excludes the processing and transportation of physics data. The back-end consists of a number of components, including run control, configuration databases and a message reporting system. The software has been developed using standard, external software technologies such as OO databases and CORBA. It has been ported to several C++ compilers and operating systems including Solaris, Linux, WNT and LynxOS. This paper gives an overview of the back-end software, its performance, scalability and current status.
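Of the back-end components listed above, the message reporting system can be sketched as a small publish/subscribe dispatcher: components report categorized messages, and subscribers (for example, operator displays) receive the categories they registered for. A hypothetical sketch with illustrative names; the real back-end used CORBA for transport:

```python
from collections import defaultdict

class MessageReporter:
    """Hypothetical sketch of a message reporting system: components publish
    messages by severity, and only matching subscribers are notified."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, severity, callback):
        # Register a callback for one severity category.
        self._subscribers[severity].append(callback)

    def report(self, severity, source, text):
        # Deliver the formatted message to every subscriber of that severity.
        for callback in self._subscribers[severity]:
            callback(f"{severity} [{source}] {text}")

mrs = MessageReporter()
received = []
mrs.subscribe("ERROR", received.append)
mrs.report("ERROR", "RunControl", "configuration database unreachable")
mrs.report("INFO", "RunControl", "run started")  # no INFO subscriber
print(received)  # ['ERROR [RunControl] configuration database unreachable']
```

Decoupling reporters from receivers this way is what lets the same message stream feed operator consoles, log archives, and automated recovery tools without the reporting components knowing about any of them.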


arXiv: High Energy Physics - Experiment | 2003

Online Monitoring software framework in the ATLAS experiment

S. Kolos; I. Alexandrov; A. Amorim; M. Barczyk; E. Badescu; D. Burckhart-Chromek; M. Caprini; J. Da Silva Conceicao; M. Dobson; J. Flammer; R. Hart; R. W. L. Jones; A. Kazarov; D. Klose; V. M. Kotov; D. Liko; J. G. R. Lima; Levi Lúcio; L. Mapelli; M. Mineev; Luis G. Pedro; Yu. Ryabov; I. Soloviev; H. Wolters


arXiv: High Energy Physics - Experiment | 2003

Verification and diagnostics framework in ATLAS trigger / DAQ

M. Barczyk; A. Kazarov; M. Mineev; D. Klose; V. M. Kotov; J. Flammer; A. Amorim; D. Liko; I. Alexandrov; S. Kolos; E. Badescu; R. Hart; J. Pedro; M. Caprini; I. Soloviev; H. Wolters; J. G. R. Lima; M. Dobson; J. Da Silva Conceicao; Levi Lúcio; D. Burckhart-Chromek; L. Mapelli; Yu. Ryabov; R. W. L. Jones


International Conference on Computing in High Energy Physics and Nuclear Physics (CHEP 2000) | 2000

Impact of software review and inspection

I. Aleksandrov; V. Amaral; A. Amorim; E. Badescu; D. Burckhart; M. Caprini; L. Cohen; P.Y. Duval; R. Hart; R. W. L. Jones; A. Kazarov; S. Kolos; V. M. Kotov; D. Laugier; L. Mapelli; L. Moneta; Z. Qian; C. Ribeiro; V. Rumyantsev; Yu. Ryabov; D. Schweiger; I. Solovev


arXiv: Databases | 2003

An on-line Integrated Bookkeeping: electronic run log book and Meta-Data Repository for ATLAS

M. Barczyk; D. Burckhart-Chromek; M. Caprini; J. Da Silva Conceicao; M. Dobson; J. Flammer; R. Jones; A. Kazarov; S. Kolos; D. Liko; L. Mapelli; I. Soloviev; R. Hart; A. Amorim; D. Klose; J. G. R. Lima; Levi Lúcio; Luis G. Pedro; H. Wolters; E. Badescu; I. Alexandrov; V. M. Kotov; Mikhail Mineev; Yu. Ryabov

Collaboration


Dive into R. Hart's collaborations.

Top Co-Authors

A. Kazarov
Petersburg Nuclear Physics Institute

E. Badescu
Politehnica University of Bucharest

M. Caprini
Politehnica University of Bucharest

I. Alexandrov
Joint Institute for Nuclear Research

V. M. Kotov
Joint Institute for Nuclear Research

I. Soloviev
Petersburg Nuclear Physics Institute