
Publications


Featured research published by M. D. Ciobotaru.


IEEE-NPSS Real Time Conference | 2004

The base-line DataFlow system of the ATLAS trigger and DAQ

H. Beck; M. Abolins; A. Dos Anjos; M. Barisonzi; M. Beretta; R. E. Blair; J. A. Bogaerts; H. Boterenbrood; D. Botterill; M. D. Ciobotaru; E.P. Cortezon; R. Cranfield; G. Crone; J. Dawson; R. Dobinson; Y. Ermoline; M. L. Ferrer; D. Francis; S. Gadomski; S. Gameiro; P. Golonka; B. Gorini; B. Green; M. Gruwe; S. Haas; C. Haeberli; Y. Hasegawa; R. Hauser; Christian Hinkelbein; R. E. Hughes-Jones

The base-line design and implementation of the ATLAS DAQ DataFlow system is described. The main components of the DataFlow system, their interactions, bandwidths, and rates are discussed and performance measurements on a 10% scale prototype for the final ATLAS TDAQ DataFlow system are presented. This prototype is a combination of custom-designed components and multithreaded software applications implemented in C++ and running in a Linux environment on commercially available PCs interconnected by a fully switched gigabit Ethernet network.
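
The abstract describes the software side as multithreaded C++ applications on commodity Linux PCs. Purely as an illustration of that programming style (a minimal sketch, not the ATLAS code; the Fragment type and worker pool below are invented for the example), a producer/consumer pattern in which worker threads drain a queue of incoming event fragments:

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // Hypothetical stand-in for an event fragment arriving over the network.
    struct Fragment { int event_id; };

    std::queue<Fragment> queue_;
    std::mutex mtx_;
    std::condition_variable cv_;
    bool done_ = false;

    // Worker thread: take fragments off the shared queue and "process" them.
    void worker(int id) {
        for (;;) {
            std::unique_lock<std::mutex> lk(mtx_);
            cv_.wait(lk, [] { return !queue_.empty() || done_; });
            if (queue_.empty()) return;        // shutdown: done_ set and queue drained
            Fragment f = queue_.front();
            queue_.pop();
            lk.unlock();
            std::printf("worker %d handled event %d\n", id, f.event_id);
        }
    }

    int main() {
        std::vector<std::thread> pool;
        for (int i = 0; i < 4; ++i) pool.emplace_back(worker, i);
        for (int e = 0; e < 100; ++e) {        // producer: pretend fragments arrive
            { std::lock_guard<std::mutex> lk(mtx_); queue_.push({e}); }
            cv_.notify_one();
        }
        { std::lock_guard<std::mutex> lk(mtx_); done_ = true; }
        cv_.notify_all();
        for (auto& t : pool) t.join();
    }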


IEEE-NPSS Real Time Conference | 2005

GETB, a gigabit Ethernet application platform: its use in the ATLAS TDAQ network

M. D. Ciobotaru; S. Stancu; M.J. LeVine; B. Martin

The ATLAS Trigger and Data Acquisition (TDAQ) system will employ a large high-speed Ethernet-based network, comprising several hundred nodes. Ethernet switches handle all the data transfers, their performance being essential for the success of the experiment. We designed and implemented a system, the Gigabit Ethernet Testbed (GETB), which can be used to assess the performance of network devices. This paper introduces the architecture and the implementation of the GETB platform, as well as its applications. These include the GETB Network Tester and the ATLAS Read-Out Buffer Emulator. The features of the system are described and sample results are presented.
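
As a software-only analogue of the measurement idea (a minimal sketch, not the GETB implementation; the destination address, port, and packet counts are placeholders), a UDP load generator that reports the rate offered toward a device under test; a receiver on the far side would count arrivals to derive loss and throughput:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        const char* dst_ip = "192.0.2.1";  // placeholder address of the device under test
        const int   port   = 9000;         // hypothetical receiver port
        const int   npkts  = 100000;       // number of datagrams to send
        const int   size   = 1472;         // max UDP payload in a 1500-byte Ethernet frame

        int s = socket(AF_INET, SOCK_DGRAM, 0);
        sockaddr_in dst{};
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(port);
        inet_pton(AF_INET, dst_ip, &dst.sin_addr);

        std::vector<char> buf(size, 0);
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < npkts; ++i)
            sendto(s, buf.data(), buf.size(), 0, (sockaddr*)&dst, sizeof dst);
        double dt = std::chrono::duration<double>(
                        std::chrono::steady_clock::now() - t0).count();
        std::printf("offered load: %.1f Mbit/s\n", npkts * double(size) * 8 / dt / 1e6);
        close(s);
        return 0;
    }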


IEEE Transactions on Nuclear Science | 2006

GETB, a gigabit Ethernet application platform: its use in the ATLAS TDAQ network

M. D. Ciobotaru; Stefan Stancu; M.J. LeVine; B. Martin

The ATLAS trigger and data acquisition system (TDAQ) employs a large high-speed Ethernet-based network, comprising several hundred nodes. Ethernet switches handle all the data transfers, their performance being essential for the success of the experiment. We designed and implemented a system, the GETB (Gigabit Ethernet Testbed), which can be used to assess the performance of network devices. This article introduces the architecture and the implementation of the GETB platform, as well as its applications. These include the GETB Network Tester and the ATLAS Read-Out Buffer Emulator. The features of the system are described and sample results are presented.


IEEE Transactions on Nuclear Science | 2008

Performance of the Final Event Builder for the ATLAS Experiment

H. P. Beck; M. Abolins; A. Battaglia; R. E. Blair; A. Bogaerts; M. Bosman; M. D. Ciobotaru; R. Cranfield; G. Crone; J. W. Dawson; R. Dobinson; M. Dobson; A. Dos Anjos; G. Drake; Y. Ermoline; R. Ferrari; M. L. Ferrer; D. Francis; S. Gadomski; S. Gameiro; B. Gorini; B. Green; W. Haberichter; C. Haberli; R. Hauser; Christian Hinkelbein; R. E. Hughes-Jones; M. Joos; G. Kieft; S. Klous

Event data from proton-proton collisions at the LHC will be selected by the ATLAS experiment by a three-level trigger system, which reduces the initial bunch crossing rate of 40 MHz at its first two trigger levels (LVL1+LVL2) to ~3 kHz. At this rate the Event-Builder collects the data from all Read-Out System PCs (ROSs) and provides fully assembled events to the Event-Filter (EF), which is the third level trigger, to achieve a further rate reduction to ~200 Hz for permanent storage. The Event-Builder is based on a farm of O(100) PCs, interconnected via Gigabit Ethernet to O(150) ROSs. These PCs run Linux and multi-threaded software applications implemented in C++. All the ROSs and one third of the Event-Builder PCs are already installed and commissioned. Performance measurements on this initial system show promising results, indicating that the required final data rates and bandwidth for the ATLAS Event-Builder are within reach.
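
A back-of-envelope check of the quoted rates (the ~1.5 MB mean event size used below is an assumption for illustration, not a figure from the abstract):

    #include <cstdio>

    int main() {
        const double lvl1_in = 40e6;   // initial bunch crossing rate, Hz
        const double eb_in   = 3e3;    // rate into the Event-Builder after LVL1+LVL2, Hz
        const double ef_out  = 200;    // rate to permanent storage after the EF, Hz
        const double ev_size = 1.5e6;  // assumed mean event size in bytes (not from the abstract)

        std::printf("LVL1+LVL2 rejection factor: %.0f\n", lvl1_in / eb_in);  // ~13333
        std::printf("EF rejection factor:        %.0f\n", eb_in / ef_out);   // 15
        std::printf("Event-Builder input:        %.1f GB/s (with assumed event size)\n",
                    eb_in * ev_size / 1e9);                                  // ~4.5
    }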


Archive | 2004

Performance of the ATLAS DAQ DataFlow system

G. Unel; E. Pasqualucci; M. Gruwe; H. Beck; H. Zobernig; R. Ferrari; M. Abolins; D. Prigent; K. Nakayoshi; Pérez-Réale; R. Hauser; G. Crone; A. J. Lankford; A. Kaczmarska; D. Botterill; Fred Wickens; Y. Nagasaka; L. Tremblet; R. Spiwoks; E Palencia-Cortezon; S. Gameiro; P. Golonka; R. E. Blair; G. Kieft; J. L. Schlereth; J. Petersen; J. A. Bogaerts; A. Misiejuk; Y. Hasegawa; M. Le Vine

The baseline DAQ architecture of the ATLAS experiment at the LHC is introduced, and its present implementation and the performance of the DAQ components as measured in a laboratory environment are summarized. It is shown that the discrete event simulation model of the DAQ system, tuned using these measurements, predicts the behaviour of the prototype configurations well, after which predictions for the final ATLAS system are presented. With the currently available hardware and software, a system using ~140 ROSs with single 3 GHz CPUs, ~100 SFIs with dual 2.4 GHz CPUs, and ~500 L2PUs with dual 3.06 GHz CPUs can achieve the dataflow for a 100 kHz Level 1 rate, with a 97% reduction at Level 2 and a 3 kHz event-building rate.

ATLAS DataFlow system: The 40 MHz collision rate at the LHC produces about 25 interactions per bunch crossing, resulting in terabytes of data per second, which have to be handled by the detector electronics and the trigger and DAQ system [1]. A Level 1 (L1) trigger system based on custom electronics will reduce the event rate to 75 kHz (upgradeable to 100 kHz; this paper uses the more demanding 100 kHz). The DAQ system is responsible for: the readout of the detector-specific electronics via 1630 point-to-point read-out links (ROLs) hosted by Read-Out Subsystems (ROSs); the collection and provision of Region of Interest (RoI) data to the Level 2 (L2) trigger; the building of events accepted by the L2 trigger; and their subsequent input to the Event Filter (EF) system, where they are subject to further selection criteria. The DAQ also provides the functionality for the configuration, control, information exchange and monitoring of the whole ATLAS detector readout [2]. The applications in the DAQ software dealing with the flow of event and monitoring data as well as the trigger information are called DataFlow applications. The DataFlow applications up to the EF input and their interactions are shown in Figure 1.

Figure 1: ATLAS DAQ DataFlow applications and their interactions (up to the Event Filter).
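
The quoted figures can be cross-checked with simple arithmetic; the per-SFI rate below just divides the aggregate event-building rate over the ~100 SFIs mentioned in the text:

    #include <cstdio>

    int main() {
        const double l1_rate   = 100e3;  // Level 1 accept rate, Hz
        const double l2_reject = 0.97;   // fraction of events rejected at Level 2
        const int    n_sfi     = 100;    // event-building nodes (SFIs), from the text

        const double eb_rate = l1_rate * (1.0 - l2_reject);  // 3 kHz, as quoted
        std::printf("event-building rate: %.0f Hz\n", eb_rate);
        std::printf("per-SFI rate:        %.1f Hz\n", eb_rate / n_sfi);  // ~30 Hz each
    }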


IEEE Nuclear Science Symposium | 2007

The ATLAS event builder

W. Vandelli; M. Abolins; A. Battaglia; H. Beck; R. E. Blair; A. Bogaerts; M. Bosman; M. D. Ciobotaru; R. Cranfield; G. Crone; J. W. Dawson; R. Dobinson; M. Dobson; A. Dos Anjos; G. Drake; Y. Ermoline; R. Ferrari; M. L. Ferrer; D. Francis; S. Gadomski; S. Gameiro; B. Gorini; B. Green; W. Haberichter; C. Haberli; R. Hauser; Christian Hinkelbein; R. E. Hughes-Jones; M. Joos; G. Kieft


IEEE Transactions on Nuclear Science | 2006

Deployment and Use of the ATLAS DAQ in the Combined Test Beam

S. Gadomski; M. Abolins; I. Alexandrov; A. Amorim; C. Padilla-Aranda; E. Badescu; N. Barros; H. Beck; R. E. Blair; D. Burckhart-Chromek; M. Caprini; M. D. Ciobotaru; P. Conde-Muíño; A. Corso-Radu; M. Diaz-Gomez; R. Dobinson; M. Dobson; R. Ferrari; M. L. Ferrer; D. Francis; S. Gameiro; B. Gorini; M. Gruwe; S. Haas; C. Haeberli; R. Hauser; R. E. Hughes-Jones; M. Joos; A. Kazarov; D. Klose


IEEE Transactions on Nuclear Science | 2006

ATLAS DataFlow: the read-out subsystem, results from trigger and data-acquisition system testbed studies and from modeling

J. C. Vermeulen; M. Abolins; I. Alexandrov; A. Amorim; A. Dos Anjos; E. Badescu; N. Barros; H. Beck; R. E. Blair; D. Burckhart-Chromek; M. Caprini; M. D. Ciobotaru; A. Corso-Radu; R. Cranfield; G. Crone; J. W. Dawson; R. Dobinson; M. Dobson; G. Drake; Y. Ermoline; R. Ferrari; M. L. Ferrer; D. Francis; S. Gadomski; S. Gameiro; B. Gorini; B. Green; M. Gruwe; S. Haas; W. Haberichter

Collaboration


Dive into M. D. Ciobotaru's collaborations.

Top Co-Authors

M. Abolins
Michigan State University

R. E. Blair
Argonne National Laboratory

H. Beck
Heidelberg University

R. Dobinson
California State University

R. Ferrari
Massachusetts Institute of Technology

R. Hauser
Michigan State University