
Publication


Featured research published by S. Sushkov.


IEEE Transactions on Nuclear Science | 2007

Strategies and Tools for ATLAS Online Monitoring

W. Vandelli; P. Adragna; D. Burckhart; M. Bosman; M. Caprini; A. Corso-Radu; M. J. Costa; M. Della Pietra; J. Von Der Schmitt; A. Dotti; I. Eschrich; M. L. Ferrer; R Ferrari; Gabriella Gaudio; Haleh Khani Hadavand; S. J. Hillier; M. Hauschild; B. Kehoe; S. Kolos; K. Kordas; R. A. McPherson; M. Mineev; C. Padilla; T. Pauly; I. Riu; C. Roda; D. Salvatore; Ingo Scholtes; S. Sushkov; H. G. Wilkens

ATLAS is one of the four experiments under construction along the Large Hadron Collider (LHC) ring at CERN. The LHC will produce interactions at a center-of-mass energy of √s = 14 TeV with a frequency of 40 MHz. The detector consists of more than 140 million electronic channels. The challenging experimental environment and the extreme detector complexity impose the necessity of a common, scalable, distributed monitoring framework, which can be tuned for optimal use by different ATLAS sub-detectors at the various levels of the ATLAS data flow. This paper presents the architecture of this monitoring software framework and describes its current implementation, which has already been used at the ATLAS beam test activity in 2004. Preliminary performance results, obtained on a computer cluster of 700 nodes, are also presented, showing that the performance of the current implementation is within the range of the final ATLAS requirements.


IEEE-NPSS Real-Time Conference | 2007

Performance of the final Event Builder for the ATLAS Experiment

H. P. Beck; M. Abolins; A. Battaglia; R. E. Blair; A. Bogaerts; M. Bosman; M. D. Ciobotaru; R. Cranfield; G. Crone; J. W. Dawson; R. Dobinson; M. Dobson; A. Dos Anjos; G. Drake; Y. Ermoline; R. Ferrari; M. L. Ferrer; D. Francis; S. Gadomski; S. Gameiro; B. Gorini; B. Green; W. Haberichter; C. Haberli; R. Hauser; Christian Hinkelbein; R. E. Hughes-Jones; M. Joos; G. Kieft; S. Klous

Event data from proton-proton collisions at the LHC will be selected by the ATLAS experiment in a three-level trigger system, which reduces the initial bunch crossing rate of 40 MHz at its first two trigger levels (LVL1+LVL2) to ~3 kHz. At this rate the Event Builder collects the data from all read-out system PCs (ROSs) and provides fully assembled events to the event filter (EF), which is the third-level trigger, to achieve a further rate reduction to ~200 Hz for permanent storage. The Event Builder is based on a farm of O(100) PCs, interconnected via Gigabit Ethernet to O(150) ROSs. These PCs run Linux and multi-threaded software applications implemented in C++. All the ROSs and one third of the Event Builder PCs are already installed and commissioned. We report on performance tests of this initial system, which show promising results towards reaching the final data throughput required for the ATLAS experiment.


IEEE Nuclear Science Symposium | 2005

Overview of the high-level trigger electron and photon selection for the ATLAS experiment at the LHC

A.G. Mello; A. Dos Anjos; S.R. Armstrong; John Baines; C. Bee; M. Biglietti; J. A. Bogaerts; M. Bosman; B. Caron; P. Casado; G. Cataldi; D. Cavalli; G. Comune; P.C. Muino; G. Crone; D. Damazio; A. De Santo; M.D. Gomez; A. Di Mattia; N. Ellis; D. Emeliyanov; B. Epp; S. Falciano; H. Garitaonandia; Simon George; V. M. Ghete; R. Gonçalo; J. Haller; S. Kabana; A. Khomich

The ATLAS experiment is one of two general-purpose experiments due to start running at the Large Hadron Collider in 2007. The short bunch crossing period of 25 ns and the large background of soft-scattering events overlapped in each bunch crossing pose serious challenges that the ATLAS trigger must overcome in order to efficiently select interesting events. The ATLAS trigger consists of a hardware-based first-level trigger and a software-based high-level trigger, which can be further divided into the second-level trigger and the event filter. This paper presents the current state of development of methods to be used in the high-level trigger to select events containing electrons or photons with high transverse momentum. The performance of these methods, resulting from simulation studies, timing measurements, and test beam studies, is presented.


IEEE Transactions on Nuclear Science | 2005

Design, deployment and functional tests of the online event filter for the ATLAS experiment at LHC

S.R. Armstrong; A. Dos Anjos; John Baines; C. P. Bee; M. Biglietti; J. A. Bogaerts; V. Boisvert; M. Bosman; B. Caron; P. Casado; G. Cataldi; D. Cavalli; M. Cervetto; G. Comune; Pc Muino; A. De Santo; M.D. Gomez; M. Dosil; N. Ellis; D. Emeliyanov; B. Epp; F. Etienne; S. Falciano; A. Farilla; Simon George; V. M. Ghete; S. Gonzalez; M. Grothe; S. Kabana; A. Khomich

The Event Filter (EF) selection stage is a fundamental component of the ATLAS Trigger and Data Acquisition architecture. Its primary function is the reduction of the data flow and rate to values acceptable by the mass storage operations and by the subsequent offline data reconstruction and analysis steps. The computing instrument of the EF is organized as a set of independent sub-farms, each connected to one output of the Event Builder (EB) switch fabric. Each sub-farm comprises a number of processors analyzing several complete events in parallel. This paper describes the design of the ATLAS EF system and its deployment in the 2004 ATLAS combined test beam, together with some examples of integrating selection and monitoring algorithms. Since the processing algorithms are not explicitly designed for the EF but are adapted from the offline ones, special emphasis is placed on system reliability and data security, in particular for the case of failures in the processing algorithms. Other key design elements have been system modularity and scalability: the EF shall be able to follow technology evolution and should allow the use of additional processing resources, possibly remotely located.


Archive | 2004

Portable Gathering System for Monitoring and Online Calibration at ATLAS.

P. Conde-Muíño; C. Santamarina-Rios; A. Negri; J. Masik; Philip A. Pinto; S. George; S. Resconi; S. Tapprogge; Z. Qian; V. Vercesi; V. Pérez-Réale; M. Grothe; L. Luminari; John Baines; B. Caron; P. Werner; N. Panikashvili; R. Soluk; A. Di Mattia; A. Kootz; C. Sanchez; B. Venda-Pinto; F. Touchard; N. Nikitin; S. Gonzalez; E. Stefanidis; A. J. Lowe; M. Dosil; V. Boisvert; E. Thomas

During the runtime of any experiment, a central monitoring system that detects problems as soon as they appear plays an essential role. In a large experiment like ATLAS, the online data acquisition system is distributed across the nodes of large farms, each of them running several processes that analyse a fraction of the events. In this architecture, a central process is needed that collects all the monitoring data from the different nodes, produces full-statistics histograms and analyses them. In this paper we present the design of such a system, called the gatherer. It allows any monitoring object, such as a histogram, to be collected from the farm nodes, from any process in the …
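The central collection step described in this abstract can be illustrated with a short sketch. The `Gatherer` class, the `Histogram` type and the `collect`/`summed` interface below are hypothetical stand-ins, not the actual ATLAS gatherer API: each farm node reports a partial histogram under a name, and the gatherer sums the per-node bin contents into a full-statistics histogram.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch (not the ATLAS API): a histogram is a vector of bin
// contents; the gatherer sums the partial histograms reported by farm nodes.
using Histogram = std::vector<long>;

class Gatherer {
public:
    // Merge a partial histogram received from one node into the central total.
    void collect(const std::string& name, const Histogram& partial) {
        Histogram& total = totals_[name];
        if (total.size() < partial.size()) total.resize(partial.size(), 0);
        for (std::size_t i = 0; i < partial.size(); ++i) total[i] += partial[i];
    }
    // Full-statistics histogram: the bin-wise sum over all nodes seen so far.
    const Histogram& summed(const std::string& name) const {
        return totals_.at(name);
    }
private:
    std::map<std::string, Histogram> totals_;
};
```

Bin-wise summation like this is the natural merge operation for counting histograms, since the per-node counts are independent.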


IEEE-NPSS Real-Time Conference | 2005

Development and tests of the event filter for the ATLAS experiment

M. Bosman; C. Meessen; A. Negri; E. Segura; S. Sushkov; F. Touchard; S. Wheeler

The trigger and data acquisition (TDAQ) system of the ATLAS experiment comprises three stages of event selection. The event filter (EF) is the third-level trigger and is implemented in software. Its primary goal is the final selection of interesting events, reducing the event rate down to the ~200 Hz acceptable by the mass storage. The EF system is implemented as a set of independent sub-farms of commodity components, each connected on one side to the event builder subsystem, from which it receives full events, and on the other side to the sub-farm output nodes, which forward the selected events to mass storage. A distinctive feature of the event filter is its ability to use the full event data for selection, directly based on the offline reconstruction and analysis algorithms. Besides its main duties of event triggering and data transportation, the EF can also provide additional functionality, such as monitoring of the selected events and online calibration of the ATLAS detectors. Significant design improvements are currently under development to provide this additional functionality. The event filter was deployed and tested in data triggering at the ATLAS combined test beam at CERN in 2004. The EF is also subject to various tests on dedicated test-bed computer clusters, where both its functionality and its performance are studied. Special tests are carried out when running the EF software components on large-scale EF farms with hundreds of computers. This paper describes the current activities and plans for the development and testing of the ATLAS event filter system.


IEEE Transactions on Nuclear Science | 2008

The ATLAS Event Builder

W. Vandelli; M. Abolins; A. Battaglia; H. P. Beck; R. E. Blair; A. Bogaerts; M. Bosman; M. D. Ciobotaru; R. Cranfield; G. Crone; J. W. Dawson; R. Dobinson; M. Dobson; A. Dos Anjos; G. Drake; Y. Ermoline; R. Ferrari; M. L. Ferrer; D. Francis; S. Gadomski; S. Gameiro; B. Gorini; B. Green; W. Haberichter; C. Haberli; R. Hauser; Christian Hinkelbein; R. E. Hughes-Jones; M. Joos; G. Kieft

Event data from proton-proton collisions at the LHC will be selected by the ATLAS experiment in a three-level trigger system, which, at its first two trigger levels (LVL1+LVL2), reduces the initial bunch crossing rate of 40 MHz to ~3 kHz. At this rate, the Event Builder collects the data from the readout system PCs (ROSs) and provides fully assembled events to the Event Filter (EF). The EF is the third trigger level and its aim is to achieve a further rate reduction to ~200 Hz for permanent storage. The Event Builder is based on a farm of O(100) PCs, interconnected via Gigabit Ethernet to O(150) ROSs. These PCs run Linux and multi-threaded software applications implemented in C++. All the ROSs, and substantial fractions of the Event Builder and Event Filter PCs, have been installed and commissioned. We report on performance tests of this initial system, which is capable of going beyond the required data rates and bandwidths for event building for the ATLAS experiment.
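The assembly step these abstracts describe, collecting read-out fragments from the ROSs into complete events, can be sketched in C++ (the abstracts note the real applications are multi-threaded C++; this single-threaded `EventBuilder` class and its interface are hypothetical illustrations): an event counts as fully built once one fragment from each read-out system has arrived.

```cpp
#include <cassert>
#include <map>
#include <vector>

// Hypothetical sketch (not the ATLAS code): a read-out fragment carries the
// identifiers of its event and of the read-out system (ROS) that produced it.
struct Fragment { unsigned eventId; unsigned rosId; };

class EventBuilder {
public:
    explicit EventBuilder(unsigned nRos) : nRos_(nRos) {}

    // Buffer the fragment; return true when it completes its event, i.e. when
    // one fragment from each of the nRos_ read-out systems has been collected.
    bool add(const Fragment& f) {
        std::vector<unsigned>& frags = pending_[f.eventId];
        frags.push_back(f.rosId);
        if (frags.size() == nRos_) {  // event fully assembled
            pending_.erase(f.eventId);
            ++built_;
            return true;
        }
        return false;
    }

    unsigned built() const { return built_; }

private:
    unsigned nRos_;
    unsigned built_ = 0;
    std::map<unsigned, std::vector<unsigned>> pending_;  // eventId -> ROS ids seen
};
```

Keying the pending buffers by event ID is what lets fragments from different events interleave freely on the network while still being assembled correctly.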


IEEE Transactions on Nuclear Science | 2006

Online muon reconstruction in the ATLAS level-2 trigger system

S.R. Armstrong; A. Dos Anjos; John Baines; C. P. Bee; M. Biglietti; J. A. Bogaerts; V. Boisvert; M. Bosman; B. Caron; P. Casado; G. Cataldi; D. Cavalli; M. Cervetto; G. Comune; Pc Muino; A. De Santo; A. Di Mattia; M.D. Gomez; M. Dosil; N. Ellis; D. Emeliyanov; B. Epp; S. Falciano; A. Farilla; Simon George; V. M. Ghete; S. Gonzalez; M. Grothe; S. Kabana; A. Khomich

To cope with the 40 MHz event production rate of the LHC, the trigger of the ATLAS experiment selects events in three sequential steps of increasing complexity and accuracy, whose final results are close to the offline reconstruction. The Level-1, implemented with custom hardware, identifies physics objects within Regions of Interest and performs a first reduction of the event rate to 75 kHz. The higher trigger levels, Level-2 and Level-3, provide a software-based event selection which further reduces the event rate to about 100 Hz. This paper presents the algorithm (μFast) employed at Level-2 to confirm the muon candidates flagged by the Level-1. μFast identifies hits of muon tracks inside the barrel region of the Muon Spectrometer and provides a precise measurement of the muon momentum at the production vertex. The algorithm must process the Level-1 muon output rate (~20 kHz), so particular care has been taken over its optimization. The result is a very fast track reconstruction algorithm with good physics performance which, in some cases, approaches that of the offline reconstruction: it finds muon tracks with an efficiency of about 95% and computes the p_T of prompt muons with a resolution of 5.5% at 6 GeV and 4.0% at 20 GeV. The algorithm requires an overall execution time of ~1 ms on a 100 SpecInt95 machine and has been tested in the online environment of the ATLAS detector test beam.


IEEE Transactions on Nuclear Science | 2008

Performance of the Final Event Builder for the ATLAS Experiment

H. P. Beck; M. Abolins; A. Battaglia; R. E. Blair; A. Bogaerts; M. Bosman; M. D. Ciobotaru; R. Cranfield; G. Crone; J. W. Dawson; R. Dobinson; M. Dobson; A. Dos Anjos; G. Drake; Y. Ermoline; R. Ferrari; M. L. Ferrer; D. Francis; S. Gadomski; S. Gameiro; B. Gorini; B. Green; W. Haberichter; C. Haberli; R. Hauser; Christian Hinkelbein; R. E. Hughes-Jones; M. Joos; G. Kieft; S. Klous

Event data from proton-proton collisions at the LHC will be selected by the ATLAS experiment in a three-level trigger system, which reduces the initial bunch crossing rate of 40 MHz at its first two trigger levels (LVL1+LVL2) to ~3 kHz. At this rate the Event Builder collects the data from all Read-Out System PCs (ROSs) and provides fully assembled events to the Event Filter (EF), which is the third-level trigger, to achieve a further rate reduction to ~200 Hz for permanent storage. The Event Builder is based on a farm of O(100) PCs, interconnected via Gigabit Ethernet to O(150) ROSs. These PCs run Linux and multi-threaded software applications implemented in C++. All the ROSs and one third of the Event Builder PCs are already installed and commissioned. Performance measurements have been carried out on this initial system; they show promising results that the required final data rates and bandwidth for the ATLAS Event Builder are within reach.


IEEE Transactions on Nuclear Science | 2006

Implementation and performance of the seeded reconstruction for the ATLAS event filter

C. Santamarina; Pc Muino; A. Dos Anjos; S.R. Armstrong; Jt Baines; C. P. Bee; M. Biglietti; J. A. Bogaerts; M. Bosman; B. Caron; P. Casado; G. Cataldi; D. Cavalli; G. Comune; G. Crone; D. Damazio; A. De Santo; M.D. Gomez; A. Di Mattia; N. Ellis; D. Emeliyanov; B. Epp; S. Falciano; H. Garitaonandia; Simon George; A.G. Mello; V. M. Ghete; R. Gonçalo; J. Haller; S. Kabana

ATLAS is one of the four major Large Hadron Collider (LHC) experiments that will start data taking in 2007. It is designed to cover a wide range of physics topics. The ATLAS trigger system has to be able to reduce an initial 40 MHz event rate, corresponding to an average of 23 proton-proton inelastic interactions per 25 ns bunch crossing, to the 200 Hz admissible by the Data Acquisition System. The ATLAS trigger is divided into three different levels. The first provides a signal describing an event signature using dedicated custom hardware. This signature must be confirmed by the High Level Trigger (HLT), which, using commercial computing farms, performs an event reconstruction by running a sequence of algorithms. The validity of the signature is checked after every algorithm execution. A main characteristic of the ATLAS HLT is that only the data in a certain window around the position flagged by the first-level trigger are analyzed. In this work, the performance of one sequence that runs at the Event Filter level (the third level) is demonstrated. The goal of this sequence is to reconstruct and identify high transverse momentum electrons by performing cluster reconstruction in the electromagnetic calorimeter, track reconstruction in the Inner Detector, and cluster-track matching.
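The stepwise confirmation this abstract describes, where the signature's validity is checked after every algorithm execution, amounts to an early-rejection chain. A minimal sketch, with hypothetical `Event` fields and cut values standing in for the real cluster and track reconstruction algorithms:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Hypothetical sketch: the Event fields and the cuts applied to them stand in
// for the real calorimeter-cluster and inner-detector-track algorithms.
struct Event { double clusterEt; bool hasTrack; };

// Run the selection steps in order; reject the event at the first step that
// fails to confirm the signature (early rejection saves processing time).
bool runSequence(const Event& ev,
                 const std::vector<std::function<bool(const Event&)>>& steps) {
    for (const std::function<bool(const Event&)>& step : steps) {
        if (!step(ev)) return false;  // signature not confirmed: reject
    }
    return true;  // every step confirmed the signature: accept
}
```

Ordering the cheaper steps first means most background events are discarded before the expensive reconstruction ever runs, which is the point of checking validity after each algorithm.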

Collaboration


Top co-authors of S. Sushkov:

A. Dos Anjos (University of Wisconsin-Madison)

S. Falciano (Sapienza University of Rome)

G. Comune (Michigan State University)

S.R. Armstrong (Brookhaven National Laboratory)

M. Biglietti (University of Naples Federico II)

D. Emeliyanov (Rutherford Appleton Laboratory)