Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where P. Zelnicek is active.

Publication


Featured research published by P. Zelnicek.


Journal of Physics: Conference Series | 2011

Autonomous System Management for the ALICE High-Level-Trigger Cluster using the SysMES framework

Stefan Boettger; Timo Breitner; U. Kebschull; Camilo Lara; J. Ulrich; P. Zelnicek

The ALICE HLT cluster is a heterogeneous computer cluster currently consisting of 200 nodes. It is used for on-line processing of data produced by the ALICE detector during the next 10 or more years of operation. A major management challenge is to reduce the number of manual interventions in case of failures. Classical approaches such as monitoring tools lack mechanisms to detect situations with multiple failure conditions and to react to such situations automatically. We have therefore developed SysMES (System Management for networked Embedded Systems and Clusters), a decentralized, fault-tolerant tool-set for autonomous management. It comprises a monitoring facility for detecting the working states of the distributed resources, a central interface for visualizing and managing the cluster environment, and a rule system coupling the monitoring and management aspects. We have developed a formal language in which an administrator can define complex spatial and temporal conditions for failure states together with the corresponding reactions. For the HLT we have defined a set of rules for known and recurring problem states, so that SysMES takes care of most of the day-to-day administrative work.
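The abstract does not reproduce SysMES's formal rule language, but the core idea it describes can be sketched: a rule couples a temporal/spatial condition over the monitored states of many nodes to an automatic reaction. The following is a minimal Python illustration with entirely hypothetical names, not the actual SysMES implementation.

```python
import time

class Rule:
    """Couples a condition over monitored node states to an automatic reaction.
    A simplified stand-in for a SysMES-style rule; names are hypothetical."""
    def __init__(self, condition, reaction):
        self.condition = condition
        self.reaction = reaction

    def evaluate(self, states):
        # Fire the reaction only when the (possibly multi-node, temporal)
        # condition holds; otherwise do nothing.
        if self.condition(states):
            return self.reaction(states)
        return None

def many_disks_full(states, now=None, window=60.0, threshold=3):
    """Temporal, multi-failure condition: more than `threshold` nodes report
    'disk_full' within the same `window`-second interval -- the kind of
    situation a simple per-metric alarm would not capture."""
    now = time.time() if now is None else now
    recent = [s for s in states
              if s["state"] == "disk_full" and now - s["timestamp"] <= window]
    return len(recent) > threshold

def clean_logs(states):
    """Reaction: trigger a cleanup action on each affected node instead of
    paging an administrator."""
    return ["clean_logs:" + s["node"] for s in states if s["state"] == "disk_full"]

rule = Rule(lambda s: many_disks_full(s, now=1000.0), clean_logs)
states = [{"node": f"n{i}", "state": "disk_full", "timestamp": 990.0}
          for i in range(4)]
actions = rule.evaluate(states)
print(actions)  # four clean_logs actions, one per failing node
```

In the paper's terms, a set of such rules for known, recurring problem states is what lets the system react without manual intervention.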


IEEE-NPSS Real-Time Conference | 2010

ALICE HLT high speed tracking and vertexing

S. Gorbunov; K. Aamodt; T. Alt; H. Appelshäuser; A. Arend; Bruce Becker; S. Böttger; T. Breitner; H. Büsching; S. Chattopadhyay; J. Cleymans; I. Das; Øystein Djuvsland; H. Erdal; R. Fearick; Ø. Haaland; P. T. Hille; S. Kalcher; K. Kanaki; U. Kebschull; I. Kisel; M. Kretz; C. Lara; S. Lindal; V. Lindenstruth; A. A. Masoodi; G. Øvrebekk; R. Panse; J. Peschek; M. Ploskon

The on-line event reconstruction in ALICE is performed by the High Level Trigger, which should process up to 2000 events per second in proton-proton collisions and up to 200 central events per second in heavy-ion collisions, corresponding to an input data stream of 30 GB/s.
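The quoted rates imply an average event size; a quick consistency check, assuming the 30 GB/s figure corresponds to the heavy-ion scenario and GB means 10^9 bytes (the proton-proton number is shown only for comparison under the same assumption):

```python
input_rate_bytes = 30e9   # 30 GB/s input stream to the HLT
heavy_ion_rate = 200      # central heavy-ion events per second
pp_rate = 2000            # proton-proton events per second

# Average event size implied by each rate at the full 30 GB/s stream.
hi_event_size_mb = input_rate_bytes / heavy_ion_rate / 1e6
pp_event_size_mb = input_rate_bytes / pp_rate / 1e6
print(hi_event_size_mb, pp_event_size_mb)  # 150.0 and 15.0 (MB per event)
```

So a central heavy-ion event would average on the order of 150 MB, an order of magnitude larger than a proton-proton event at the same aggregate rate.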


IEEE-NPSS Real-Time Conference | 2010

Event Reconstruction Performance of the ALICE High Level Trigger for p + p Collisions

M. Richter; K. Aamodt; T. Alt; H. Appelshäuser; A. Arend; Bruce Becker; S. Böttger; T. Breitner; H. Büsching; C. Cicalo; S. Chattopadhyay; J. Cleymans; I. Das; Øystein Djuvsland; H. Erdal; R. Fearick; S. Gorbunov; Ø. Haaland; P. T. Hille; S. Kalcher; K. Kanaki; U. Kebschull; I. Kisel; M. Kretz; C. Lara; S. Lindal; V. Lindenstruth; A. A. Masoodi; G. Øvrebekk; R. Panse

The ALICE High Level Trigger comprises a large computing cluster, dedicated interfaces and software applications. It allows on-line event reconstruction of the full data stream of the ALICE experiment at up to 25 GByte/s. The commissioning campaign has passed an important phase since the startup of the Large Hadron Collider in November 2009. The system has been transferred into continuous operation with focus on the event reconstruction and first simple trigger applications. The paper reports for the first time on the achieved event reconstruction performance in the ALICE central barrel region.


International Conference on Computing in High Energy and Nuclear Physics (CHEP 2010) | 2011


Marco Meoni; Stefan Boettger; P. Zelnicek; V. Lindenstruth; U. Kebschull

The HLT (High-Level Trigger) group of the ALICE experiment at the LHC has prepared a virtual Parallel ROOT Facility (PROOF)-enabled cluster (HAF, the HLT Analysis Facility) for fast physics analysis, detector calibration and reconstruction of data samples. The HLT cluster currently consists of 2860 CPU cores and 175 TB of storage. Its purpose is the online filtering of the relevant part of the data produced by the particle detector. However, data taking does not run continuously, and exploiting unused cluster resources for other applications is highly desirable, improving the usage-cost ratio of the HLT cluster. Unused computing resources are therefore dedicated to a PROOF-enabled virtual cluster available to the entire collaboration. This setup is aimed in particular at the prototyping phase of analyses that need a high number of development iterations and a short response time, e.g. tuning of analysis cuts, calibration and alignment. HAF machines are enabled and disabled upon user request to start or complete analysis tasks. This is achieved by a virtual-machine scheduling framework which dynamically assigns and migrates virtual machines running PROOF workers to unused physical resources. Using this approach we extend the HLT usage scheme to run both online and offline computing, thereby optimizing resource usage.
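The scheduling framework itself is not detailed in the abstract; a minimal sketch of the underlying idea, with entirely hypothetical names, is: place PROOF-worker virtual machines only on physical nodes that are currently idle, and migrate them away when data taking reclaims a node.

```python
def schedule_workers(nodes, vms):
    """Greedy placement of PROOF-worker VMs onto idle physical nodes.
    nodes: dict node_name -> 'idle' or 'data_taking'.
    Returns a dict vm -> node; VMs that cannot be placed stay unassigned.
    Illustrative only; the real HAF scheduler is not described here."""
    placement = {}
    idle = [n for n, state in nodes.items() if state == "idle"]
    for vm, node in zip(vms, idle):
        placement[vm] = node
    return placement

def evict_on_data_taking(placement, nodes):
    """When a node returns to data taking, its VM must migrate away:
    drop every VM whose host is no longer idle."""
    return {vm: n for vm, n in placement.items() if nodes[n] == "idle"}

nodes = {"hlt01": "idle", "hlt02": "data_taking", "hlt03": "idle"}
placement = schedule_workers(nodes, ["proof-vm-1", "proof-vm-2", "proof-vm-3"])
print(placement)  # two VMs placed on the idle nodes, one left unassigned

nodes["hlt01"] = "data_taking"  # a run starts: hlt01 is reclaimed for the HLT
placement = evict_on_data_taking(placement, nodes)
print(placement)  # only the VM on hlt03 remains
```

A real scheduler would additionally live-migrate the evicted VM to another idle node rather than drop it, which is the behavior the abstract attributes to the framework.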

Collaboration


Dive into P. Zelnicek's collaborations.

Top Co-Authors

U. Kebschull (Goethe University Frankfurt)
V. Lindenstruth (Frankfurt Institute for Advanced Studies)
H. Erdal (University of Bergen)
A. Arend (Goethe University Frankfurt)
H. Appelshäuser (Goethe University Frankfurt)