Publications


Featured research published by T. Alt.


IEEE Transactions on Nuclear Science | 2011

ALICE HLT High Speed Tracking on GPU

S. Gorbunov; David Rohr; K. Aamodt; T. Alt; H. Appelshäuser; A. Arend; M. Bach; Bruce Becker; Stefan Böttger; Timo Breitner; Henner Büsching; S. Chattopadhyay; J. Cleymans; C. Cicalò; I. Das; Øystein Djuvsland; Heiko Engel; Hege Austrheim Erdal; R. Fearick; Ø. Haaland; P. T. Hille; S. Kalcher; K. Kanaki; U. Kebschull; I. Kisel; M. Kretz; Camilo Lara; S. Lindal; V. Lindenstruth; Arshad Ahmad Masoodi

The on-line event reconstruction in ALICE is performed by the High Level Trigger, which should process up to 2000 events per second in proton-proton collisions and up to 300 central events per second in heavy-ion collisions, corresponding to an input data stream of 30 GB/s. In order to fulfill the time requirements, a fast on-line tracker has been developed. The algorithm combines a Cellular Automaton method for fast pattern recognition with a Kalman Filter method for fitting the found trajectories and for the final track selection. The tracker was adapted to run on Graphics Processing Units (GPUs) using the NVIDIA Compute Unified Device Architecture (CUDA) framework. The implementation of the algorithm had to be adjusted at many points to allow for efficient usage of the graphics cards. In particular, achieving a good overall workload for many processor cores, efficient transfer to and from the GPU, and optimized utilization of the different memories the GPU offers turned out to be critical. To cope with these problems, a dynamic scheduler was introduced, which redistributes the workload among the processor cores. Additionally, a pipeline was implemented so that the tracking on the GPU, the initialization and the output processed by the CPU, and the DMA transfer can overlap. The GPU tracking algorithm significantly outperforms the CPU version for large events while entirely maintaining its efficiency.
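
The overlap of CPU initialization, GPU tracking and DMA transfer described above can be pictured as a simple three-stage pipeline. The following C++ sketch is illustrative only: the stage functions (initializeEvent, trackOnGpu, writeOutput) are placeholders assumed for this example, and a host thread stands in for the asynchronous GPU work that the real tracker schedules via CUDA.

```cpp
// Illustrative three-stage pipeline: CPU initialization, GPU tracking and
// CPU output run concurrently on different events, so that the GPU does not
// have to wait for host-side work. The stage functions are placeholders
// (assumptions for this sketch), not the actual ALICE HLT tracker code.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>

struct Event { int id = -1; };

// Minimal thread-safe queue used to hand events from one stage to the next.
template <typename T>
class Channel {
 public:
  void push(T v) {
    { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
    cv_.notify_one();
  }
  void close() {
    { std::lock_guard<std::mutex> lk(m_); closed_ = true; }
    cv_.notify_all();
  }
  std::optional<T> pop() {  // blocks until an event arrives or the queue is closed
    std::unique_lock<std::mutex> lk(m_);
    cv_.wait(lk, [&] { return !q_.empty() || closed_; });
    if (q_.empty()) return std::nullopt;
    T v = std::move(q_.front());
    q_.pop();
    return v;
  }
 private:
  std::mutex m_;
  std::condition_variable cv_;
  std::queue<T> q_;
  bool closed_ = false;
};

// Placeholder stage implementations (hypothetical).
Event initializeEvent(int id)     { return Event{id}; }                     // CPU pre-processing
Event trackOnGpu(Event e)         { return e; }                             // GPU tracking
void  writeOutput(const Event& e) { std::printf("event %d done\n", e.id); } // CPU output / DMA back

int main() {
  const int nEvents = 8;
  Channel<Event> toGpu, toOutput;

  std::thread initStage([&] {    // stage 1: CPU initialization
    for (int i = 0; i < nEvents; ++i) toGpu.push(initializeEvent(i));
    toGpu.close();
  });
  std::thread gpuStage([&] {     // stage 2: GPU tracking
    while (auto e = toGpu.pop()) toOutput.push(trackOnGpu(*e));
    toOutput.close();
  });
  std::thread outputStage([&] {  // stage 3: CPU output
    while (auto e = toOutput.pop()) writeOutput(*e);
  });

  initStage.join();
  gpuStage.join();
  outputStage.join();
  return 0;
}
```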


IEEE Nuclear Science Symposium | 2005

The ALICE TPC readout control unit

C.G. Gutierrez; R. Campagnolo; A. Junique; L. Musa; J. Alme; J. Lien; B. Pommersche; M. Richter; K. Røed; D. Röhrich; K. Ullaland; T. Alt

The front-end electronics for the ALICE time projection chamber (TPC) consists of about 560,000 channels packed in 128-channel units (front end cards). Every front end card (FEC) incorporates the circuits to amplify, shape, digitize, process and buffer the TPC pad signals. From the control and readout point of view, the FECs are organized in 216 partitions, each being an independent system steered by one readout control unit (RCU). The RCU, which is physically part of the on-detector electronics, implements the interface to the data acquisition (DAQ), the trigger and timing circuit (TTC) and the detector control system (DCS). It broadcasts the trigger and clock information to the FECs, performs the initialization and readout via a high-bandwidth bus, and implements monitoring and safety control functions via a dedicated I2C-like link. This paper addresses the architecture and the system performance of the RCU.
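
As a rough sanity check of the figures quoted in the abstract, 560,000 channels in 128-channel cards correspond to roughly 4,400 FECs, i.e. about 20 FECs per readout partition. The short C++ sketch below merely reproduces that arithmetic and assumes nothing beyond the quoted numbers.

```cpp
// Back-of-the-envelope check of the readout partitioning quoted in the
// abstract: channels per FEC and number of partitions give the approximate
// number of FECs steered by each RCU.
#include <cstdio>

int main() {
  const int channels       = 560000;  // total TPC front-end channels (approximate)
  const int channelsPerFec = 128;     // channels per front end card
  const int partitions     = 216;     // readout partitions, one RCU each

  const int fecs = channels / channelsPerFec;  // ~4375 FECs in total
  std::printf("FECs in total: %d, FECs per RCU: about %d\n",
              fecs, fecs / partitions);        // about 20 FECs per RCU
  return 0;
}
```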


Journal of Physics: Conference Series | 2012

ALICE HLT TPC Tracking of Pb-Pb Events on GPUs

D. Rohr; A. Szostak; M. Kretz; T. Kollegger; T. Breitner; T. Alt; S. Gorbunov

The online event reconstruction for the ALICE experiment at CERN requires the processing capability to handle central Pb-Pb collisions at a rate of more than 200 Hz, corresponding to an input data rate of about 25 GB/s. The reconstruction of particle trajectories in the Time Projection Chamber (TPC) is the most compute-intensive step. The TPC online tracker implementation combines the principle of the cellular automaton and the Kalman filter. It has been accelerated by the use of graphics cards (GPUs). A pipelined processing scheme allows the tracking on the GPU, the data transfer, and the preprocessing on the CPU to run in parallel. In order for the CPU pre- and postprocessing to keep pace with the GPU, the pipeline uses multiple threads. Splitting the tracking into multiple phases, which first search for short local track segments, improves data locality and makes the algorithm well suited to run on a GPU. Owing to special optimizations, this approach is not inferior to a global one. Because of non-associative floating-point arithmetic, a binary comparison of the GPU and CPU trackers is infeasible. A track-by-track and cluster-by-cluster comparison shows a concordance of 99.999%. With current hardware, the GPU tracker outperforms the CPU version by about a factor of three, leaving the processor available for other tasks.
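
Because the GPU and CPU results cannot be compared bit by bit, the validation described above matches tracks and clusters within tolerances rather than byte-wise. The C++ sketch below is a hypothetical illustration of such a tolerance-based comparison; the Track structure, the matching criterion and the tolerance values are assumptions made for this example, not the actual HLT data model.

```cpp
// Illustrative track-by-track comparison of CPU and GPU tracker output.
// The Track structure, the matching criterion and the tolerances are
// assumptions made for this sketch only.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Track {
  double pt;                    // transverse momentum
  double eta;                   // pseudorapidity
  std::vector<int> clusterIds;  // clusters attached to the track
};

// Two tracks are considered concordant if their parameters agree within a
// tolerance and they carry the same clusters.
bool concordant(const Track& a, const Track& b, double relTol = 1e-4) {
  if (std::fabs(a.pt - b.pt) > relTol * std::fabs(a.pt)) return false;
  if (std::fabs(a.eta - b.eta) > relTol) return false;
  return a.clusterIds == b.clusterIds;
}

// Fraction of matching tracks, assuming both trackers emit tracks in the same
// order (a simplification compared with the real validation).
double concordance(const std::vector<Track>& cpu, const std::vector<Track>& gpu) {
  const std::size_t n = std::min(cpu.size(), gpu.size());
  std::size_t matched = 0;
  for (std::size_t i = 0; i < n; ++i)
    if (concordant(cpu[i], gpu[i])) ++matched;
  return n ? static_cast<double>(matched) / n : 1.0;
}

int main() {
  std::vector<Track> cpu = {{1.2000, 0.50, {1, 2, 3}}, {3.4000, -1.10, {4, 5}}};
  std::vector<Track> gpu = {{1.2001, 0.50, {1, 2, 3}}, {3.4000, -1.10, {4, 5}}};
  std::printf("concordance = %.5f\n", concordance(cpu, gpu));
  return 0;
}
```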


Journal of Instrumentation | 2013

RCU2 — The ALICE TPC readout electronics consolidation for Run2

J. Alme; T. Alt; Lars Bratrud; P. Christiansen; F. Costa; Erno David; T. Gunji; Tivadar Kiss; R. Langoy; J. Lien; Christian Lippmann; A. Oskarsson; A. Ur Rehman; K. Røed; D. Röhrich; A. Tarantola; C. Torgersen; I. Nikolai Torsvik; K. Ullaland; A. Velure; Shiming Yang; C. Zhao; H. Appelshaeuser; Lennart Österman

This paper presents the solution for optimizing the ALICE TPC readout for running at full energy in the Run 2 period after 2014. For data taking with heavy-ion beams, an event readout rate of 400 Hz with low dead time is envisaged for the ALICE central barrel detectors during these three years. A new component, the Readout Control Unit 2 (RCU2), is being designed to increase the present readout rate by a factor of up to 2.6. The immunity to radiation-induced errors will also be significantly improved by the new design.


IEEE Transactions on Nuclear Science | 2008

High Level Trigger Applications for the ALICE Experiment

M. Richter; K. Aamodt; T. Alt; S. Bablok; C. Cheshkov; P. T. Hille; V. Lindenstruth; G. Øvrebekk; M. Płoskoń; S. Popescu; D. Röhrich; T. Steinbeck; J. Thäder

For the ALICE experiment at the LHC, a high level trigger (HLT) system for on-line event selection and data compression has been developed, and a computing cluster of several hundred dual-processor nodes is being installed. A major system integration test was carried out during the commissioning of the time projection chamber (TPC), where the HLT also provides a monitoring system. All major parts, such as a small computing cluster, hardware input devices, the on-line data transportation framework, and the HLT analysis, could be tested successfully. A common interface for HLT processing components has been designed to run the components from either the on-line or off-line analysis framework without changes. The interface adapts the component to the needs of the on-line processing and at the same time allows the developer to use the off-line framework for easy development, debugging, and benchmarking. Results can be compared directly. For the upcoming commissioning of the whole detector, the HLT is currently being prepared to run on-line data analysis for the main detectors, e.g. the inner tracking system (ITS), the TPC, and the transition radiation detector (TRD). The HLT processing capability is indispensable for the photon spectrometer (PHOS), where the on-line pulse shape analysis reduces the data volume by a factor of 20. A common monitoring framework is in place and detector calibration algorithms have been ported to the HLT. The article briefly describes the architecture of the HLT system. It focuses on typical applications and component development.
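
The common component interface mentioned above decouples the analysis code from the framework that drives it, so the same component can run online or be replayed offline for debugging and benchmarking. The C++ sketch below only illustrates that idea; the class and function names are hypothetical and do not correspond to the real ALICE HLT component API.

```cpp
// Sketch of a processing component that can be driven either by an online or
// an offline framework through one common interface. All names here are
// hypothetical; this is not the real ALICE HLT component API.
#include <cstdio>
#include <string>
#include <vector>

using DataBlock = std::vector<unsigned char>;

// The single interface every processing component implements.
class ProcessingComponent {
 public:
  virtual ~ProcessingComponent() = default;
  virtual void init() = 0;
  virtual DataBlock process(const DataBlock& input) = 0;
};

// Example component: a real one would run e.g. cluster finding;
// this one simply passes the data through.
class PassThroughComponent : public ProcessingComponent {
 public:
  void init() override { std::puts("component initialized"); }
  DataBlock process(const DataBlock& input) override { return input; }
};

// "Online" driver: feeds blocks to the component as they arrive.
void runOnline(ProcessingComponent& c, const std::vector<DataBlock>& stream) {
  c.init();
  for (const auto& block : stream) c.process(block);
}

// "Offline" driver: replays recorded data through the very same component
// for development, debugging and benchmarking.
void runOffline(ProcessingComponent& c, const std::string& inputFile) {
  c.init();
  std::printf("replaying %s through the same component\n", inputFile.c_str());
}

int main() {
  PassThroughComponent component;
  runOnline(component, {DataBlock{1, 2, 3}});
  runOffline(component, "recorded_run.dat");
  return 0;
}
```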


Journal of Instrumentation | 2016

First performance results of the ALICE TPC Readout Control Unit 2

C. Zhao; J. Alme; T. Alt; H. Appelshäuser; Lars Bratrud; A. Castro; F. Costa; Erno David; Taku Gunji; S. Kirsch; Tivadar Kiss; R. Langoy; J. Lien; Christian Lippmann; A. Oskarsson; A. Rehman; K. Røed; D. Röhrich; Yoko Sekiguchi; Meghan Stuart; K. Ullaland; A. Velure; Shiming Yang; Lennart Österman

This paper presents the first performance results of the ALICE TPC Readout Control Unit 2 (RCU2). With the upgraded hardware topology and the new readout scheme in the FPGA design, the RCU2 is designed to achieve twice the readout speed of the present Readout Control Unit. Design choices such as using the flash-based Microsemi SmartFusion2 FPGA and applying mitigation techniques in the interfaces and the FPGA design ensure a high degree of radiation tolerance. The paper presents the system-level irradiation test results as well as the first commissioning results of the RCU2, and concludes with a discussion of the planned firmware updates.


Archive | 2015

Inclusive photon production at forward rapidities in proton–proton collisions at √s = 0.9, 2.76 and 7 TeV

ALICE Collaboration; B. Abelev; J. Adam; D. Adamová; M. M. Aggarwal; Gianluca Aglieri Rinella; M. Agnello; A. Agostinelli; N. Agrawal; Z. Ahammed; N. Ahmad; I. Ahmed; S. U. Ahn; S. A. Ahn; I. Aimo; S. Aiola; M. Ajaz; A. Akindinov; Sk Noor Alam; D. Aleksandrov; B. Alessandro; D. Alexandre; A. Alici; A. Alkin; J. Alme; T. Alt; S. Altinpinar; I. Altsybeev; C. Alves Garcia Prado; C. Andrei

The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities (2.3 < η < 3.9) in proton–proton collisions at three center-of-mass energies, √s = 0.9, 2.76 and 7 TeV, using the ALICE detector. It is observed that the increase in the average photon multiplicity as a function of beam energy is compatible with both a logarithmic and a power-law dependence. The relative increases in average photon multiplicity produced in inelastic pp collisions at 2.76 and 7 TeV center-of-mass energies with respect to 0.9 TeV are 37.2 ± 0.3% (stat) ± 8.8% (sys) and 61.2 ± 0.3% (stat) ± 7.6% (sys), respectively. The photon multiplicity distributions at all center-of-mass energies are well described by negative binomial distributions. The multiplicity distributions are also presented in terms of KNO variables. The results are compared to model predictions, which are found in general to underestimate the data at large photon multiplicities, in particular at the highest center-of-mass energy. Limiting fragmentation behavior of photons has been explored with the data, but is not observed in the measured pseudorapidity range.
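
For context, the negative binomial distribution referred to above is conventionally written in terms of the mean multiplicity ⟨n⟩ and a shape parameter k; this is the standard parameterization used in multiplicity studies, and the fitted parameter values are not quoted in the abstract:

P(n) = \frac{\Gamma(n+k)}{\Gamma(k)\, n!} \left(\frac{\langle n\rangle}{\langle n\rangle + k}\right)^{n} \left(\frac{k}{\langle n\rangle + k}\right)^{k}

In the KNO representation the same data are shown as the scaled distribution ⟨n⟩ P(n) versus z = n/⟨n⟩.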


Journal of Physics: Conference Series | 2014

O2: A novel combined online and offline computing system for the ALICE Experiment after 2018

Ananya; A Alarcon Do Passo Suaide; C. Alves Garcia Prado; T. Alt; L. Aphecetche; N Agrawal; A Avasthi; M. Bach; R. Bala; G. G. Barnaföldi; A. Bhasin; J. Belikov; F. Bellini; L. Betev; T. Breitner; P. Buncic; F. Carena; S. Chapeland; V. Chibante Barroso; F Cliff; F. Costa; L Cunqueiro Mendez; Sadhana Dash; C Delort; E. Dénes; R. Divià; B. Doenigus; H. Engel; D. Eschweiler; U. Fuchs

ALICE (A Large Ion Collider Experiment) is a detector dedicated to the study of heavy-ion collisions, exploring the physics of strongly interacting nuclear matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). After the second long shutdown of the LHC, the ALICE experiment will be upgraded to make high-precision measurements of rare probes at low pT, which cannot be selected with a trigger and therefore require a very large sample of events recorded on tape. The online computing system will be completely redesigned to address the major challenge of sampling the full 50 kHz Pb-Pb interaction rate, increasing the present limit by a factor of 100. This upgrade will also include the continuous un-triggered read-out of two detectors, the ITS (Inner Tracking System) and the TPC (Time Projection Chamber), producing a sustained throughput of 1 TB/s. This unprecedented data rate will be reduced by adopting an entirely new strategy where calibration and reconstruction are performed online, and only the reconstruction results are stored while the raw data are discarded. This strategy, already demonstrated in production on the TPC data since 2011, will be optimized for the online usage of reconstruction algorithms. This implies a much tighter coupling between the online and offline computing systems. An R&D program has been set up to meet this huge challenge. The object of this paper is to present this program and its first results.
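
As a rough back-of-the-envelope reading of the numbers in the abstract, a 100-fold increase over the present limit implies that today's limit is of the order of 500 Hz, and 1 TB/s at 50 kHz corresponds to an average of about 20 MB per interaction. The short C++ sketch below only reproduces this arithmetic; it is not part of the O2 design.

```cpp
// Back-of-the-envelope reading of the rates quoted in the abstract; the
// inputs are taken from the text, the derived values are approximate.
#include <cstdio>

int main() {
  const double pbPbRate     = 50e3;   // targeted Pb-Pb interaction rate [Hz]
  const double rateIncrease = 100.0;  // factor over the present limit
  const double throughput   = 1e12;   // sustained read-out throughput [B/s]

  std::printf("present limit: about %.0f Hz\n", pbPbRate / rateIncrease);             // ~500 Hz
  std::printf("data per interaction: about %.0f MB\n", throughput / pbPbRate / 1e6);  // ~20 MB
  return 0;
}
```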


IEEE-NPSS Real-Time Conference | 2010

ALICE HLT high speed tracking and vertexing

S. Gorbunov; K. Aamodt; T. Alt; H. Appelshäuser; A. Arend; Bruce Becker; S. Böttger; T. Breitner; H. Büsching; S. Chattopadhyay; J. Cleymans; I. Das; Øystein Djuvsland; H. Erdal; R. Fearick; Ø. Haaland; P. T. Hille; S. Kalcher; K. Kanaki; U. Kebschull; I. Kisel; M. Kretz; C. Lara; S. Lindal; V. Lindenstruth; A. A. Masoodi; G. Øvrebekk; R. Panse; J. Peschek; M. Ploskon

The on-line event reconstruction in ALICE is performed by the High Level Trigger, which should process up to 2000 events per second in proton-proton collisions and up to 200 central events per second in heavy-ion collisions, corresponding to an input data stream of 30 GB/s.


Journal of Instrumentation | 2016

The ALICE high-level trigger read-out upgrade for LHC Run 2

H. Engel; T. Alt; T. Breitner; A. Gomez Ramirez; T. Kollegger; Mikolaj Krzewicki; J. Lehrbach; D. Rohr; U. Kebschull

The ALICE experiment uses an optical read-out protocol called the Detector Data Link (DDL) to connect the detectors with the computing clusters of the Data Acquisition (DAQ) and the High-Level Trigger (HLT). The interfaces of the clusters to these optical links are realized with FPGA-based PCI-Express boards. The High-Level Trigger is a computing cluster dedicated to the online reconstruction and compression of experimental data. It uses a combination of CPU, GPU and FPGA processing. For Run 2, the HLT has replaced all of its previous interface boards with the Common Read-Out Receiver Card (C-RORC) to enable read-out of detectors at high link rates and to extend the pre-processing capabilities of the cluster. The new hardware also comes with an increased link density that reduces the number of boards required. A modular firmware approach allows different processing and transport tasks to be built from the same source tree. A hardware pre-processing core performs cluster finding already in the C-RORC firmware. State-of-the-art interfaces and memory allocation schemes enable a transparent integration of the C-RORC into the existing HLT software infrastructure. Common cluster management and monitoring frameworks are used to also handle C-RORC metrics. The C-RORC has been in use in the clusters of ALICE DAQ and HLT since the start of LHC Run 2.

Collaboration


Top co-authors of T. Alt.

J. Alme (University of Bergen)
D. Adamová (Academy of Sciences of the Czech Republic)
B. Alessandro (Goethe University Frankfurt)
A. Alici (University of Copenhagen)
M. Agnello (Instituto Politécnico Nacional)
C. Andrei (Austrian Academy of Sciences)
I. Altsybeev (Saint Petersburg State University)
A. Alkin (National Academy of Sciences of Ukraine)