T. Hauth
Karlsruhe Institute of Technology
Publications
Featured research published by T. Hauth.
Journal of Physics: Conference Series | 2016
Konrad Meier; Georg Fleig; T. Hauth; Michael Janczyk; Gunter Quast; Dirk von Suchodoletz; Bernd Wiebelt
Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and SuperKEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, no static partitioning of the cluster into a physical and a virtualized segment is required in this hybrid setup. As a unique feature, the placement of the virtual machines on the cluster nodes is scheduled by Moab, and the job lifetime is coupled to the lifetime of the virtual machine. This allows for seamless integration with the jobs sent by other user groups and honors the fair-share policies of the cluster. The developed thin integration layer between OpenStack and Moab can be adapted to other batch servers and virtualization systems, making the concept also applicable to other cluster operators. This contribution will report on the concept and implementation of an OpenStack-virtualized cluster used for HEP workflows. While the full cluster will be installed in spring 2016, a test-bed setup with 800 cores has been used to study the overall system performance, and dedicated HEP jobs were run in a virtualized environment over many weeks. Furthermore, the dynamic integration of the virtualized worker nodes, depending on the workload at the institute's computing system, will be described.
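The coupling of batch job and virtual machine lifetime described above can be sketched with a short, hypothetical job wrapper. The snippet below uses the openstacksdk Python client; the cloud name, image, flavor and network names are illustrative placeholders and do not reflect the actual integration layer developed at EKP and Freiburg.

    # Hypothetical sketch of a Moab job wrapper: boot a VM with the HEP image,
    # keep the batch job alive for the lifetime of the VM, then clean up.
    import time
    import openstack

    conn = openstack.connect(cloud="freiburg-hpc")  # assumed clouds.yaml entry

    server = conn.compute.create_server(
        name="hep-worker",
        image_id=conn.compute.find_image("SL6-hep-worker").id,  # dedicated HEP image
        flavor_id=conn.compute.find_flavor("hep.16core").id,
        networks=[{"uuid": conn.network.find_network("cluster-net").id}],
    )
    conn.compute.wait_for_server(server)  # block until the VM is ACTIVE

    try:
        # The payload runs inside the VM; once it shuts the machine down,
        # the loop exits and the batch job slot is released.
        while conn.compute.get_server(server.id).status == "ACTIVE":
            time.sleep(60)
    finally:
        conn.compute.delete_server(server)

In such a scheme, Moab decides on which node the wrapper, and hence the virtual machine, is placed, so the fair-share accounting of the cluster remains intact.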
Journal of Physics: Conference Series | 2017
Nils Braun; T. Hauth; C. Pulvermacher; Martin Ritter
Today's analyses for high-energy physics (HEP) experiments involve processing a large amount of data with highly specialized algorithms. The contemporary workflow from recorded data to final results is based on the execution of small scripts – often written in Python or as ROOT macros which call complex compiled algorithms in the background – to perform fitting procedures and generate plots. During recent years, interactive programming environments such as Jupyter have become popular. Jupyter allows the development of Python-based applications, so-called notebooks, which bundle code, documentation and results, e.g. plots. Advantages over classical script-based approaches are the ability to recompute only parts of the analysis code, which allows for fast and iterative development, and a web-based user frontend, which can be hosted centrally and only requires a browser on the user side. In our novel approach, Python and Jupyter are tightly integrated into the Belle II Analysis Software Framework (basf2), currently being developed for the Belle II experiment in Japan. This allows code to be developed in Jupyter notebooks for every aspect of the event simulation, reconstruction and analysis chain. These interactive notebooks can be hosted as a centralized web service via JupyterHub with Docker and used by all scientists of the Belle II Collaboration. Because of its generality and encapsulation, the setup can easily be scaled to large installations.
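As a rough illustration of this interactive workflow, the following hypothetical notebook cell builds and runs a minimal basf2 processing path; the chosen modules and parameters are only an example and may differ between basf2 releases.

    # Hypothetical notebook cell: a minimal basf2 processing path.
    # Module names and parameters are illustrative and version-dependent.
    import basf2

    main = basf2.create_path()
    main.add_module("EventInfoSetter", evtNumList=[100])  # produce 100 empty events
    main.add_module("Progress")                           # report processing progress

    basf2.process(main)      # run the chain; log output appears inline in the notebook
    print(basf2.statistics)  # per-module timing summary

Because the notebook keeps the results of earlier cells in memory, fits and plots based on the processed events can be reworked without rerunning the full chain, which enables the fast, iterative development mode described above.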
21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015), Parts 1-9 | 2015
Thomas Kuhr; T. Hauth
Belle II is a next-generation B-factory experiment that will collect 50 times more data than its predecessor Belle. This requires not only a major upgrade of the detector hardware, but also of the simulation, reconstruction, and analysis software. The challenges of software development at Belle II and the tools and procedures to address them are reviewed in this article.
DPG Frühjahrstagung der Sektion Materie und Kosmos (SMuK), Fachverband Teilchenphysik, Würzburg, 19.-23. März 2018 | 2018
F. U. Bernlochner; T. Hauth; M. Heck; Felix Metzner; Eugenio Paoloni; Sebastian Racs
DPG Frühjahrstagung der Sektion Materie und Kosmos (SMuK), Fachverband Teilchenphysik, Würzburg, 19.-23. März 2018 | 2018
F. U. Bernlochner; Nils Braun; Michael Eliachevitch; T. Hauth; M. Heck
The European Physical Journal / Web of Conferences | 2017
Tadeas Bilka; Nils Braun; G. Casarosa; Oliver Frost; Rudolf Frühwirth; T. Hauth; M. Heck; Jakub Kandra; P. Kodys; P. Kvasnička; Jakob Lettenbichler; Thomas Lück; Thomas Madlener; Felix Metzner; Moritz Nadler; Benjamin Oberhof; Eugenio Paoloni; M. Prim; Martin Ritter; Tobias Schlüter; Michael Schnell; Bjoern Spruck; Viktor Trusov; Jonas Wagner; Christian Wessel; Michael Ziegler