O. Oberst
Karlsruhe Institute of Technology
Publications
Featured research published by O. Oberst.
IEEE International Conference on High Performance Computing, Data and Analytics | 2006
Volker Büge; Yves Kemp; M. Kunze; O. Oberst; Gunter Quast
Computing clusters of High Energy Physics institutes at universities are often shared between different user groups, each having its own requirements concerning the computing infrastructure. These requirements can lead to incompatibilities between the needed operating systems, software packages, access policies or grid middlewares. Some of these incompatibilities can be solved by providing separate portal machines for each group. Incompatibilities at the level of the shared worker nodes of the cluster are, however, difficult to overcome. In this paper, an approach to overcoming this incompatibility using the virtualization technology Xen is presented. Each physical worker node hosts several virtual machines, one acting as a virtual worker node for each group supported at the site. Ways to integrate this into an existing batch queue are shown. The performance of different programs used in High Energy Physics on native and virtual machines is also presented.
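As a rough illustration of the per-group approach (a sketch of the general idea, not the authors' implementation), the following Python snippet boots the virtual worker node belonging to a job's user group through libvirt's Xen driver. The group-to-domain mapping and the domain names are illustrative assumptions.

    # Hypothetical sketch: boot the Xen virtual worker node assigned to a user group.
    # Domain names and the group mapping are illustrative assumptions.
    import libvirt

    GROUP_TO_DOMAIN = {
        "cms": "vwn-cms",      # virtual worker node with the CMS software environment
        "grid": "vwn-grid",    # virtual worker node for generic grid jobs
    }

    def boot_virtual_worker_node(group):
        conn = libvirt.open("xen:///system")        # connect to the local Xen hypervisor
        try:
            dom = conn.lookupByName(GROUP_TO_DOMAIN[group])
            if not dom.isActive():                  # start the domain only if it is not running yet
                dom.create()
            return dom.name()
        finally:
            conn.close()

    if __name__ == "__main__":
        print(boot_virtual_worker_node("cms"))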
Journal of Physics: Conference Series | 2010
Volker Büge; Hermann Hessling; Yves Kemp; M. Kunze; O. Oberst; Gunter Quast; A. Scheurer; Owen Synge
Current experiments in HEP use only a limited number of operating system flavours. Their software might only be validated on a single OS platform. Resource providers might prefer other operating systems for the installation of the batch infrastructure. This is especially the case if a cluster is shared with other communities, or with communities that have stricter security requirements. One solution would be to statically divide the cluster into separate sub-clusters. In such a scenario, no opportunistic distribution of the load can be achieved, resulting in poor overall utilization efficiency. Another approach is to make the batch system aware of virtualization and to provide each community with its favoured operating system in a virtual machine. Here, the scheduler has full flexibility, resulting in a better overall efficiency of the resources. In our contribution, we present a lightweight concept for the integration of virtual worker nodes into standard batch systems. The virtual machines are started on the worker nodes just before jobs are executed there; no meta-scheduling is introduced. We demonstrate two prototype implementations, one based on the Sun Grid Engine (SGE), the other using Maui/Torque as the batch system. Both solutions support local as well as Grid job submission. The hypervisors currently used are Xen and KVM; a port to another system is easily conceivable. To better handle the different virtual machines on a physical host, the management solution VmImageManager has been developed. We present first experiences from running the two prototype implementations. In the last part, we show the potential future use of this lightweight concept when integrated into high-level (i.e. Grid) workflows.
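The step "start the virtual machine just before the job runs" could, for example, live in a batch system prologue hook. The sketch below, written against the libvirt Python bindings for KVM, starts a predefined virtual machine and waits until its SSH port answers before the job would be handed over; the domain name, address and timeout are assumptions, and this is not the SGE/Maui-Torque prototype or VmImageManager code itself.

    # Hypothetical prologue-style hook: start a KVM virtual worker node and wait
    # until it is reachable before the batch job is executed inside it.
    import socket
    import time
    import libvirt

    def start_and_wait(domain_name="vwn-slc5", address="192.168.122.10", timeout=300):
        conn = libvirt.open("qemu:///system")           # local KVM/QEMU hypervisor
        try:
            dom = conn.lookupByName(domain_name)
            if not dom.isActive():
                dom.create()                            # boot the predefined virtual machine
            deadline = time.time() + timeout
            while time.time() < deadline:
                try:
                    with socket.create_connection((address, 22), timeout=5):
                        return True                     # SSH port open: VM ready for the job
                except OSError:
                    time.sleep(5)                       # not up yet, poll again
            return False
        finally:
            conn.close()

    if __name__ == "__main__":
        raise SystemExit(0 if start_and_wait() else 1)  # non-zero exit would hold the job back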
Journal of Physics: Conference Series | 2012
O. Oberst; Thomas Hauth; David Kernert; Stephan Riedel; Gunter Quast
The specific requirements concerning the software environment within the HEP community constrain the choice of resource providers for the outsourcing of computing infrastructure. The use of virtualization in HPC clusters and in the context of cloud resources has therefore been the subject of recent developments in scientific computing. The dynamic virtualization of worker nodes in common batch systems provided by ViBatch serves each user with a dynamically virtualized subset of worker nodes on a local cluster. This can now be transparently extended through common open-source cloud interfaces such as OpenNebula or Eucalyptus, launching a subset of the virtual worker nodes within the cloud. This paper demonstrates how a dynamically virtualized computing cluster is combined with cloud resources by attaching remotely started virtual worker nodes to the local batch system.
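As a rough illustration of attaching remotely started virtual worker nodes to a local batch system (the actual ViBatch cloud integration is not shown here), the sketch below instantiates a VM through the OpenNebula command line client and registers the resulting host with a Torque server. The template name, hostname and the availability of the onetemplate and qmgr commands are assumptions.

    # Hypothetical sketch: start a cloud VM via the OpenNebula CLI and attach it
    # to a local Torque batch server as an additional worker node.
    import subprocess

    def launch_cloud_worker(template="vwn-slc5-template", hostname="cloud-wn01"):
        # Instantiate a virtual machine from a prepared OpenNebula template
        # (assumes the OpenNebula CLI tools are installed and configured).
        subprocess.run(
            ["onetemplate", "instantiate", template, "--name", hostname],
            check=True,
        )
        # Register the new virtual worker node with the Torque server
        # (assumes qmgr is available and the caller has the required privileges).
        subprocess.run(
            ["qmgr", "-c", "create node {0}".format(hostname)],
            check=True,
        )

    if __name__ == "__main__":
        launch_cloud_worker()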
Journal of Physics: Conference Series | 2011
A. Scheurer; O. Oberst; Volker Büge; Gunter Quast; M. Kunze
There are various use cases in which one has to separate user environments from the machine hardware, operating system or software setup provided on a cluster. To achieve this, a lightweight implementation of dynamically virtualized worker nodes in common batch systems is presented, together with a summary of experiences and performance measurements obtained by running such a virtualized cluster, shared between nine different departments of the Karlsruhe Institute of Technology.
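In such a shared setup, a user could select the desired environment at submission time, for example via a node property understood by the batch system. The following sketch submits a Torque job requesting a hypothetical property that the virtualization layer would map to a particular VM image; the property name and that mapping are assumptions, not the interface described in the paper.

    # Hypothetical usage example: request a specific (virtualized) OS environment
    # through a Torque node property at submission time.
    import subprocess

    def submit_job(script="job.sh", os_property="sl5"):
        # "-l nodes=1:<property>" asks for one node carrying the given property;
        # mapping "sl5" to a virtual worker node image is an assumption.
        subprocess.run(
            ["qsub", "-l", "nodes=1:{0}".format(os_property), script],
            check=True,
        )

    if __name__ == "__main__":
        submit_job()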