
Publication


Featured research published by A. Scheurer.


Journal of Physics: Conference Series | 2011

Dynamic Extensions of Batch Systems with Cloud Resources

T Hauth; Gunter Quast; M. Kunze; Volker Büge; A. Scheurer; C Baun

Compute clusters use batch systems such as the Portable Batch System (PBS) to distribute workload among the individual cluster machines. To extend standard batch systems to Cloud infrastructures, a new service monitors the number of queued jobs and keeps track of the price of available resources. This meta-scheduler dynamically adapts the number of Cloud worker nodes to the current requirement profile. Two different worker node topologies are presented and tested on the Amazon EC2 Cloud service.
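
The adaptation loop described in the abstract can be illustrated with a short, hypothetical Python sketch: it polls the batch queue via the PBS qstat command and uses boto3 to start or terminate Amazon EC2 worker nodes. The AMI, instance type, pool limit and the simple jobs-per-node scaling rule are illustrative assumptions, not the meta-scheduler actually presented in the paper.

```python
# Minimal meta-scheduler sketch (not the authors' implementation): poll the
# PBS queue and scale a pool of EC2 worker nodes up or down accordingly.
# AMI ID, instance type and thresholds are illustrative placeholders.
import subprocess
import time

import boto3

AMI_ID = "ami-0123456789abcdef0"   # hypothetical worker-node image
INSTANCE_TYPE = "m5.large"
JOBS_PER_NODE = 4                  # assumed scaling ratio
MAX_CLOUD_NODES = 20

ec2 = boto3.client("ec2")
cloud_nodes = []                   # instance IDs started by this scheduler


def queued_jobs() -> int:
    """Count jobs waiting in the queue via the PBS `qstat` command."""
    out = subprocess.run(["qstat"], capture_output=True, text=True).stdout
    return sum(1 for line in out.splitlines() if line.split()[-2:-1] == ["Q"])


def scale(target: int) -> None:
    """Start or terminate EC2 instances until the pool matches `target`."""
    while len(cloud_nodes) < min(target, MAX_CLOUD_NODES):
        resp = ec2.run_instances(ImageId=AMI_ID, InstanceType=INSTANCE_TYPE,
                                 MinCount=1, MaxCount=1)
        cloud_nodes.append(resp["Instances"][0]["InstanceId"])
    while len(cloud_nodes) > target:
        ec2.terminate_instances(InstanceIds=[cloud_nodes.pop()])


if __name__ == "__main__":
    while True:
        scale(queued_jobs() // JOBS_PER_NODE)
        time.sleep(60)   # re-evaluate the requirement profile every minute
```

A real deployment would additionally have to register the booted instances as worker nodes with the batch server before they can accept jobs; the paper further compares two such worker-node topologies.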


Journal of Physics: Conference Series | 2010

Integration of virtualized worker nodes in standard batch systems

Volker Büge; Hermann Hessling; Yves Kemp; M. Kunze; O. Oberst; Gunter Quast; A. Scheurer; Owen Synge

Current HEP experiments use only a limited number of operating system flavours, and their software might be validated on a single OS platform only. Resource providers, however, might prefer other operating systems for their batch infrastructure, especially if a cluster is shared with other communities or with communities that have stricter security requirements. One solution would be to statically divide the cluster into separate sub-clusters; in such a scenario no opportunistic distribution of the load can be achieved, resulting in poor overall utilization efficiency. Another approach is to make the batch system aware of virtualization and to provide each community with its favoured operating system in a virtual machine. Here the scheduler retains full flexibility, resulting in a better overall efficiency of the resources. In our contribution, we present a lightweight concept for the integration of virtual worker nodes into standard batch systems. The virtual machines are started on the worker nodes just before jobs are executed there; no meta-scheduling is introduced. We demonstrate two prototype implementations, one based on the Sun Grid Engine (SGE), the other using Maui/Torque as a batch system. Both solutions support local as well as Grid job submission. The hypervisors currently used are Xen and KVM; a port to other hypervisors is easily conceivable. To better handle the different virtual machines on a physical host, the management solution VmImageManager is being developed. We present first experience from running the two prototype implementations and, in the last part, show the potential future use of this lightweight concept when integrated into high-level (i.e. Grid) workflows.
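
As an illustration of the idea that virtual machines are started on the worker nodes just before jobs run there, the following hypothetical Python hook sketches a prologue/epilogue pair that drives a Xen or KVM guest through the libvirt virsh command line. It is not the SGE or Maui/Torque prototype, nor VmImageManager; the VM naming scheme and the wait logic are assumptions for illustration.

```python
# Illustrative prologue/epilogue sketch: boot the virtual machine requested
# by the job's community just before execution and shut it down afterwards,
# using the libvirt `virsh` CLI on the physical worker node.
import subprocess
import sys
import time


def virsh(*args: str) -> str:
    """Run a virsh command against the local hypervisor (Xen or KVM)."""
    return subprocess.run(["virsh", *args], capture_output=True,
                          text=True, check=True).stdout


def prologue(vm_name: str) -> None:
    """Start the community's VM and wait until it reports as running."""
    virsh("start", vm_name)
    while "running" not in virsh("domstate", vm_name):
        time.sleep(2)


def epilogue(vm_name: str) -> None:
    """Shut the VM down again once the job has finished."""
    virsh("shutdown", vm_name)


if __name__ == "__main__":
    # e.g. called by the batch system as: vm_hook.py prologue cms-sl5-node01
    action, vm = sys.argv[1], sys.argv[2]
    prologue(vm) if action == "prologue" else epilogue(vm)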


Journal of Physics: Conference Series | 2010

Site specific monitoring of multiple information systems – the HappyFace Project

Volker Büge; Viktor Mauch; Gunter Quast; A. Scheurer; Artem Trunov

Efficient administration of computing centres requires sophisticated tools for monitoring the local infrastructure. Sharing such resources in a grid infrastructure like the Worldwide LHC Computing Grid (WLCG) adds a large number of external monitoring systems that offer information on the status of the services and user jobs at a grid site. This flood of information from many different sources slows down the identification of problems and complicates local administration. In addition, the web interfaces for accessing the site-specific information are often slow and cumbersome to use. A meta-monitoring system that automatically queries the relevant monitoring systems can provide fast and convenient access to all important information for the local administrators. It also becomes feasible to correlate information from different sources and to give non-expert users easy access. In this paper, we describe the HappyFace Project, a modular software framework for this purpose. It queries existing monitoring sources and processes the results to provide a single point of entry for information on a grid site and its specific services.
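
The meta-monitoring approach can be sketched in a few lines of Python: small modules each query one external monitoring source and condense the answer into a rating, and a single entry point summarises all modules. The class names, URLs and rating logic below are hypothetical and do not reflect the actual HappyFace module API.

```python
# Minimal sketch of the meta-monitoring idea (the real HappyFace module API
# differs): each module queries one external monitoring source, extracts a
# status, and a single entry point summarises all modules for the site admin.
import json
import urllib.request


class MonitoringModule:
    def __init__(self, name: str, url: str):
        self.name = name
        self.url = url

    def fetch(self) -> dict:
        """Query the external monitoring source (assumed to return JSON)."""
        with urllib.request.urlopen(self.url, timeout=10) as resp:
            return json.load(resp)

    def rate(self, data: dict) -> float:
        """Condense the source's answer into a 0..1 site-health rating."""
        return 1.0 if data.get("status") == "ok" else 0.0


def site_summary(modules: list[MonitoringModule]) -> dict:
    """Single point of entry: collect the rating of every configured module."""
    summary = {}
    for mod in modules:
        try:
            summary[mod.name] = mod.rate(mod.fetch())
        except OSError:
            summary[mod.name] = 0.0   # unreachable source counts as a problem
    return summary


if __name__ == "__main__":
    modules = [
        MonitoringModule("dcache", "https://example.org/dcache/status.json"),
        MonitoringModule("batch", "https://example.org/batch/status.json"),
    ]
    print(site_summary(modules))
```

Correlating information from different sources then amounts to combining the per-module ratings in one place, which is what gives site administrators and non-expert users a single, fast overview.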


Journal of Physics: Conference Series | 2011

Virtualized Batch Worker Nodes: Conception and Integration in HPC Environments

A. Scheurer; O. Oberst; Volker Büge; Gunter Quast; M. Kunze

There are various use cases in which user environments have to be separated from the machine hardware, operating system or software setup provided on a cluster. To achieve this, a lightweight implementation of dynamically virtualized worker nodes in common batch systems is presented. In addition, we summarize experiences and performance measurements obtained by running such a virtualized cluster, shared between nine different departments of the Karlsruhe Institute of Technology.


Journal of Physics: Conference Series | 2010

Experience building and operating the CMS Tier-1 computing centres

M Albert; J. A. Bakken; D. Bonacorsi; C. Brew; C. Charlot; Chih-Hao Huang; D. Colling; C Dumitrescu; D Fagan; F. Fassi; I. Fisk; J. Flix; L Giacchetti; G Gomez-Ceballos; S. J. Gowdy; C. Grandi; O. Gutsche; K. A. Hahn; B. Holzman; J. Jackson; P. Kreuzer; C M Kuo; D. Mason; N Pukhaeva; G. Qin; Gunter Quast; P. Rossman; A Sartirana; A. Scheurer; G. Schott

The CMS Collaboration relies on 7 globally distributed Tier-1 computing centres located at large universities and national laboratories for a second custodial copy of the CMS RAW data and the primary copy of the simulated data, for data serving capacity to Tier-2 centres for analysis, and for the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging role in CMS because they are expected to ingest and archive data from both CERN and regional Tier-2 centres, while they export data to a global mesh of Tier-2s at rates comparable to the raw export rate from CERN. The combined capacity of the Tier-1 centres is more than twice the resources located at CERN, and efficiently utilizing these large distributed resources represents a challenge. In this article we discuss the experience of building, operating, and utilizing the CMS Tier-1 computing centres. We summarize the facility challenges at the Tier-1s, including the stable operation of CMS services, the ability to scale to large numbers of processing requests and large volumes of data, and the ability to provide custodial storage and high-performance data serving. We also present the operational experience of utilizing the distributed Tier-1 centres from a distance: transferring data, submitting data serving requests, and submitting batch processing requests.


Journal of Physics: Conference Series | 2011

The HappyFace Project

Viktor Mauch; C. Ay; Stefan Birkholz; Volker Büge; A. Burgmeier; Jorg Manfred Meyer; F. Nowak; A. Quadt; Gunter Quast; P. Sauerland; A. Scheurer; P. Schleper; H. Stadie; Oleg Tsigenov; Marian Zvada


German e-Science Conference | 2007

Challenges of the LHC Computing Grid by the CMS experiment

A. Scheurer; Michael Ernst; Alexander Flossdorf; Carsten Hof; Thomas Kress; Klaus Rabbertz; Gunter Quast

Collaboration


Dive into A. Scheurer's collaborations.

Top Co-Authors

Gunter Quast (Karlsruhe Institute of Technology)
Volker Büge (Karlsruhe Institute of Technology)
M. Kunze (Karlsruhe Institute of Technology)
O. Oberst (Karlsruhe Institute of Technology)
Viktor Mauch (Karlsruhe Institute of Technology)
A. Burgmeier (Karlsruhe Institute of Technology)
A. Quadt (University of Göttingen)
Artem Trunov (Karlsruhe Institute of Technology)
C Baun (Karlsruhe Institute of Technology)
C. Ay (University of Göttingen)