
Publication


Featured research published by Lars Nagel.


IEEE/ACM International Conference on Utility and Cloud Computing | 2013

Scalable Monitoring System for Clouds

André Brinkmann; Christoph Fiehe; Anna Litvina; Ingo Lück; Lars Nagel; Krishnaprasad Narayanan; Florian Ostermair; Wolfgang Thronicke

Although cloud computing has become an important topic over the last couple of years, the development of cloud-specific monitoring systems has been neglected. This is surprising considering their importance for metering services and, thus, for being able to charge customers. In this paper we introduce a monitoring architecture that was developed and is currently being implemented in the EASI-CLOUDS project. The demands on cloud monitoring systems are manifold. Regular checks of the SLAs and precise billing of resource usage, for instance, require the collection and conversion of infrastructure readings at short intervals. To ensure the scalability of the whole cloud, the monitoring system must scale well without wasting resources. In our approach, the monitoring data is therefore organized in a distributed and easily scalable tree structure based on the Device Management Specification of the OMA and the DMT Admin Specification of the OSGi. Its core component comprises the interface, the root of the tree and extension points for subtrees which are implemented and locally managed by the data suppliers themselves. In spite of the variety and the distribution of the data, access to it is generic and location-transparent. Besides simple suppliers of monitoring data, we outline a component that provides the means for storing and preprocessing data. The motivation for this component is that the monitoring system can be adjusted to its subscribers, while it usually is the other way round. In EASI-CLOUDS, the so-called Context Stores aggregate and prepare data for billing and other cloud components.
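
The supplier-extensible tree described above can be sketched roughly as follows. This is a minimal illustration, not the EASI-CLOUDS implementation; all class, method, and path names are hypothetical:

```python
# Hypothetical sketch of a monitoring tree with supplier-managed subtrees,
# loosely following the DMT-style architecture described in the abstract.

class MonitoringTree:
    def __init__(self):
        self._mounts = {}  # mount path -> supplier callback

    def mount(self, path, supplier):
        """Register a data supplier for a subtree (an extension point)."""
        self._mounts[path] = supplier

    def read(self, path):
        """Generic, location-transparent access: dispatch to the supplier
        whose mount point is the longest prefix of the requested path."""
        best = None
        for mount in self._mounts:
            if path == mount or path.startswith(mount + "/"):
                if best is None or len(mount) > len(best):
                    best = mount
        if best is None:
            raise KeyError(path)
        relative = path[len(best):].lstrip("/")
        return self._mounts[best](relative)

tree = MonitoringTree()
# A supplier manages its own subtree; the caller needs no location knowledge.
tree.mount("/cloud/node1/cpu", lambda rel: 0.42 if rel == "load" else None)
load = tree.read("/cloud/node1/cpu/load")  # -> 0.42
```

The longest-prefix dispatch is what makes access uniform: subscribers address a single logical tree while subtrees remain locally managed by their suppliers.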


Proceedings of the 2nd International Workshop on CrossCloud Systems | 2014

FederatedCloudSim: a SLA-aware federated cloud simulation framework

Andreas Kohne; Marc Spohr; Lars Nagel; Olaf Spinczyk

Recent developments show that the standardization of cloud service descriptions and exchange paves the way for the rise of cloud federations. In these federations, CSPs (cloud service providers) can use the resources of other CSPs when local resources are lacking, or they can add remote services to their catalogue. Cloud federations in turn demand cloud brokers which offer the resources and services of different CSPs transparently to the users. Researching cloud federations in the real world is complex and expensive, as distributed hardware and software scenarios are needed. We therefore present FederatedCloudSim, a flexible cloud simulation framework that can simulate many federated cloud scenarios while respecting SLAs (service level agreements). In this paper we also present first simulation results and explain different simulation scenarios.


European Conference on Parallel Processing | 2014

Migration Techniques in HPC Environments

Simon Pickartz; Ramy Gad; Stefan Lankes; Lars Nagel; Tim Süß; André Brinkmann; Stephan Krempel

Process migration is an important feature in modern computing centers as it allows for a more efficient use and maintenance of hardware. Especially in virtualized infrastructures it is successfully exploited by schemes for load balancing and energy efficiency. One can divide the tools and techniques into three groups: Process-level migration, virtual machine migration, and container-based migration.


ACM Symposium on Parallel Algorithms and Architectures | 2014

Scheduling shared continuous resources on many-cores

André Brinkmann; Peter Kling; Friedhelm Meyer auf der Heide; Lars Nagel; Sören Riechers; Tim Süß

We consider the problem of scheduling a number of jobs on m identical processors sharing a continuously divisible resource. Each job j comes with a resource requirement r_j ∈ [0,1]. The job can be processed at full speed if granted its full resource requirement; if receiving only an x-portion of r_j, it is processed at an x-fraction of the full speed. Our goal is to find a resource assignment that minimizes the makespan (i.e., the latest completion time). Variants of such problems, relating the resource assignment of jobs to their processing speeds, have been studied under the term discrete-continuous scheduling. Known results are either very pessimistic or heuristic in nature. In this paper, we suggest and analyze a slightly simplified model. It focuses on the assignment of the shared continuous resource to the processors; the assignment of jobs to processors and the ordering of the jobs are already fixed. It is shown that, even for unit-size jobs, finding an optimal solution is NP-hard if the number of processors is part of the input. Positive results for unit-size jobs include an efficient optimal algorithm for 2 processors. Moreover, we prove that balanced schedules yield a (2 − 1/m)-approximation for a fixed number of processors. Such schedules are computed by our GreedyBalance algorithm, for which the bound is tight.
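
One natural way to split a continuously divisible resource among the jobs currently running, capping each at its requirement r_j, is a water-filling assignment. The sketch below illustrates that splitting step only; it is not the paper's GreedyBalance algorithm, and the function name is ours:

```python
def balanced_allocation(reqs, total=1.0):
    """Water-filling split of a continuously divisible resource among
    active jobs: raise a common level, capping each job j at its
    requirement reqs[j], and redistribute leftover resource."""
    remaining = total
    alloc = [0.0] * len(reqs)
    active = list(range(len(reqs)))
    while active and remaining > 1e-12:
        share = remaining / len(active)
        newly_capped = []
        for j in active:
            give = min(share, reqs[j] - alloc[j])  # never exceed r_j
            alloc[j] += give
            remaining -= give
            if reqs[j] - alloc[j] < 1e-12:
                newly_capped.append(j)
        active = [j for j in active if j not in newly_capped]
        if not newly_capped and share > 0:
            break  # everyone took a full share; resource is exhausted
    return alloc

# Three jobs with requirements 0.2, 0.5 and 0.9 share one resource unit:
# the small job is fully satisfied, the leftover is split evenly.
alloc = balanced_allocation([0.2, 0.5, 0.9])  # ~[0.2, 0.4, 0.4]
```

Under the model above, a job granted x ≤ r_j runs at speed x/r_j, so the makespan is driven by how this split evolves as jobs complete.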


International Workshop on Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM) | 2012

Multiple-Choice Balanced Allocation in (Almost) Parallel

Petra Berenbrink; Artur Czumaj; Matthias Englert; Thomas Friedetzky; Lars Nagel

We consider the problem of resource allocation in a parallel environment where new incoming resources are arriving online in groups or batches.


Computer Communications | 2015

Smart grid-aware scheduling in data centres

Markus Mäsker; Lars Nagel; André Brinkmann; Foad Lotfifar; Matthew Johnson

In several countries the expansion and establishment of renewable energies result in widely scattered and often weather-dependent energy production, decoupled from energy demand. Large, fossil-fuelled power plants are gradually replaced by many small power stations that transform wind, solar and water power into electrical power. This leads to changes in the historically evolved power grid, which favours top-down energy distribution from a backbone of large power plants to widespread consumers. Now, with the increase of energy production in lower layers of the grid, there is also a bottom-up flow in the grid infrastructure, compromising its stability. In order to locally adapt the energy demand to the production, some countries have started to establish Smart Grids to incentivise customers to consume energy when it is generated. This paper investigates how data centres can benefit from variable energy prices in Smart Grids. In view of their low average utilisation, data centre providers can schedule the workload depending on the energy price. We consider a scenario for a data centre in Paderborn, Germany, hosting a large share of interruptible and migratable computing jobs. We suggest and compare two scheduling strategies for minimising energy costs. The first one merely uses current values from the Smart Meter to place the jobs, while the other one also estimates the future energy price in the grid based on weather forecasts. In spite of the complexity of the prediction problem and the inaccuracy of the weather data, both strategies perform well and have a strong positive effect on the utilisation of renewable energy and on the reduction of energy costs. Our experiments and cost analysis show that our simple-to-apply low-cost strategies reach utilisations and savings close to the optimum.
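
The core idea of price-aware placement of interruptible jobs can be sketched as a toy greedy rule: run work in hours whose price is below a threshold, falling back to expensive hours only when the deadline forces it. This is an illustration under assumed prices, not the paper's scheduler:

```python
def schedule_cost(prices, job_hours, threshold):
    """Cost of greedily placing job_hours of interruptible work:
    run in an hour if its price is below the threshold, or if the
    remaining slots are just enough to finish the work at all."""
    cost, remaining = 0.0, job_hours
    for hour, price in enumerate(prices):
        if remaining == 0:
            break
        must_run = len(prices) - hour <= remaining  # deadline pressure
        if price <= threshold or must_run:
            cost += price
            remaining -= 1
    return cost

prices = [30, 12, 10, 28, 9, 35]   # EUR/MWh, purely illustrative values
naive = sum(prices[:3])            # run 3 job-hours immediately -> 52
smart = schedule_cost(prices, 3, threshold=15)  # hours 1, 2, 4 -> 31
```

The paper's second strategy would additionally replace the fixed price series with a forecast derived from weather data.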


ACM Symposium on Parallel Algorithms and Architectures | 2010

Balls into bins with related random choices

Petra Berenbrink; André Brinkmann; Tom Friedetzky; Lars Nagel

We consider a variation of classical balls-into-bins games. We randomly allocate m balls into n bins. Following Godfrey's model [6], we assume that each ball i comes with a β-balanced set of clusters of bins B_i = {B_i^(1), ..., B_i^(s_i)}. The condition of β-balancedness essentially enforces a uniform-like selection of bins, where the parameter β governs the deviation from uniformity. We use a more relaxed notion of balancedness than [6], and also generalise the concept to deterministic balancedness. Each ball i = 1, ..., m, in turn, runs the following protocol: (i) it chooses a cluster of bins B ∈ B_i i.u.r. (independently and uniformly at random), and (ii) it chooses one of the empty bins in B i.u.r. and allocates itself to it. Should the cluster not contain a single empty bin, the protocol fails. If the protocol terminates successfully, that is, every ball has indeed been able to find at least one empty bin in its chosen cluster, then this obviously results in a maximum load of one. The main goal is to find a tight bound on the maximum number of balls, m, such that the protocol terminates successfully (with high probability). We improve on Godfrey's result and show m = n/Θ(β). This upper bound holds for all mentioned types of balancedness. It even holds when we generalise the model by allowing runs. In this extended model, motivated by P2P networks, each ball i tosses a coin, and with constant probability p_i (0 < p_i ≤ 1) it runs the protocol as described above, but with the remaining probability it copies the previous ball's choice, that is, it re-uses the cluster of bins chosen by ball i−1.
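
The protocol itself is easy to simulate. The sketch below uses uniformly random fixed-size clusters as a simple stand-in for the paper's β-balanced cluster families:

```python
import random

def run_protocol(n, m, cluster_size, rng):
    """Each ball picks a uniformly random cluster of bins, then a random
    empty bin inside it; the protocol fails if the cluster is full.
    (Uniform clusters stand in here for beta-balanced ones.)"""
    empty = [True] * n
    for _ in range(m):
        cluster = rng.sample(range(n), cluster_size)
        free = [b for b in cluster if empty[b]]
        if not free:
            return False          # chosen cluster has no empty bin
        empty[rng.choice(free)] = False
    return True                   # every ball placed: maximum load one

# If m <= cluster_size, every cluster must still contain an empty bin,
# so the protocol always succeeds; with m > n it must eventually fail.
ok = run_protocol(50, 5, 5, random.Random(1))
```

Varying m, n, and the cluster structure in such a simulation is a quick way to see the success threshold the paper bounds analytically.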


IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum | 2013

Distributing Storage in Cloud Environments

Petra Berenbrink; André Brinkmann; Tom Friedetzky; Dirk Meister; Lars Nagel

Cloud computing has a major impact on today's IT strategies. Outsourcing applications from IT departments to the cloud relieves users from building big infrastructures as well as the corresponding expertise, and allows them to focus on their main competences and businesses. One of the main hurdles of cloud computing is that not only the application, but also the data has to be moved to the cloud. Networking speed severely limits the amount of data that can travel between the cloud and the user, between different sites of the same cloud provider, or indeed between different cloud providers. It is therefore important to keep applications near the data itself. This paper investigates how load balancing of the computational resources and data locality can be maintained at the same time. We apply recent results from balls-into-bins theory to test their applicability to cloud storage environments. We show that it is possible to both balance the load nearly perfectly and to keep the data close to its origin. The results are based on theoretical analyses and simulation of the underlying physical infrastructure of the Internet.


International Conference on Cluster Computing | 2016

Deduplication Potential of HPC Applications’ Checkpoints

Jürgen Kaiser; Ramy Gad; Tim Süß; Federico Padua; Lars Nagel; André Brinkmann

HPC systems contain an increasing number of components, decreasing the mean time between failures. Checkpoint mechanisms help to overcome such failures for long-running applications. A viable solution to remove the resulting pressure from the I/O backends is to deduplicate the checkpoints. However, there is little knowledge about the potential to save I/Os for HPC applications by using deduplication within the checkpointing process. In this paper, we perform a broad study about the deduplication behavior of HPC application checkpointing and its impact on system design.
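
The deduplication potential the study measures can be approximated with fixed-size chunking and hashing, as in the sketch below. This is a simplified stand-in (real deduplication systems often use content-defined chunking), and the function name is ours:

```python
import hashlib

def dedup_ratio(checkpoints, chunk_size=4096):
    """Estimate deduplication potential with fixed-size chunking:
    the fraction of chunks that duplicate an already-stored chunk."""
    seen, total, unique = set(), 0, 0
    for data in checkpoints:
        for off in range(0, len(data), chunk_size):
            chunk = data[off:off + chunk_size]
            total += 1
            digest = hashlib.sha256(chunk).digest()
            if digest not in seen:
                seen.add(digest)
                unique += 1
    return 1 - unique / total if total else 0.0

# Successive checkpoints of the same application often share most chunks:
a = bytes(8192)                   # first checkpoint: two zero chunks
b = bytes(4096) + b"x" * 4096     # second: one shared chunk, one new
ratio = dedup_ratio([a, b])       # 2 of 4 chunks are duplicates -> 0.5
```

Only unique chunks would have to reach the I/O backend, which is where the pressure relief described in the abstract comes from.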


European Conference on Computer Systems | 2016

Evaluation of SLA-based decision strategies for VM scheduling in cloud data centers

Andreas Kohne; Damian Pasternak; Lars Nagel; Olaf Spinczyk

Service level agreements (SLAs) are gaining more and more importance in the area of cloud computing. An SLA is a contract between a customer and a cloud service provider (CSP) in which the CSP guarantees functional and non-functional quality of service parameters for cloud services. Since CSPs have to pay for the hardware used as well as penalties for violating SLAs, they are eager to fulfill these agreements while at the same time optimizing the utilization of their resources. In this paper we examine SLA-aware VM scheduling strategies for cloud data centers. The service level objectives considered are resource usage and availability. The sample resources are CPU and RAM. They can be overprovisioned by the CSPs, which is their main lever for increasing revenue. The availability of a VM is affected by migrating it within and between data centers. To get realistic results, we simulate the effect of the strategies using the FederatedCloudSim framework and real-world workload traces of business-critical VMs. Our evaluation shows that there are considerable differences between the scheduling strategies in terms of SLA violations and the number of migrations. Of all strategies considered, the combination of the Minimization of Migrations strategy for VM selection and the Worst Fit strategy for host selection achieves the best results.
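
Worst Fit host selection, the host-side half of the winning combination, can be sketched as follows. The host representation here is hypothetical and not the FederatedCloudSim API:

```python
# Illustrative sketch of Worst Fit host selection: place a VM on the
# host with the most free capacity that can still hold it, keeping
# load even and leaving headroom before overprovisioning causes
# SLA violations.

def worst_fit(hosts, vm_demand):
    """hosts: list of (capacity, used) pairs. Returns the index of the
    host with the largest free capacity that fits the VM, or None."""
    best, best_free = None, -1.0
    for i, (capacity, used) in enumerate(hosts):
        free = capacity - used
        if free >= vm_demand and free > best_free:
            best, best_free = i, free
    return best

hosts = [(100, 80), (100, 40), (100, 95)]  # (capacity, used) per host
target = worst_fit(hosts, 30)              # host 1 has the most headroom
```

The VM-side Minimization of Migrations strategy would then choose, among the VMs on an overloaded host, a set whose removal resolves the overload with as few migrations as possible.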

Collaboration


Dive into Lars Nagel's collaborations.

Top Co-Authors

Olaf Spinczyk

Technical University of Dortmund
