
Publications

Featured research published by Nikolas Herbst.


International Conference on Performance Engineering | 2013

Self-adaptive workload classification and forecasting for proactive resource provisioning

Nikolas Herbst; Nikolaus Huber; Samuel Kounev; Erich Amrehn

As modern enterprise software systems become increasingly dynamic, workload forecasting techniques are gaining in importance as a foundation for online capacity planning and resource management. Time series analysis offers a broad spectrum of methods to calculate workload forecasts based on historical monitoring data. Related work in the field of workload forecasting mostly concentrates on evaluating specific methods and their individual optimisation potential or on predicting Quality-of-Service (QoS) metrics directly. As a basis, we present a survey on established forecasting methods of time series analysis concerning their benefits and drawbacks and group them according to their computational overheads. In this paper, we propose a novel self-adaptive approach that selects suitable forecasting methods for a given context based on a decision tree and direct feedback cycles, together with a corresponding implementation. The user needs to provide only their general forecasting objectives. In several experiments and case studies based on real-world workload traces, we show that our implementation of the approach provides continuous and reliable forecast results at run-time. The results of this extensive evaluation show that the relative error of the individual forecast points is significantly reduced compared to statically applied forecasting methods, e.g. on average by 37% in an exemplary scenario. In a case study, between 55% and 75% of the violations of a given service level agreement can be prevented by applying proactive resource provisioning based on the forecast results of our implementation.
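A minimal sketch of the selection idea described in the abstract: forecasting methods grouped by computational overhead, a decision-tree-like rule that picks a group from the user's objectives, and a feedback cycle that prefers the method with the smallest recent error. The method names, thresholds, and error measure are illustrative assumptions, not the paper's actual rules.

```python
# Hypothetical sketch: choose a forecasting method from overhead groups based
# on history length, allowed overhead, and recent forecast-error feedback.
from statistics import mean

METHOD_GROUPS = {
    "low_overhead": ["naive", "moving_average"],
    "medium_overhead": ["exponential_smoothing", "croston"],
    "high_overhead": ["arima", "tbats"],
}

def select_method(history_len, max_overhead, recent_errors):
    """Pick a forecasting method for the next forecast cycle."""
    # Decision-tree-like step: short histories only justify cheap methods.
    if history_len < 30 or max_overhead == "low":
        group = "low_overhead"
    elif history_len < 200 or max_overhead == "medium":
        group = "medium_overhead"
    else:
        group = "high_overhead"
    # Feedback step: among the group, prefer the method with the smallest
    # mean relative error observed in recent forecast/actual comparisons.
    candidates = METHOD_GROUPS[group]
    return min(candidates, key=lambda m: mean(recent_errors.get(m, [1.0])))

# Example: 120 monitoring samples, medium overhead allowed, some feedback data.
errors = {"exponential_smoothing": [0.12, 0.18], "croston": [0.40, 0.35]}
print(select_method(120, "medium", errors))  # -> exponential_smoothing
```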


Software Engineering for Adaptive and Self-Managing Systems | 2015

BUNGEE: an elasticity benchmark for self-adaptive IaaS cloud environments

Nikolas Herbst; Samuel Kounev; Andreas Weber; Henning Groenda

Today's infrastructure clouds provide resource elasticity (i.e., auto-scaling) mechanisms enabling self-adaptive resource provisioning to reflect variations in the load intensity over time. These mechanisms impact the application performance; however, their effect in specific situations is hard to quantify and compare. To evaluate the quality of elasticity mechanisms provided by different platforms and configurations, respective metrics and benchmarks are required. Existing metrics for elasticity only consider the time required to provision and deprovision resources or the cost impact of adaptations. Existing benchmarks lack the capability to handle open workloads with realistic load intensity profiles and do not explicitly distinguish between the performance exhibited by the provisioned underlying resources, on the one hand, and the quality of the elasticity mechanisms themselves, on the other hand. In this paper, we propose reliable metrics for quantifying the timing aspects and accuracy of elasticity. Based on these metrics, we propose a novel approach for benchmarking the elasticity of Infrastructure-as-a-Service (IaaS) cloud platforms independent of the performance exhibited by the provisioned underlying resources. We show that the proposed metrics provide a consistent ranking of elastic platforms on an ordinal scale. Finally, we present an extensive case study of real-world complexity demonstrating that the proposed approach is applicable in realistic scenarios and can cope with different levels of resource efficiency.
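The abstract distinguishes accuracy and timing aspects of elasticity. The following sketch illustrates the general shape of such metrics by comparing a demanded-resources curve with a supplied-resources curve sampled at equal intervals; the exact metric definitions and normalizations used in the paper may differ.

```python
# Illustrative elasticity metrics from demand vs. supply curves (assumed form).
def elasticity_metrics(demand, supply):
    assert len(demand) == len(supply)
    n = len(demand)
    under = [d - s for d, s in zip(demand, supply) if s < d]  # missing units
    over = [s - d for d, s in zip(demand, supply) if s > d]   # excess units
    return {
        "under_accuracy": sum(under) / n,     # avg. missing resource units
        "over_accuracy": sum(over) / n,       # avg. excess resource units
        "under_time_share": len(under) / n,   # share of time under-provisioned
        "over_time_share": len(over) / n,     # share of time over-provisioned
    }

demand = [2, 3, 5, 5, 4, 2]
supply = [2, 2, 4, 6, 6, 3]
print(elasticity_metrics(demand, supply))
```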


Proceedings of the Third International Workshop on Large-Scale Testing | 2014

Modeling variations in load intensity over time

Jóakim von Kistowski; Nikolas Herbst; Samuel Kounev

Today's software systems are expected to deliver reliable performance under highly variable load intensities while at the same time making efficient use of dynamically allocated resources. Conventional benchmarking frameworks provide limited support for emulating such highly variable and dynamic load profiles and workload scenarios. Industrial benchmarks typically use workloads with constant or stepwise increasing load intensity, or they simply replay recorded workload traces. Based on this observation, we identify the need for means of flexibly defining load profiles and address it by introducing two meta-models at different abstraction levels. At the lower abstraction level, the Descartes Load Intensity Meta-Model (DLIM) offers a structured and accessible way of describing the load intensity over time by editing and combining mathematical functions. The High-Level Descartes Load Intensity Meta-Model (HLDLIM) allows the description of load variations using a few defined parameters that characterize the seasonal patterns, trends, bursts and noise parts. We demonstrate that both meta-models are capable of capturing real-world load profiles with acceptable accuracy through comparison with a real-life trace.
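As a rough illustration of composing a load intensity profile from the kinds of parts HLDLIM describes (seasonal pattern, trend, bursts, noise), the sketch below combines simple mathematical functions additively. The concrete functions, parameters, and combination operators are assumptions for illustration, not the DLIM/HLDLIM meta-models themselves.

```python
# Assumed-form load intensity profile: seasonal + trend + bursts + noise.
import math
import random

def load_intensity(t, period=24.0, base=100.0, amplitude=40.0,
                   trend_per_unit=0.5, burst_times=(50.0,), burst_height=80.0,
                   burst_width=2.0, noise_sd=5.0):
    seasonal = base + amplitude * math.sin(2 * math.pi * t / period)
    trend = trend_per_unit * t
    burst = sum(burst_height * math.exp(-((t - b) / burst_width) ** 2)
                for b in burst_times)
    noise = random.gauss(0.0, noise_sd)
    return max(0.0, seasonal + trend + burst + noise)

# Example: a profile over four "days" of hourly arrival rates.
profile = [load_intensity(t) for t in range(0, 96)]
```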


Software Engineering for Adaptive and Self-Managing Systems | 2015

Modeling and extracting load intensity profiles

Jóakim von Kistowski; Nikolas Herbst; Daniel Zoller; Samuel Kounev; Andreas Hotho

Today's system developers and operators face the challenge of creating software systems that make efficient use of dynamically allocated resources under highly variable and dynamic load profiles, while at the same time delivering reliable performance. Benchmarking of systems under these constraints is difficult, as state-of-the-art benchmarking frameworks provide only limited support for emulating such dynamic and highly variable load profiles for the creation of realistic workload scenarios. Industrial benchmarks typically confine themselves to workloads with constant or stepwise increasing loads. Alternatively, they support replaying of recorded load traces. Statistical load intensity descriptions also do not sufficiently capture concrete load profile variations over time. To address these issues, we present the Descartes Load Intensity Model (DLIM). DLIM provides a modeling formalism for describing load intensity variations over time. A DLIM instance can be used as a compact representation of a recorded load intensity trace, providing a powerful tool for benchmarking and performance analysis. As manually obtaining DLIM instances can be time-consuming, we present three different automated extraction methods, which also help to enable autonomous system analysis for self-adaptive systems. Model expressiveness is validated using the presented extraction methods. Extracted DLIM instances exhibit a median modeling error of 12.4% on average over nine different real-world traces covering between two weeks and seven months. Additionally, the extraction methods perform orders of magnitude faster than existing time series decomposition approaches.
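To give a feel for the general idea behind extracting a load profile model from a recorded trace, the sketch below separates a trend via a moving average and then averages the de-trended values per position in the period to obtain a seasonal pattern. This is a generic decomposition for illustration only, not one of the three extraction methods the paper presents.

```python
# Generic trend/season decomposition of a load trace (illustrative only).
def extract_trend_and_season(trace, period):
    n = len(trace)
    half = period // 2
    # Centered moving average over roughly one period as the trend estimate.
    trend = []
    for i in range(n):
        window = trace[max(0, i - half):min(n, i + half + 1)]
        trend.append(sum(window) / len(window))
    # Average de-trended value per position in the period -> seasonal pattern.
    season = [0.0] * period
    counts = [0] * period
    for i, (x, tr) in enumerate(zip(trace, trend)):
        season[i % period] += x - tr
        counts[i % period] += 1
    season = [s / c for s, c in zip(season, counts)]
    return trend, season

trace = [10, 12, 20, 18, 11, 13, 22, 19, 12, 14, 24, 21]  # toy trace, period 4
trend, season = extract_trend_and_season(trace, period=4)
print(season)
```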


Proceedings of the 2nd International Workshop on Hot Topics in Cloud Service Scalability | 2014

Towards a Resource Elasticity Benchmark for Cloud Environments

Andreas Weber; Nikolas Herbst; Henning Groenda; Samuel Kounev

Auto-scaling features offered by today's cloud infrastructures provide increased flexibility, especially for customers that experience high variations in the load intensity over time. However, auto-scaling features introduce new system quality attributes when considering their accuracy, timing, and boundaries. Therefore, distinguishing between different offerings has become a complex task, as it is not yet supported by reliable metrics and measurement approaches. In this paper, we discuss shortcomings of existing approaches for measuring and evaluating elastic behavior and propose a novel benchmark methodology specifically designed for evaluating the elasticity aspects of modern cloud platforms. The benchmark is based on open workloads with realistic load variation profiles that are calibrated to induce identical resource demand variations independent of the underlying hardware performance. Furthermore, we propose new metrics that explicitly capture the accuracy of resource allocations and de-allocations, as well as the timing aspects of an auto-scaling mechanism.


International Conference on Performance Engineering | 2014

LIMBO: a tool for modeling variable load intensities

Jóakim von Kistowski; Nikolas Herbst; Samuel Kounev

Modern software systems are expected to deliver reliable performance under highly variable load intensities while at the same time making efficient use of dynamically allocated resources. Conventional benchmarking frameworks provide limited support for emulating such highly variable and dynamic load profiles and workload scenarios. Industrial benchmarks typically use workloads with constant or stepwise increasing load intensity, or they simply replay recorded workload traces. In this paper, we present LIMBO - an Eclipse-based tool for modeling variable load intensity profiles based on the Descartes Load Intensity Model as an underlying modeling formalism.


International Conference on Cloud Computing | 2015

Proactive Memory Scaling of Virtualized Applications

Simon Spinner; Nikolas Herbst; Samuel Kounev; Xiaoyun Zhu; Lei Lu; Mustafa Uysal; Rean Griffith

Enterprise applications in virtualized environments are often subject to time-varying workloads with multiple seasonal patterns and trends. In order to ensure quality of service for such applications while avoiding over-provisioning, resources need to be dynamically adapted to accommodate the current workload demands. Many memory-intensive applications are not suitable for the traditional horizontal scaling approach often used for runtime performance management, as it relies on complex and expensive state replication. On the other hand, vertical scaling of memory often requires a restart of the application. In this paper, we propose a proactive approach to memory scaling for virtualized applications. It uses statistical forecasting to predict the future workload and reconfigure the memory size of the virtual machine of an application automatically. To this end, we propose an extended forecasting technique that leverages meta-knowledge, such as calendar information, to improve the forecast accuracy. In addition, we develop an application controller to adjust settings associated with application memory management during memory reconfiguration. Our evaluation using real-world traces shows that the forecast accuracy quantified with the MASE error metric can be improved by 11 - 59%. Furthermore, we demonstrate that the proactive approach can reduce the impact of reconfiguration on application availability by over 80% and significantly improve performance relative to a reactive controller.
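The abstract quantifies forecast accuracy with the MASE error metric. The sketch below shows the standard definition of the Mean Absolute Scaled Error: the mean absolute forecast error scaled by the in-sample mean absolute error of a naive one-step forecast. The concrete history/evaluation split is an illustrative assumption.

```python
# Mean Absolute Scaled Error (MASE): values below 1 beat the naive forecast.
def mase(history, actual, forecast):
    naive_mae = sum(abs(history[i] - history[i - 1])
                    for i in range(1, len(history))) / (len(history) - 1)
    forecast_mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    return forecast_mae / naive_mae

history = [100, 110, 105, 120, 125]    # past observations used for fitting
actual = [130, 128]                    # observed future values
forecast = [127, 131]                  # predicted values
print(mase(history, actual, forecast)) # ~0.34 here, i.e. better than naive
```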


International Conference on Performance Engineering | 2017

Design and Evaluation of a Proactive, Application-Aware Auto-Scaler: Tutorial Paper

André Bauer; Nikolas Herbst; Samuel Kounev

Simple, threshold-based auto-scaling mechanisms, as mainly used in practice, offer no means to overcome resource provisioning delays and the non-linear scalability of a software service. In this tutorial paper, we guide the reader step-by-step through the design and evaluation of a proactive and application-aware auto-scaling mechanism. First, we introduce the building blocks for such an auto-scaling mechanism: (i) an on-demand arrival rate forecasting method, (ii) resource demand estimates at run-time, (iii) a descriptive and continuously updated performance model of the deployed software and (iv) an intelligent adaptation planner that incorporates a threshold-based mechanism as a fall-back. Second, we cover the auto-scaler evaluation steps: (i) the preparation steps, namely an application scenario and workload profile definition, and (ii) an automated scalability analysis. In step (iii), we show how representative and repeatable auto-scaler experiments can be conducted, and in step (iv) how the results can be analyzed with the help of elasticity and end-user metrics for a detailed and fair comparison of alternative auto-scaler mechanisms and their respective configurations, even across platforms. For the individual steps of constructing the auto-scaler building blocks and for their evaluation, we briefly introduce open-source tools available online at Descartes Tools (http://descartes.tools/).
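A compressed, hypothetical sketch of the kind of control loop the tutorial outlines: forecast the arrival rate, translate it into required instances via a resource-demand estimate and a target utilization, and fall back to a simple threshold rule when no forecast is available. All numbers, names, and the capacity model are illustrative assumptions, not the tutorial's concrete design.

```python
# Assumed-form proactive auto-scaler planning step with threshold fall-back.
import math

def plan_instances(forecast_rate, service_demand_s, target_util,
                   per_instance_capacity, current_instances, current_util,
                   upper_threshold=0.8, lower_threshold=0.3):
    if forecast_rate is not None:
        # Proactive path: instances needed so utilization stays at the target.
        required = forecast_rate * service_demand_s / target_util
        return max(1, math.ceil(required / per_instance_capacity))
    # Reactive fall-back: classic threshold rule on current utilization.
    if current_util > upper_threshold:
        return current_instances + 1
    if current_util < lower_threshold:
        return max(1, current_instances - 1)
    return current_instances

# Example: 200 req/s forecast, 20 ms service demand, 70% target utilization.
print(plan_instances(200.0, 0.02, 0.7, 1.0,
                     current_instances=4, current_util=0.75))  # -> 6
```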


International Conference on Autonomic Computing | 2017

Scalability Analysis of Cloud Software Services

Gunnar Brataas; Nikolas Herbst; Simon Ivansek; Jure Polutnik

Cloud computing theoretically offers its customers unlimited cloud resources. However, the scalability of software services is often limited by their underlying architecture. In contrast to current scalability analysis approaches, we make work parameters, quality thresholds, as well as the resource space explicit in a conceptually consistent set of equations. We propose two scalability metric functions based on these equations. The resource scalability metric function describes the relation between the capacity of the multi-tier cloud software service and its use of cloud resources, whereas the cost scalability metric function replaces cloud resources with cost. We validate our approach using the CloudStore application, which follows the TPC-W specification and represents an online book store. We have experimented with 21 different public Amazon Web Services configurations and two private OpenStack configurations.
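The following sketch only illustrates the general shape of a scalability metric function: for each resource configuration, the measured capacity (the highest load still meeting the quality thresholds) is related to the amount of resources and to cost. The paper's actual equations are more elaborate; the ratios below are illustrative assumptions.

```python
# Assumed-form scalability metric: relate measured capacity to resources/cost.
def scalability_function(configs):
    """configs: list of dicts with 'resources', 'cost', 'capacity' entries."""
    baseline = configs[0]
    rows = []
    for c in configs:
        rows.append({
            "resources": c["resources"],
            "capacity_per_resource": c["capacity"] / c["resources"],
            "capacity_per_cost": c["capacity"] / c["cost"],
            "speedup_vs_baseline": c["capacity"] / baseline["capacity"],
        })
    return rows

configs = [
    {"resources": 1, "cost": 0.10, "capacity": 100},
    {"resources": 2, "cost": 0.20, "capacity": 180},
    {"resources": 4, "cost": 0.40, "capacity": 300},
]
for row in scalability_function(configs):
    print(row)
```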


Proceedings of the 2nd International Workshop on Hot Topics in Cloud Service Scalability | 2014

Optimization Method for Request Admission Control to Guarantee Performance Isolation

Rouven Krebs; Philipp Schneider; Nikolas Herbst

Software-as-a-Service (SaaS) offerings often share one single application instance among different tenants to reduce costs. However, sharing potentially leads to undesired influence from one tenant on the performance observed by the others. Furthermore, providing one tenant additional resources to support its increasing demands, without increasing the performance of tenants who do not pay for it, is a major challenge. The application intentionally does not manage hardware resources, and the OS is not aware of application-level entities like tenants. Thus, it is difficult to control the performance of different tenants to keep them isolated. These problems gain importance as performance is one of the major obstacles for cloud customers. Existing work applies request-based admission control mechanisms, such as weighted round robin with an individual queue for each tenant, to control the share guaranteed for a tenant. However, the computation of the concrete weights for such an admission control is still challenging. In this paper, we present a fitness function and optimization approach reflecting various requirements from this field to compute proper weights, with the goal of ensuring isolated performance as a foundation to scale on a per-tenant basis.
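A minimal sketch of the admission-control setting described above: each tenant has its own request queue and a weight, and a weighted round-robin dispatcher admits requests in proportion to those weights. How the weights themselves are computed (the paper's fitness function and optimization) is not shown here; the class and example values are illustrative assumptions.

```python
# Weighted round-robin admission control with per-tenant queues (sketch).
from collections import deque

class WeightedRoundRobin:
    def __init__(self, weights):
        self.weights = weights                       # tenant -> integer weight
        self.queues = {t: deque() for t in weights}  # tenant -> request queue

    def enqueue(self, tenant, request):
        self.queues[tenant].append(request)

    def dispatch_round(self):
        """Admit up to `weight` queued requests per tenant in one round."""
        admitted = []
        for tenant, weight in self.weights.items():
            for _ in range(weight):
                if self.queues[tenant]:
                    admitted.append((tenant, self.queues[tenant].popleft()))
        return admitted

wrr = WeightedRoundRobin({"tenant_a": 3, "tenant_b": 1})
for i in range(5):
    wrr.enqueue("tenant_a", f"a{i}")
    wrr.enqueue("tenant_b", f"b{i}")
print(wrr.dispatch_round())  # tenant_a gets 3 slots, tenant_b gets 1
```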

Collaboration

Dive into Nikolas Herbst's collaborations.

Top Co-Authors

Alexandru Iosup
Delft University of Technology

Alexander Wert
Karlsruhe Institute of Technology

Anne Koziolek
Karlsruhe Institute of Technology

Christoph Heger
Karlsruhe Institute of Technology

Henning Groenda
Forschungszentrum Informatik