Publications


Featured research published by Simon Spinner.


Performance Evaluation | 2015

Evaluating Approaches to Resource Demand Estimation

Simon Spinner; Giuliano Casale; Fabian Brosig; Samuel Kounev

Resource demands are key parameters of stochastic performance models that need to be determined when performing a quantitative performance analysis of a system. However, the direct measurement of resource demands is not feasible in most realistic systems. Therefore, statistical approaches that estimate resource demands based on coarse-grained monitoring data (e.g., CPU utilization and response times) have been proposed in the literature. These approaches have different assumptions and characteristics that need to be considered when estimating resource demands. This paper surveys the state of the art in resource demand estimation and proposes a classification scheme for estimation approaches. Furthermore, it contains an experimental evaluation comparing the impact of different factors (monitoring window size, number of workload classes, load level, collinearity, and model mismatch) on the estimation accuracy of seven different approaches. The classification scheme and the experimental comparison help performance engineers to select an approach to resource demand estimation that fulfills the requirements of a given analysis scenario.
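One of the simplest estimation approaches of the kind surveyed here is linear regression on the utilization law, U = Σ_c X_c · D_c, where X_c is the measured throughput of workload class c and D_c its unknown resource demand. The following is a minimal illustrative sketch (not the paper's implementation) for two workload classes, solved via the normal equations:

```python
# Sketch: estimate per-class resource demands from coarse monitoring
# data using the utilization law U = X1*D1 + X2*D2 and least squares.

def estimate_demands(throughputs, utilizations):
    """throughputs: list of (X1, X2) per interval; utilizations: list of U."""
    # Normal equations (A^T A) d = A^T u for two workload classes.
    s11 = sum(x[0] * x[0] for x in throughputs)
    s12 = sum(x[0] * x[1] for x in throughputs)
    s22 = sum(x[1] * x[1] for x in throughputs)
    b1 = sum(x[0] * u for x, u in zip(throughputs, utilizations))
    b2 = sum(x[1] * u for x, u in zip(throughputs, utilizations))
    det = s11 * s22 - s12 * s12
    d1 = (s22 * b1 - s12 * b2) / det
    d2 = (s11 * b2 - s12 * b1) / det
    return d1, d2

# Synthetic check: two classes with true demands of 20 ms and 50 ms.
X = [(5.0, 2.0), (10.0, 4.0), (3.0, 8.0), (7.0, 1.0), (2.0, 6.0)]
U = [0.02 * x1 + 0.05 * x2 for x1, x2 in X]
print(estimate_demands(X, U))  # approximately (0.02, 0.05)
```

With noisy utilization samples and more classes, the same least-squares formulation applies; the paper's evaluation examines exactly how factors such as collinearity between classes degrade this kind of estimate.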


Self-Adaptive and Self-Organizing Systems | 2014

Runtime Vertical Scaling of Virtualized Applications via Online Model Estimation

Simon Spinner; Samuel Kounev; Xiaoyun Zhu; Lei Lu; Mustafa Uysal; Anne Holler; Rean Griffith

Applications in virtualized data centers are often subject to Service Level Objectives (SLOs) regarding their performance (e.g., latency or throughput). In order to fulfill these SLOs, it is necessary to allocate sufficient resources of different types (CPU, memory, I/O, etc.) to an application. However, the relationship between the application performance and the resource allocation is complex and depends on multiple factors including application architecture, system configuration, and workload demands. In this paper, we present a model-based approach to ensure that the application performance meets the user-defined SLO efficiently by runtime vertical scaling (i.e., adding or removing resources) of individual virtual machines (VMs) running the application. A layered performance model describing the relationship between the resource allocation and the observed application performance is automatically extracted and updated online using resource demand estimation techniques. Such a model is then used in a feedback controller to dynamically adapt the number of virtual CPUs of individual VMs. We have implemented the controller on top of the VMware vSphere platform and evaluated it in a case study using a real-world email and groupware server. The experimental results show that our approach allows the managed application to achieve SLO satisfaction in spite of workload demand variation while avoiding oscillations commonly observed with state-of-the-art threshold-based controllers.
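The core idea of using a performance model inside the control loop can be illustrated with a deliberately simplified sketch (our illustration, not the paper's controller): given an estimated resource demand and the current arrival rate, pick the smallest vCPU count whose model-predicted latency meets the SLO, here using a basic M/M/1-style latency approximation:

```python
# Simplified model-based vertical-scaling step (illustrative only).
# Latency model: demand / (1 - utilization), utilization = demand * rate / vcpus.

def predicted_latency(vcpus, arrival_rate, demand):
    utilization = demand * arrival_rate / vcpus
    if utilization >= 1.0:
        return float("inf")  # saturated: latency grows without bound
    return demand / (1.0 - utilization)

def choose_vcpus(arrival_rate, demand, slo_latency, max_vcpus=16):
    # Smallest allocation whose predicted latency satisfies the SLO.
    for v in range(1, max_vcpus + 1):
        if predicted_latency(v, arrival_rate, demand) <= slo_latency:
            return v
    return max_vcpus

# 120 req/s, 20 ms demand per request, 50 ms latency SLO.
print(choose_vcpus(arrival_rate=120.0, demand=0.02, slo_latency=0.05))  # -> 4
```

Because the allocation is derived from a model rather than a utilization threshold, small load fluctuations around a threshold do not trigger the add/remove oscillations mentioned in the abstract.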


IEEE Transactions on Software Engineering | 2017

Model-Based Self-Aware Performance and Resource Management Using the Descartes Modeling Language

Nikolaus Huber; Fabian Brosig; Simon Spinner; Samuel Kounev; Manuel Bähr

Modern IT systems have increasingly distributed and dynamic architectures providing flexibility to adapt to changes in the environment and thus enabling higher resource efficiency. However, these benefits come at the cost of higher system complexity and dynamics. Thus, engineering systems that manage their end-to-end application performance and resource efficiency in an autonomic manner is a challenge. In this article, we present a holistic model-based approach for self-aware performance and resource management leveraging the Descartes Modeling Language (DML), an architecture-level modeling language for online performance and resource management. We propose a novel online performance prediction process that dynamically tailors the model solving depending on the requirements regarding accuracy and overhead. Using these prediction capabilities, we implement a generic model-based control loop for proactive system adaptation. We evaluate our model-based approach in the context of two representative case studies showing that with the proposed methods, significant resource efficiency gains can be achieved while maintaining performance requirements. These results represent the first end-to-end validation of our approach, demonstrating its potential for self-aware performance and resource management in the context of modern IT systems and infrastructures.


International Conference on Cloud Computing | 2015

Proactive Memory Scaling of Virtualized Applications

Simon Spinner; Nikolas Herbst; Samuel Kounev; Xiaoyun Zhu; Lei Lu; Mustafa Uysal; Rean Griffith

Enterprise applications in virtualized environments are often subject to time-varying workloads with multiple seasonal patterns and trends. In order to ensure quality of service for such applications while avoiding over-provisioning, resources need to be dynamically adapted to accommodate the current workload demands. Many memory-intensive applications are not suitable for the traditional horizontal scaling approach often used for runtime performance management, as it relies on complex and expensive state replication. On the other hand, vertical scaling of memory often requires a restart of the application. In this paper, we propose a proactive approach to memory scaling for virtualized applications. It uses statistical forecasting to predict the future workload and automatically reconfigure the memory size of an application's virtual machine. To this end, we propose an extended forecasting technique that leverages meta-knowledge, such as calendar information, to improve the forecast accuracy. In addition, we develop an application controller to adjust settings associated with application memory management during memory reconfiguration. Our evaluation using real-world traces shows that the forecast accuracy quantified with the MASE error metric can be improved by 11-59%. Furthermore, we demonstrate that the proactive approach can reduce the impact of reconfiguration on application availability by over 80% and significantly improve performance relative to a reactive controller.
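The MASE (mean absolute scaled error) metric used in this evaluation scales the forecast's mean absolute error by the in-sample error of a naive last-value forecast, so values below 1 mean the forecast beats the naive baseline. A minimal sketch of the standard definition (our illustration of the metric, not the paper's code):

```python
# MASE: mean absolute forecast error divided by the mean absolute
# one-step change of the training series (the naive forecast's error).

def mase(actual, forecast, training):
    mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    naive_mae = sum(abs(training[t] - training[t - 1])
                    for t in range(1, len(training))) / (len(training) - 1)
    return mae / naive_mae

# Hypothetical workload trace (requests/s) and a two-step-ahead forecast.
train = [100, 110, 105, 120, 115]
print(mase(actual=[118, 122], forecast=[120, 119], training=train))
```

An 11-59% improvement in MASE therefore translates directly into proportionally tighter workload predictions relative to the same naive baseline.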


International Conference on Performance Engineering | 2016

A Reference Architecture for Online Performance Model Extraction in Virtualized Environments

Simon Spinner; Jürgen Walter; Samuel Kounev

Performance models can support decisions throughout the life-cycle of a software system. However, the manual construction of such performance models is a complex and time-consuming task requiring deep system knowledge. Therefore, automatic approaches for creating and updating performance models of a running system are necessary. Existing work focuses on single aspects of model extraction or proposes approaches specifically designed for a certain technology stack. In virtualized environments, we often see different applications based on diverse technology stacks sharing the same infrastructure. In order to enable online performance model extraction in such environments, we describe a new reference architecture for integrating different specialized model extraction solutions.


Simulation Tools and Techniques for Communications, Networks and Systems | 2015

Parallel Simulation of Queueing Petri Nets

Jürgen Walter; Simon Spinner; Samuel Kounev

Queueing Petri Nets (QPNs) are a powerful formalism to model the performance of software systems. Such models can be solved using analytical or simulation techniques. Analytical techniques suffer from scalability issues, whereas simulation techniques often require very long simulation runs. Existing simulation techniques for QPNs are strictly sequential and cannot exploit the parallelism provided by modern multi-core processors. In this paper, we present an approach to parallel discrete-event simulation of QPNs using a conservative synchronization algorithm. We consider the spatial decomposition of QPNs as well as the lookahead calculation for different scheduling strategies. Additionally, we propose techniques to reduce the synchronization overhead when simulating performance models describing systems with open workloads. The approach is evaluated in three case studies using performance models of real-world software systems. We observe speedups between 1.9 and 2.5 for these case studies. We also assessed the maximum speedup that can be achieved with our approach using synthetic models.
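In conservative synchronization, each logical process may only execute events up to a "safe" horizon derived from its incoming channels and their lookahead, which is why the lookahead calculation mentioned above matters for speedup. A minimal sketch of this horizon computation under simplified assumptions (one last-message timestamp and one static lookahead per channel):

```python
# Conservative synchronization sketch: an LP may safely process events
# with timestamps up to min over incoming channels of
# (last received message time + that channel's lookahead).

def safe_horizon(channel_times, lookaheads):
    return min(t + la for t, la in zip(channel_times, lookaheads))

# Two upstream LPs: last messages at t=10 and t=12, lookaheads 3 and 1.
print(safe_horizon([10.0, 12.0], [3.0, 1.0]))  # -> 13.0
```

Larger lookahead values (derived, e.g., from minimum service times of a scheduling strategy) widen this horizon and reduce how often processes must block and synchronize.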


Electronic Notes in Theoretical Computer Science | 2016

Enabling Fluid Analysis for Queueing Petri Nets via Model Transformation

Christoph Müller; Piotr Rygielski; Simon Spinner; Samuel Kounev

Due to the growing size of modern IT systems, their performance analysis becomes an increasingly challenging task. Existing simulators are unable to analyze the behavior of large systems in a reasonable time, whereas analytical methods suffer from the state space explosion problem. Fluid analysis techniques can be used to approximate the solution of high-order Markov chain models, enabling time-efficient analysis of large performance models. In this paper, we describe a model-to-model transformation from queueing Petri nets (QPNs) into layered queueing networks (LQNs). The obtained LQN models can benefit from three existing solvers: LINE, LQNS, and LQSIM. LINE internally utilizes a fluid-limit approximation to speed up the solving process for large models. We present the incentives for developing the automated model-to-model transformation and present a systematic approach that we followed in its design. We demonstrate the transformation using representative examples. Finally, we evaluate and compare the performance predictions of existing analytical, simulation, and fluid analysis solvers. We analyze the solvers' limitations, solving time, and memory consumption.


Self-Aware Computing Systems | 2017

Run-Time Models for Online Performance and Resource Management in Data Centers

Simon Spinner; Antonio Filieri; Samuel Kounev; Martina Maggio; Anders Robertsson

In this chapter, we introduce run-time models that a system may use for self-aware performance and resource management during operation. We focus on models that have been successfully used at run-time by a system itself or a system controller to reason about resource allocations and performance management in an online setting. This chapter provides an overview of existing classes of run-time models, including statistical regression models, queueing networks, control-theoretical models, and descriptive models. This chapter contributes to the state of the art by creating a classification scheme, which we use to compare the different run-time model types. The aim of the scheme is to deepen the knowledge about the purpose, assumptions, and structure of each model class. We describe in detail two modeling case studies, chosen because they are considered representative of specific classes of models. The description shows how these models can be used in a self-aware system for performance and resource management.


Self-Aware Computing Systems | 2017

Online Learning of Run-Time Models for Performance and Resource Management in Data Centers

Jürgen Walter; Antinisca Di Marco; Simon Spinner; Paola Inverardi; Samuel Kounev

In this chapter, we explain how to extract and learn run-time models that a system can use for self-aware performance and resource management in data centers. We abstract from concrete formalisms and identify extraction aspects relevant to performance models. We categorize the learning aspects into: (i) model structure, (ii) model parametrization (estimation and calibration of model parameters), and (iii) model adaptation options (change point detection and run-time reconfiguration). The chapter identifies alternative approaches for the respective model aspects. The type and granularity of each aspect depend on the characteristics of the concrete performance model.


Self-Aware Computing Systems | 2017

Reference Scenarios for Self-aware Computing

Jeffrey O. Kephart; Martina Maggio; Ada Diaconescu; Holger Giese; Henry Hoffmann; Samuel Kounev; Anne Koziolek; Peter R. Lewis; Anders Robertsson; Simon Spinner

This chapter defines three reference scenarios to which other chapters may refer for the purpose of motivating and illustrating architectures, techniques, and methods consistently throughout the book. The reference scenarios cover a broad set of characteristics and issues that one may encounter in self-aware systems and represent a range of domains and a variety of scales and levels of complexity. The first scenario focuses on an adaptive sorting algorithm and exemplifies how a self-aware system may adapt to changes in the data on which it operates, the environment in which it executes, or the requirements or performance criteria to which it manages itself. The second focuses on self-aware multiagent applications running in a data center environment, allowing issues of collective behavior in cooperative and competitive self-aware systems to come to the fore. The third focuses on a cyber-physical system. It allows us to explore many of the same issues of system-level self-awareness that appear in the second scenario, but in a different context and at an even larger (potentially planetary) scale, where there is no single clear global objective.

Collaboration


Dive into Simon Spinner's collaborations.

Top Co-Authors

Anne Koziolek (Karlsruhe Institute of Technology)
Alexander Wert (Karlsruhe Institute of Technology)
Christoph Heger (Karlsruhe Institute of Technology)
Fabian Brosig (Karlsruhe Institute of Technology)