Sebastian Lehrig
University of Paderborn
Publications
Featured research published by Sebastian Lehrig.
International Conference on Performance Engineering | 2015
Matthias Becker; Sebastian Lehrig; Steffen Becker
In cloud computing, software architects develop systems for virtually unlimited resources that cloud providers account for on a pay-per-use basis. Elasticity management systems provision these resources autonomously to deal with changing workloads. Such changing workloads call for new objective metrics that allow architects to quantify quality properties like scalability, elasticity, and efficiency, e.g., for requirements/SLO engineering and software design analysis. In the literature, initial metrics for these properties have been proposed. However, current metrics lack a systematic derivation and assume knowledge of implementation details like resource handling; they are therefore inapplicable where such knowledge is unavailable. To address these shortcomings, this short paper derives metrics for scalability, elasticity, and efficiency properties of cloud computing systems using the goal question metric (GQM) method. Our derivation uses a running example that outlines characteristics of cloud computing systems. This example allows us to set up a systematic GQM plan and to derive an initial set of six new metrics. We particularly show that our GQM plan allows us to classify existing metrics.
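The paper's six GQM-derived metrics are defined in the paper itself; as an illustrative sketch only (names and formulas are our own simplification, not the paper's metrics), implementation-agnostic measures of this kind can be computed from paired traces of demanded versus provisioned resources:

```python
# Illustrative sketch: simplified quality measures for a cloud system,
# computed from time series of demanded vs. provisioned resources.
# These are NOT the six metrics derived in the paper; they only show
# the kind of implementation-agnostic measures a GQM plan can yield.

def under_provisioning_share(demand, supply):
    """Fraction of time steps in which supply fell short of demand."""
    shortfalls = sum(1 for d, s in zip(demand, supply) if s < d)
    return shortfalls / len(demand)

def mean_over_provisioning(demand, supply):
    """Average surplus of provisioned resources (an efficiency proxy)."""
    return sum(max(s - d, 0) for d, s in zip(demand, supply)) / len(demand)

demand = [2, 4, 8, 8, 4, 2]   # resource units the workload needs per step
supply = [2, 4, 6, 8, 6, 2]   # resource units actually provisioned

print(under_provisioning_share(demand, supply))  # share of under-provisioned steps
print(mean_over_provisioning(demand, supply))    # mean surplus in resource units
```

Both measures need only externally observable demand and supply traces, which is exactly the property the paper argues for: no knowledge of the provider's internal resource handling is required.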
Quality of Software Architectures | 2015
Sebastian Lehrig; Hendrik Eikerling; Steffen Becker
Context: In cloud computing, there is a multitude of definitions and metrics for scalability, elasticity, and efficiency. However, stakeholders have little guidance for choosing fitting definitions and metrics for these quality properties, thus leading to potential misunderstandings. For example, cloud consumers and providers cannot negotiate reliable and quantitative service level objectives directly understood by each stakeholder. Objectives: Therefore, we examine existing definitions and metrics for these quality properties from the viewpoint of cloud consumers, cloud providers, and software architects with regard to commonly used concepts. Methods: We execute a systematic literature review (SLR), reproducibly collecting common concepts in definitions and metrics for scalability, elasticity, and efficiency. As quality selection criteria, we assess whether existing literature differentiates the three properties, exemplifies metrics, and considers typical cloud characteristics and cloud roles. Results: Our SLR yields 418 initial results from which we select 20 for in-depth evaluation based on our quality selection criteria. In our evaluation, we recommend concepts, definitions, and metrics for each property. Conclusions: Software architects can use our recommendations to analyze the quality of cloud computing applications. Cloud providers and cloud consumers can specify service level objectives based on our metric suggestions.
International Conference on Performance Engineering | 2013
Gunnar Brataas; Erlend Stav; Sebastian Lehrig; Steffen Becker; Goran Kopčak; Darko Huljenic
This work-in-progress paper introduces the EU FP7 STREP CloudScale. The contribution of this paper is an overall description of CloudScale's engineering approach for the design and evolution of scalable cloud applications and services. An Electronic Health Record (EHR) system serves as a motivating scenario. The overall CloudScale method describes how CloudScale will identify and gradually solve scalability problems in existing applications. CloudScale will also enable the modelling of design alternatives and the analysis of their effect on scalability and cost. Best practices for scalability will further guide the design process. The CloudScale method is supported by three integrated tools and a scalability description modelling language. CloudScale will be validated in two case studies.
IEEE International Conference on Cloud Computing Technology and Science | 2017
Mariano Cecowski; Steffen Becker; Sebastian Lehrig
Cloud computing focuses on elasticity, i.e., providing constant quality of service independent of workload. For achieving elasticity, cloud computing applications utilize virtualized infrastructures, distributed platforms, and other software-as-a-service offerings. The surge of cloud computing applications responds to the ability of cloud computing environments to charge only for utilized resources while saving upfront costs (e.g., for buying and setting up infrastructure) and allowing for dynamic allocation of resources even in public-private hybrid scenarios. This chapter investigates the shift from classical three-tier web applications to such elastic cloud computing applications. After characterizing web applications, we describe cloud computing characteristics and derive how web applications can exploit them. Our results motivate novel requirements that have to be engineered and modeled, as further described in this chapter.
Archive | 2017
Steffen Becker; Gunnar Brataas; Sebastian Lehrig
When building IT systems today, developers face a set of challenges unknown a few years ago. Systems have to operate in a much more dynamic world, with users coming and going in an unpredictable manner. User counts have grown into the billions, and the Internet of Things will increase those numbers even further. Hence, building scalable systems that can cope with their dynamic environment has become a major success factor for most IT service providers. Those systems run on a vast amount of hardware and software resources offered by cloud providers. Therefore, this chapter gives an introduction to the world of cloud computing applications, the terminology and concepts used in this world, and the challenges developers face when building scalable cloud applications. Afterward, we outline our solution for engineering cloud computing applications at a very high level to give the reader a jump-start into the topic. This chapter is structured as follows. In Sect. 1.1 we sketch the world of cloud applications and motivate the need for engineering their scalability. For those who have not worked on a cloud system, we outline its characteristics in Sect. 1.2 and define its essential concepts in Sect. 1.3.
Quality of Software Architectures | 2016
Sebastian Lehrig; Steffen Becker
Context: Performance models allow software architects to conduct what-if analyses, e.g., to assess deployment scenarios regarding performance. While a typical scenario is the redeployment to Infrastructure-as-a-Service (IaaS) environments, there is currently no empirical evidence that architects can apply performance models in such scenarios for accurate performance analyses, nor how much effort this requires. Objectives: Therefore, we explore the applicability of software performance engineering for planning the redeployment of existing software applications to IaaS environments. Methods: We conduct a case study in which we apply performance engineering to redeploy a realistic existing application to IaaS environments. We select an online book shop implementation (CloudStore) as the existing application and engineer a corresponding Palladio performance model. Subsequently, we compare analysis results with measurements gathered from operating CloudStore within (I) a classical on-premise setup, (II) OpenStack, and (III) Amazon EC2. Results: Our case study shows that performance models have a relative accuracy error of less than 12% even for IaaS environments (scenarios (II) and (III)). For scenarios (II) and (III), we saved up to 98% of the model creation effort by reusing the model from scenario (I); we only re-calibrated the processing rates of CPUs within our deployment model. Conclusions: Software architects can plan redeployments by reusing performance models of their existing systems, thus with only minor effort. This is possible even for virtualized third-party environments like Amazon EC2.
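The reuse step described above, keeping the architectural model fixed and re-calibrating only CPU processing rates per environment, can be sketched with a toy analytical model. This is a deliberate simplification: Palladio models are far richer, and all rates below are hypothetical, not the study's measured values.

```python
# Sketch of the re-calibration idea: keep the performance model,
# swap only the CPU processing rate per deployment environment.
# An M/M/1 server stands in for the full Palladio model here;
# all numbers are hypothetical, not the paper's measurements.

def predicted_response_time(arrival_rate, cpu_rate, demand_per_request):
    """M/M/1 mean response time for requests issuing fixed CPU demand."""
    service_rate = cpu_rate / demand_per_request  # requests completed per second
    assert arrival_rate < service_rate, "model is only valid below saturation"
    return 1.0 / (service_rate - arrival_rate)

demand = 50.0        # CPU work units per request (from the calibrated model)
arrival_rate = 10.0  # requests per second

# Only this one parameter changes per environment (hypothetical rates):
cpu_rates = {"on-premise": 1000.0, "OpenStack": 800.0, "EC2": 900.0}

for env, rate in cpu_rates.items():
    print(env, predicted_response_time(arrival_rate, rate, demand))
```

The point of the sketch is the effort saving: the workload description and resource demands are reused unchanged, and a single per-environment parameter is measured and substituted.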
Future Generation Computer Systems | 2018
Sebastian Lehrig; Richard Torbjørn Sanders; Gunnar Brataas; Mariano Cecowski; Simon Ivansek; Jure Polutnik
This paper describes CloudStore, an open source application that lends itself to analyzing key characteristics of cloud computing platforms. Based on an earlier standard from transaction processing, it represents a simplified version of a typical e-commerce application: an electronic book store. We detail how a deployment on a popular public cloud offering can be instrumented to gain insight into system characteristics such as capacity, scalability, elasticity, and efficiency. Based on our insights, we create a CloudStore performance model that allows us to accurately predict such properties already at design time.
ACM Transactions on Autonomous and Adaptive Systems | 2017
Jóakim von Kistowski; Nikolas Herbst; Samuel Kounev; Henning Groenda; Christian Stier; Sebastian Lehrig
Today's system developers and operators face the challenge of creating software systems that make efficient use of dynamically allocated resources under highly variable and dynamic load profiles, while at the same time delivering reliable performance. Benchmarking systems under these constraints is difficult, as state-of-the-art benchmarking frameworks provide only limited support for emulating such dynamic and highly variable load profiles in realistic workload scenarios. Industrial benchmarks typically confine themselves to workloads with constant or stepwise increasing loads, or they support replaying recorded load traces. Statistical load intensity descriptions also fail to sufficiently capture concrete load profile variations over time. To address these issues, we present the Descartes Load Intensity Model (DLIM). DLIM provides a modeling formalism for describing load intensity variations over time. A DLIM instance can be used as a compact representation of a recorded load intensity trace, providing a powerful tool for benchmarking and performance analysis. As manually obtaining DLIM instances can be time-consuming, we present three different automated extraction methods, which also help to enable autonomous system analysis for self-adaptive systems. Model expressiveness is validated using the presented extraction methods. Extracted DLIM instances exhibit a median modeling error of 12.4% on average over nine different real-world traces covering between two weeks and seven months. Additionally, the extraction methods perform orders of magnitude faster than existing time series decomposition approaches.
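The core idea of describing load intensity as a composition of variation patterns over time can be illustrated with a small sketch. The function names and shapes below are our own simplification for illustration, not the DLIM metamodel itself:

```python
import math

# Sketch of the load-intensity idea: a request-arrival-rate function
# composed of seasonal, trend, and burst parts over time.
# The decomposition below is our simplification, not the DLIM metamodel.

def load_intensity(t_hours):
    seasonal = 100 + 80 * math.sin(2 * math.pi * t_hours / 24)  # daily cycle
    trend = 2 * t_hours                                         # slow overall growth
    burst = 300 if 11 <= (t_hours % 24) <= 13 else 0            # recurring midday peak
    return max(seasonal + trend + burst, 0)                     # requests per second

# Evaluate a few points of a hypothetical two-day profile:
for t in (0, 6, 12, 24, 36):
    print(t, round(load_intensity(t), 1))
```

Such a compact functional description replaces a long recorded trace; extraction then amounts to fitting the parameters of the seasonal, trend, and burst parts to an observed arrival-rate time series.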
International Conference on Performance Engineering | 2015
Sebastian Lehrig; Steffen Becker
In cloud computing, software engineers design systems for virtually unlimited resources that cloud providers account for on a pay-per-use basis. Elasticity management systems provision these resources autonomously to deal with changing workloads. Such workloads call for new objective metrics that allow engineers to quantify quality properties like scalability, elasticity, and efficiency. However, software engineers currently lack engineering methods that aid them in engineering their software regarding such properties. Therefore, the CloudScale project developed tools for such engineering tasks. These tools cover reverse engineering of architectural models from source code, editors for the manual design/adaptation of such models, as well as tools for analyzing modeled and operating software regarding scalability, elasticity, and efficiency. All tools are interconnected via ScaleDL, a common architectural language, and the CloudScale Method, which leads through the engineering process. In this tutorial, we execute our method step by step, briefly introducing every tool as well as ScaleDL.
Proceedings of the 1st International Workshop on Future of Software Architecture Design Assistants | 2015
Sebastian Lehrig; Steffen Becker
Software architects use so-called software architecture design assistants to get tool-based, (semi-)automated support in engineering software systems. Compared to manual engineering, the main promise of such a support is that architects can create high-quality architectural designs more efficiently. Yet, current practice in evaluating whether this promise is kept is based on case studies conducted by the original authors of respective design assistants. The downside of such evaluations is that they are neither generalizable to thirdparty software architects nor can be used for quantitative efficiency comparisons between competing design assistants. To tackle this problem, we investigate how researchers can apply controlled experiments for evaluating the impact of software architecture design assistants on the efficiency of architects. For our investigation, we survey related controlled experiments. Based on this survey, we derive lessons learned in terms of best practices and challenges for such experiments.