Network


Christian Stier's latest external collaborations at the country level.

Hotspot


The research topics in which Christian Stier is active.

Publication


Featured research published by Christian Stier.


IEEE International Conference on Cloud Computing Technology and Science | 2014

The CACTOS Vision of Context-Aware Cloud Topology Optimization and Simulation

Per-Olov Östberg; Henning Groenda; Stefan Wesner; James Byrne; Dimitrios S. Nikolopoulos; Craig Sheridan; Jakub Krzywda; Ahmed Ali-Eldin; Johan Tordsson; Erik Elmroth; Christian Stier; Klaus Krogmann; Jörg Domaschka; Christopher B. Hauser; Peter J. Byrne; Sergej Svorobej; Barry McCollum; Zafeirios Papazachos; Darren Whigham; Stephan Ruth; Dragana Paurevic

Recent advances in hardware development coupled with the rapid adoption and broad applicability of cloud computing have introduced widespread heterogeneity in data centers, significantly complicating the management of cloud applications and data center resources. This paper presents the CACTOS approach to cloud infrastructure automation and optimization, which addresses heterogeneity through a combination of in-depth analysis of application behavior with insights from commercial cloud providers. The aim of the approach is threefold: to model applications and data center resources, to simulate applications and resources for planning and operation, and to optimize application deployment and resource use in an autonomic manner. The approach is based on case studies from the areas of business analytics, enterprise applications, and scientific computing.


European Conference on Software Architecture | 2015

Model-Based Energy Efficiency Analysis of Software Architectures

Christian Stier; Anne Koziolek; Henning Groenda; Ralf H. Reussner

Design-time quality analysis of software architectures evaluates the impact of design decisions on quality dimensions such as performance. Architectural design decisions decisively impact the energy efficiency (EE) of software systems. Low EE not only results in higher operational cost due to power consumption; it also indirectly necessitates additional capacity in the power distribution infrastructure of the target deployment environment. Methodologies that analyze the EE of software systems have yet to reach an abstraction level suited for architecture-level reasoning. This paper outlines a model-based approach for evaluating the EE of software architectures. First, we present a model that describes the central power consumption characteristics of a software system. We couple the model with an existing model-based performance prediction approach to evaluate the consumption characteristics of a software architecture in varying usage contexts. Several experiments show the accuracy of our architecture-level consumption predictions. Energy consumption predictions reach an error of less than 5.5% for stable and 3.7% for varying workloads. Finally, we present a round-trip design scenario that illustrates how the explicit consideration of EE supports software architects in making informed trade-off decisions between performance and EE.
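
To make the coupling of power model and performance prediction concrete, here is a minimal sketch assuming the common linear utilization-based power model; the paper's actual power model and the parameter values below (p_idle, p_busy) are illustrative assumptions, not taken from the source.

```python
# A minimal sketch, assuming a linear utilization-based power model.
# p_idle and p_busy are hypothetical values, not the paper's parameters.

def power_draw(utilization: float, p_idle: float = 70.0, p_busy: float = 250.0) -> float:
    """Power draw in watts for a CPU utilization in [0, 1]."""
    return p_idle + utilization * (p_busy - p_idle)

def energy_consumed(utilization_trace: list[tuple[float, float]]) -> float:
    """Integrate power over (duration_seconds, utilization) pairs; returns joules."""
    return sum(duration * power_draw(u) for duration, u in utilization_trace)

# Example: 60 s at 20% load, then 30 s at 90% load.
print(f"{energy_consumed([(60.0, 0.2), (30.0, 0.9)]):.0f} J")
```

A performance prediction supplies the utilization trace; the power model turns it into an energy estimate that can be compared across architecture candidates.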


International Conference on Performance Engineering | 2017

An Expandable Extraction Framework for Architectural Performance Models

Jürgen Walter; Christian Stier; Heiko Koziolek; Samuel Kounev

Providing users with Quality of Service (QoS) guarantees and preventing performance problems are challenging tasks for software systems. Architectural performance models can be applied to explore the performance properties of a software system at design time and run time. At design time, architectural performance models support reasoning on the effects of design decisions. At run time, they enable automatic reconfigurations by reasoning on the effects of changing user behavior. In this paper, we present a framework for the extraction of architectural performance models from monitoring log files that generalizes over the targeted architectural modeling language. Using the presented framework, creating a performance model extraction tool for a specific modeling formalism requires only the implementation of a key set of object creation routines specific to that formalism. Our framework integrates them with extraction techniques that apply to many architectural performance models, e.g., resource demand estimation techniques. Through a high level of reuse, this dramatically lowers the effort of implementing performance model extraction tools. We evaluate our framework by presenting builders for the Descartes Modeling Language (DML) and the Palladio Component Model (PCM). For the extracted models, we compare simulation results with measurements and obtain accurate results.
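
The builder idea can be pictured roughly as follows; the class and method names are hypothetical, not the framework's actual API.

```python
# Hypothetical sketch of the builder split described above: the generic
# extraction pipeline is fixed, and only a small set of object-creation
# routines varies per modeling formalism (e.g., DML or PCM).
from abc import ABC, abstractmethod

class ModelBuilder(ABC):
    """Formalism-specific object creation routines."""

    @abstractmethod
    def create_component(self, name: str): ...

    @abstractmethod
    def create_resource(self, name: str, demand: float): ...

class Extractor:
    """Formalism-agnostic part: parses monitoring records, estimates demands,
    and delegates model element creation to the builder."""

    def __init__(self, builder: ModelBuilder):
        self.builder = builder

    def extract(self, monitoring_records: list[dict]) -> None:
        for record in monitoring_records:
            self.builder.create_component(record["operation"])
            # A real resource demand estimation technique would plug in here;
            # the observed response time serves as a crude stand-in.
            self.builder.create_resource(record["host"], record["response_time"])
```

Adding support for a new modeling formalism then means implementing one more ModelBuilder subclass while reusing the entire pipeline.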


Statistical and Scientific Database Management | 2012

Sensitivity of self-tuning histograms: query order affecting accuracy and robustness

Andranik Khachatryan; Emmanuel Müller; Christian Stier; Klemens Böhm

In scientific databases, the amount and the complexity of data call for data summarization techniques. Such summaries are used to assist fast approximate query answering or query optimization. Histograms are a prominent class of model-free data summaries and are widely used in database systems. So-called self-tuning histograms look at query-execution results to refine themselves. An assumption with such histograms is that they can learn the dataset from scratch. We show that this is not the case and highlight a major challenge that stems from it: traditional self-tuning is overly sensitive to the order of queries and reaches only local optima with high estimation errors. We show that a self-tuning method can be improved significantly if it starts with a carefully chosen initial configuration. We propose initialization by subspace clusters in projections of the data. This improves both the accuracy and robustness of self-tuning histograms.
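
As a toy illustration of the self-tuning feedback loop (not the specific histogram variant studied in the paper), each query's actual result cardinality nudges the bucket frequencies:

```python
# Toy sketch of self-tuning refinement: after each range query, spread the
# estimation error over the touched buckets. Illustrative only.

class SelfTuningHistogram1D:
    def __init__(self, boundaries: list[float], learning_rate: float = 0.5):
        self.boundaries = boundaries               # sorted bucket edges
        self.freqs = [0.0] * (len(boundaries) - 1)
        self.lr = learning_rate

    def estimate(self, lo: float, hi: float) -> float:
        """Cardinality estimate for a range query, assuming uniform buckets."""
        est = 0.0
        for i, f in enumerate(self.freqs):
            b_lo, b_hi = self.boundaries[i], self.boundaries[i + 1]
            overlap = max(0.0, min(hi, b_hi) - max(lo, b_lo))
            if b_hi > b_lo:
                est += f * overlap / (b_hi - b_lo)
        return est

    def refine(self, lo: float, hi: float, actual: float) -> None:
        """Spread the estimation error over the buckets the query touched."""
        touched = [i for i in range(len(self.freqs))
                   if min(hi, self.boundaries[i + 1]) > max(lo, self.boundaries[i])]
        if not touched:
            return
        err = actual - self.estimate(lo, hi)
        for i in touched:
            self.freqs[i] += self.lr * err / len(touched)
```

Because the error is spread only over the buckets each query touches, different query orders push the frequencies toward different local optima, which is exactly the sensitivity the paper highlights.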


IEEE Transactions on Knowledge and Data Engineering | 2015

Improving Accuracy and Robustness of Self-Tuning Histograms by Subspace Clustering

Andranik Khachatryan; Emmanuel Müller; Christian Stier; Klemens Böhm

In large databases, the amount and the complexity of the data call for data summarization techniques. Such summaries are used to assist fast approximate query answering or query optimization. Histograms are a prominent class of model-free data summaries and are widely used in database systems. So-called self-tuning histograms look at query-execution results to refine themselves. An assumption with such histograms, which has not been questioned so far, is that they can learn the dataset from scratch, that is, starting with an empty bucket configuration. We show that this is not the case. Self-tuning methods are very sensitive to the initial configuration. Three major problems stem from this: traditional self-tuning is unable to learn projections of multi-dimensional data, is sensitive to the order of queries, and reaches only local optima with high estimation errors. We show how to improve a self-tuning method significantly by starting with a carefully chosen initial configuration. We propose initialization by dense subspace clusters in projections of the data, which improves both the accuracy and robustness of self-tuning. Our experiments on different datasets show that the error rate is typically halved compared to the uninitialized version.
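
A rough sketch of the proposed fix, seeding the initial bucket configuration from clusters found in a projection of the data; KMeans stands in for a proper subspace clustering algorithm, and all names are illustrative:

```python
# Sketch of cluster-based initialization: derive bucket edges from the
# extents of dense regions instead of starting empty. KMeans is a
# stand-in for the subspace clustering the paper proposes.
import numpy as np
from sklearn.cluster import KMeans

def initial_buckets_from_clusters(data_1d: np.ndarray, k: int = 4) -> list[float]:
    """Derive 1-D bucket boundaries from cluster extents in one projection."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(data_1d.reshape(-1, 1))
    edges = set()
    for c in range(k):
        members = data_1d[labels == c]
        edges.update((float(members.min()), float(members.max())))
    return sorted(edges)

# Bimodal toy data: the derived edges hug the two dense regions.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(10, 1, 500), rng.normal(50, 2, 500)])
print(initial_buckets_from_clusters(data, k=2))
```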


ACM Transactions on Autonomous and Adaptive Systems | 2017

Modeling and Extracting Load Intensity Profiles

Jóakim von Kistowski; Nikolas Herbst; Samuel Kounev; Henning Groenda; Christian Stier; Sebastian Lehrig

Today's system developers and operators face the challenge of creating software systems that make efficient use of dynamically allocated resources under highly variable and dynamic load profiles, while at the same time delivering reliable performance. Benchmarking systems under these constraints is difficult, as state-of-the-art benchmarking frameworks provide only limited support for emulating such dynamic and highly variable load profiles when creating realistic workload scenarios. Industrial benchmarks typically confine themselves to workloads with constant or stepwise increasing loads, or alternatively support replaying recorded load traces. Statistical load intensity descriptions also do not sufficiently capture how concrete load profile patterns vary over time. To address these issues, we present the Descartes Load Intensity Model (DLIM). DLIM provides a modeling formalism for describing load intensity variations over time. A DLIM instance can be used as a compact representation of a recorded load intensity trace, providing a powerful tool for benchmarking and performance analysis. As manually obtaining DLIM instances can be time-consuming, we present three different automated extraction methods, which also help enable autonomous system analysis for self-adaptive systems. Model expressiveness is validated using the presented extraction methods. Extracted DLIM instances exhibit a median modeling error of 12.4% on average over nine different real-world traces covering between two weeks and seven months. Additionally, the extraction methods perform orders of magnitude faster than existing time series decomposition approaches.
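
To give a feel for the kind of load profile DLIM describes, here is an illustrative composition of a seasonal pattern, trend, burst, and noise; the actual DLIM metamodel is richer than this sketch:

```python
# Illustrative load intensity function composed of a daily seasonal
# pattern, a slow trend, a burst window, and noise. Not DLIM's metamodel.
import math
import random

def load_intensity(t_hours: float) -> float:
    seasonal = 50 * (1 + math.sin(2 * math.pi * t_hours / 24))  # daily cycle
    trend = 2.0 * (t_hours / 24)                                # slow growth per day
    burst = 80.0 if 12.0 <= (t_hours % 24) < 12.5 else 0.0      # midday spike
    noise = random.gauss(0, 5)
    return max(0.0, seasonal + trend + burst + noise)

# Arrival rate (requests/s) sampled at each hour of the first day.
print([round(load_intensity(h), 1) for h in range(24)])
```

A compact parametric description like this is what makes a recorded trace cheap to store, replay, and vary in benchmarking experiments.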


Quality of Software Architectures | 2016

Considering Transient Effects of Self-Adaptations in Model-Driven Performance Analyses

Christian Stier; Anne Koziolek

Model-driven performance engineering allows software architects to reason on the performance characteristics of a software system in early design phases. In recent years, model-driven analysis techniques have been developed to evaluate the performance characteristics of self-adaptive software systems. These techniques aim to reason on the ability of a self-adaptive software system to fulfill performance requirements in transient phases. A transient phase is the interval in which the behavior of the system changes, e.g., due to a burst in user requests. However, the effectiveness and efficiency with which a system is able to adapt depend not only on the time when it triggers adaptation actions but also on the time at which they are completed. Executing an adaptation action can cause additional stress on the adapted system, which can further impede the performance of the system in the transient phase. Existing model-driven analyses of self-adaptive software do not consider these transient effects. This paper outlines an approach for evaluating transient effects in model-driven analyses of self-adaptive software systems. We evaluated our approach on a horizontally scaling media hosting application in three experiments. By considering the delay in booting new Virtual Machines (VMs), we were able to improve the accuracy of predicted response times. The second and third experiments demonstrated that the increased accuracy enables early detection and resolution of design deficiencies in self-adaptive software systems.
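
The core transient effect can be reproduced in a toy discrete-time simulation: a scale-out is triggered immediately, but capacity only arrives after a boot delay, so the queue keeps growing in the meantime. All numbers below are made up for illustration:

```python
# Toy discrete-time sketch of the transient effect: triggered adaptations
# (VM boots) only take effect after a delay, during which requests queue.

def simulate(arrivals_per_step, capacity_per_vm=10, boot_delay=3):
    vms, pending_boots, queue = 1, [], 0
    for t, arriving in enumerate(arrivals_per_step):
        pending_boots = [d - 1 for d in pending_boots]
        vms += sum(1 for d in pending_boots if d == 0)   # boots that finished
        pending_boots = [d for d in pending_boots if d > 0]
        queue = max(0, queue + arriving - vms * capacity_per_vm)
        if queue > 0 and not pending_boots:              # naive scale-out trigger
            pending_boots.append(boot_delay)
        print(f"t={t} vms={vms} queue={queue}")

simulate([8, 12, 25, 25, 25, 25, 12, 8])
```

An analysis that ignores the boot delay would predict the queue draining several steps earlier, which is exactly the prediction error the paper's approach corrects.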


Simulation Tools and Techniques for Communications, Networks and Systems | 2015

Towards automated data-driven model creation for cloud computing simulation

Sergej Svorobej; James Byrne; Paul Liston; Peter J. Byrne; Christian Stier; Henning Groenda; Zafeirios Papazachos; Dimitrios S. Nikolopoulos

The increasing complexity and scale of cloud computing environments, due to widespread data centre heterogeneity, make measurement-based evaluations highly difficult to achieve. The use of simulation tools to support decision making in cloud computing environments is therefore an increasing trend. However, the data required to model cloud computing environments with an appropriate degree of accuracy is typically large in volume, very difficult to collect without some form of automation, often not available in a suitable format, and time-consuming to gather manually. In this research, an automated method for cloud computing topology definition, data collection, and model creation is presented, within the context of a suite of tools that have been developed and integrated to support these activities.
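
Conceptually, the automation replaces manual data gathering with a pipeline like the hypothetical sketch below; the monitoring endpoint, field names, and output format are all invented for illustration:

```python
# Hypothetical pipeline sketch: pull topology data from a monitoring
# endpoint and translate it into a simulation model definition. The URL,
# JSON fields, and model format are invented, not the paper's tooling.
import json
import urllib.request

def collect_topology(monitoring_url: str) -> dict:
    """Pull the current data-centre topology from a monitoring endpoint."""
    with urllib.request.urlopen(monitoring_url) as resp:
        return json.load(resp)

def to_simulation_model(topology: dict) -> dict:
    """Translate the collected topology into a simulation model definition."""
    return {
        "hosts": [
            {"id": h["name"], "cores": h["cpu_cores"], "ram_gb": h["ram_gb"]}
            for h in topology.get("hosts", [])
        ]
    }
```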


International Conference on Performance Engineering | 2014

Modelling database lock-contention in architecture-level performance simulation

Philipp Merkle; Christian Stier

Databases are the origin of many performance problems found in transactional information systems. Performance suffers especially when databases employ locking to isolate concurrent transactions. Software performance models therefore need to reflect lock contention in order to be a credible source for guiding design decisions. We propose a hybrid simulation approach that integrates a novel locking model into the Palladio software architecture performance simulator. Our model operates at the row level and is tailored for use with architecture-level performance models. An experimental evaluation yields promising results close to the measured performance.
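
The heart of such a locking model is per-row lock queues whose waiting times feed into simulated response times; a minimal sketch, not Palladio's actual implementation:

```python
# Minimal sketch of row-level lock contention: transactions queue per row,
# and time spent waiting adds to their simulated response time.
from collections import defaultdict, deque

class RowLockManager:
    def __init__(self):
        self.holder = {}                      # row id -> transaction id
        self.wait_queue = defaultdict(deque)  # row id -> waiting transactions

    def acquire(self, tx: str, row: int) -> bool:
        """True if the lock was granted, False if tx must wait."""
        if row not in self.holder:
            self.holder[row] = tx
            return True
        self.wait_queue[row].append(tx)
        return False

    def release(self, tx: str, row: int) -> None:
        assert self.holder.get(row) == tx
        if self.wait_queue[row]:
            self.holder[row] = self.wait_queue[row].popleft()  # hand over lock
        else:
            del self.holder[row]
```

Modeling contention at row rather than table granularity keeps the simulated blocking close to what a real database with row-level locking would exhibit.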


International Conference on Performance Engineering | 2018

Rapid Testing of IaaS Resource Management Algorithms via Cloud Middleware Simulation

Christian Stier; Jörg Domaschka; Anne Koziolek; Sebastian Krach; Jakub Krzywda; Ralf H. Reussner

Infrastructure as a Service (IaaS) cloud services allow users to deploy distributed applications in a virtualized environment without having to customize their applications to a specific Platform as a Service (PaaS) stack. It is common practice to host multiple Virtual Machines (VMs) on the same server to save resources. Traditionally, IaaS data center management required manual effort for optimization, e.g., by consolidating VM placement based on changes in usage patterns. Many resource management algorithms and frameworks have been developed to automate this process. Resource management algorithms are typically tested via experimentation or simulation. The main drawback of both approaches is the high effort required to conduct the testing. Existing cloud or IaaS simulators require algorithm engineers to reimplement their algorithms against the simulator's API. Furthermore, the engineer needs to manually define the workload model used for algorithm testing. We propose an approach for the simulative analysis of IaaS cloud infrastructure that allows algorithm engineers and data center operators to evaluate optimization algorithms without investing additional effort to reimplement them in a simulation environment. By leveraging runtime monitoring data, we automatically construct the simulation models used to test the algorithms. Our validation shows that algorithm tests conducted using our IaaS cloud simulator match the measured behavior on actual hardware.
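
The key design choice is that the algorithm programs against one middleware interface, which the simulator reimplements, so the algorithm itself is tested unchanged. A hedged sketch with invented names:

```python
# Sketch of the "no reimplementation" idea: the resource management
# algorithm talks to a middleware interface that the simulator also
# implements. All names are illustrative, not the actual middleware API.
from abc import ABC, abstractmethod

class CloudMiddleware(ABC):
    @abstractmethod
    def list_vms(self) -> list[dict]: ...
    @abstractmethod
    def migrate(self, vm_id: str, target_host: str) -> None: ...

class SimulatedMiddleware(CloudMiddleware):
    """Backed by simulation models constructed from runtime monitoring data."""
    def __init__(self, vms: list[dict]):
        self.vms = vms
    def list_vms(self) -> list[dict]:
        return self.vms
    def migrate(self, vm_id: str, target_host: str) -> None:
        next(vm for vm in self.vms if vm["id"] == vm_id)["host"] = target_host

def consolidate(mw: CloudMiddleware, threshold: float = 0.2) -> None:
    """A trivial stand-in algorithm: migrate VMs off under-utilized hosts."""
    for vm in mw.list_vms():
        if vm["host_utilization"] < threshold:
            mw.migrate(vm["id"], "consolidation-target")

mw = SimulatedMiddleware([{"id": "vm-1", "host": "h1", "host_utilization": 0.1}])
consolidate(mw)
print(mw.list_vms())
```

Because consolidate depends only on the CloudMiddleware interface, the same unmodified function could run against the production middleware or the simulator.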

Collaboration


Dive into Christian Stier's collaborations.

Top Co-Authors

Henning Groenda

Forschungszentrum Informatik

James Byrne

Dublin City University

Anne Koziolek

Karlsruhe Institute of Technology