Publication


Featured research published by Michael Hauck.


Software Engineering and Advanced Applications | 2010

The Performance Cockpit Approach: A Framework for Systematic Performance Evaluations

Dennis Westermann; Jens Happe; Michael Hauck; Christian Heupel

Evaluating the performance (timing behavior, throughput, and resource utilization) of a software system becomes increasingly challenging as today’s enterprise applications are built on a large base of existing software (e.g. middleware, legacy applications, and third-party services). As the performance of a system is affected by multiple factors on each layer, performance analysts require detailed knowledge about the system under test and have to deal with a large number of tools for benchmarking, monitoring, and analysis. In practice, performance analysts try to handle this complexity by focusing on certain aspects, tools, or technologies. However, these isolated solutions are inefficient due to limited reuse and knowledge sharing. The Performance Cockpit presented in this paper is a framework that encapsulates knowledge about performance engineering, the system under test, and analyses in a single application with a flexible, plug-in-based architecture. We demonstrate the value of the framework by means of two different case studies.


Conference on Software Maintenance and Reengineering | 2010

Reverse Engineering Component Models for Quality Predictions

Steffen Becker; Michael Hauck; Mircea Trifu; Klaus Krogmann; Jan Kofron

Legacy applications are still widespread. If the need arises to change an application's deployment or update its functionality, it becomes difficult to estimate the performance impact of such modifications due to the absence of corresponding models. In this paper, we present an extendable integrated environment, based on Eclipse and developed within the scope of the Q-Impress project, for reverse engineering of legacy applications (in C/C++/Java). The Q-Impress project aims at modeling quality attributes (performance, reliability, maintainability) at an architectural level and allows choosing the most suitable variant for implementing a desired modification. The main contributions of the project include i) a high integration of all steps of the entire process into a single tool, a beta version of which has already been successfully tested in a case study, ii) integration of multiple research approaches to performance modeling, and iii) an extendable underlying meta-model for different quality dimensions.


Component-Based Software Engineering | 2009

Modelling Layered Component Execution Environments for Performance Prediction

Michael Hauck; Michael Kuperberg; Klaus Krogmann; Ralf H. Reussner

Software architects often use model-based techniques to analyse performance (e.g. response times), reliability and other extra-functional properties of software systems. These techniques operate on models of the software architecture and execution environment, and are applied at design time for early evaluation of design alternatives, especially to avoid implementing systems with insufficient quality. Virtualisation (such as operating system hypervisors or virtual machines) and multiple layers in execution environments (e.g. RAID disk array controllers on top of hard disks) are becoming increasingly popular in practice and need to be reflected in the models of execution environments. However, current component meta-models do not support virtualisation and cannot model individual layers of execution environments. This means that the entire monolithic model must be recreated when different implementations of a layer must be compared to make a design decision, e.g. when comparing different Java Virtual Machines. In this paper, we present an extension of an established model-based performance prediction approach and associated tools which allow modelling and predicting state-of-the-art layered execution environments, such as disk arrays, virtual machines, and application servers. The evaluation of the presented approach shows its applicability and the resulting accuracy of the performance prediction while respecting the structure of the modelled resource environment.


International Conference on Cloud Computing and Services Science | 2012

A Method for Experimental Analysis and Modeling of Virtualization Performance Overhead

Nikolaus Huber; Marcel von Quast; Fabian Brosig; Michael Hauck; Samuel Kounev

Nowadays, virtualization solutions are gaining increasing importance. By enabling the sharing of physical resources, thus making resource usage more efficient, they promise energy and cost savings. Additionally, virtualization is the key enabling technology for cloud computing and server consolidation. However, resource sharing and other factors have direct effects on system performance, which are not yet well understood. Hence, performance prediction and performance management of services deployed in virtualized environments like public and private clouds is a challenging task. Because of the large variety of virtualization solutions, a generic approach to predict the performance overhead of services running on virtualization platforms is highly desirable. In this paper, we present a methodology to identify performance-relevant factors and quantify their influence, based on an empirical approach using benchmarks. We show experimental results on two popular state-of-the-art virtualization platforms, Citrix XenServer 5.5 and VMware ESX 4.0, as representatives of the two major hypervisor architectures. Based on these results, we propose a basic, generic performance prediction model for the two types of hypervisor architectures. The goal is to predict the performance overhead of executing services on virtualized platforms.
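The basic idea of such a prediction model can be illustrated, in heavily simplified form, as inflating each resource demand of a service by a benchmark-derived relative overhead factor. The sketch below is not the model from the paper; the function name `predict_virtualized_time` and the overhead values are hypothetical placeholders for measured numbers.

```python
# Hypothetical relative overhead factors per resource type, standing in
# for values that would be measured with benchmarks on a given hypervisor.
OVERHEAD = {"cpu": 0.05, "memory": 0.40, "disk_io": 0.25, "network_io": 0.30}

def predict_virtualized_time(native_times):
    """Predict a service's execution time on a virtualized platform.

    native_times maps resource type -> time the service spends on that
    resource when running natively; each portion is inflated by the
    relative overhead factor of its resource type.
    """
    return sum(t * (1.0 + OVERHEAD[r]) for r, t in native_times.items())

# A service spending 2 s on CPU and 1 s on disk I/O natively:
print(predict_virtualized_time({"cpu": 2.0, "disk_io": 1.0}))  # 3.35
```

The per-resource decomposition mirrors the paper's observation that different hypervisor architectures penalize CPU, memory, and I/O operations very differently, so a single scalar overhead factor would be too coarse.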


Quality of Software Architectures | 2011

Ginpex: deriving performance-relevant infrastructure properties through goal-oriented experiments

Michael Hauck; Michael Kuperberg; Nikolaus Huber; Ralf H. Reussner

In software performance engineering, the infrastructure on which an application is running plays a crucial role when predicting the performance of the application. Thus, to yield accurate prediction results, performance-relevant properties and behaviour of the infrastructure have to be integrated into performance models. However, capturing these properties is a cumbersome and error-prone task, as it requires carefully engineered measurements and experiments. Existing approaches for creating infrastructure performance models require manual coding of these experiments, or ignore the detailed properties in the models. The contribution of this paper is the Ginpex approach, which introduces goal-oriented and model-based specification and generation of executable performance experiments for detecting and quantifying performance-relevant infrastructure properties. Ginpex provides a metamodel for experiment specification and comes with pre-defined experiment templates that provide automated experiment execution on the target platform and also automate the evaluation of the experiment results. We evaluate Ginpex using two case studies, where experiments are executed to detect the operating system scheduler's timeslice length, and to quantify the CPU virtualization overhead for an application executed in a virtualized environment.
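The timeslice case study rests on a simple idea: a measurement task records timestamps in a tight loop while a competing CPU-bound task runs, and each preemption shows up as a gap far larger than the normal sampling interval. A minimal sketch of that gap analysis follows; the function name, thresholding, and synthetic trace are illustrative assumptions, not Ginpex's actual implementation.

```python
def detect_timeslice(timestamps, factor=10.0):
    """Estimate a competing task's timeslice from timestamp gaps.

    Gaps far larger than the typical sampling interval indicate
    preemptions; the median such gap approximates the timeslice.
    """
    gaps = sorted(b - a for a, b in zip(timestamps, timestamps[1:]))
    base = gaps[len(gaps) // 2]                  # typical sampling interval
    preemptions = [g for g in gaps if g > factor * base]
    if not preemptions:
        return None                              # no contention observed
    return preemptions[len(preemptions) // 2]    # median preemption gap

# Synthetic trace: one sample every 0.1 ms, with a 100 ms preemption
# after every 200th sample (simulating a contending task's timeslice).
t, trace = 0.0, []
for i in range(1000):
    trace.append(t)
    t += 0.0001
    if i % 200 == 199:
        t += 0.1
print(detect_timeslice(trace))   # roughly 0.1 seconds
```

In a real run the timestamps would come from a monotonic high-resolution clock (e.g. `time.perf_counter()`) rather than a synthetic loop.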


Empirical Software Engineering | 2013

Performance and reliability prediction for evolving service-oriented software systems

Heiko Koziolek; Bastian Schlich; Steffen Becker; Michael Hauck

During software system evolution, software architects intuitively trade off the different architecture alternatives for their extra-functional properties, such as performance, maintainability, reliability, security, and usability. Researchers have proposed numerous model-driven prediction methods based on queuing networks or Petri nets, which claim to be more cost-effective and less error-prone than current practice. Practitioners are reluctant to apply these methods because of the unknown prediction accuracy and work effort. We have applied a novel model-driven prediction method called Q-ImPrESS on a large-scale process control system from ABB consisting of several million lines of code. This paper reports on the achieved performance prediction accuracy and reliability prediction sensitivity analyses as well as the effort in person hours for achieving these results.


Quantitative Evaluation of Systems | 2010

A Prediction Model for Software Performance in Symmetric Multiprocessing Environments

Jens Happe; Henning Groenda; Michael Hauck; Ralf H. Reussner

The broad introduction of multi-core processors made symmetric multiprocessing (SMP) environments mainstream. The additional cores can significantly increase software performance. However, their actual benefit depends on the operating system scheduler's capabilities, the system's workload, and the software's degree of concurrency. The load distribution on the available processors (or cores) strongly influences response times and throughput of software applications. Hence, understanding the operating system scheduler's influence on performance and scalability is essential for the accurate prediction of software performance (response time, throughput, and resource utilisation). Existing prediction approaches tend to approximate the influence of operating system schedulers by abstract policies such as processor sharing and its more sophisticated extensions. However, these abstractions often fail to accurately capture software performance in SMP environments. In this paper, we present a performance Model for general-purpose Operating System Schedulers (MOSS). It allows analyses of software performance taking the influences of schedulers in SMP environments into account. The model is defined in terms of timed Coloured Petri Nets and predicts the effect of different operating system schedulers (e.g., Windows 7, Vista, Server 2003, and Linux 2.6) on software performance. We validated the prediction accuracy of MOSS in a case study using a business information system. In our experiments, the deviation between predictions and measurements was below 10% in most cases and did not exceed 30%.
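The processor-sharing abstraction that the paper argues is too coarse for SMP environments can be made concrete with a small sketch: all jobs present on a machine progress at an equal fraction of the available cores' speed, ignoring real scheduler details such as timeslices, priorities, and load balancing. The function below is an illustrative toy under that abstraction, not part of MOSS.

```python
def ps_response_times(demands, cores=1):
    """Response times of jobs that all arrive at t=0, under an idealized
    processor-sharing abstraction: n active jobs share min(cores, n)
    cores equally, so each job runs at speed min(cores, n) / n.
    """
    remaining = dict(enumerate(demands))   # job id -> remaining demand
    finish, t = {}, 0.0
    while remaining:
        n = len(remaining)
        speed = min(cores, n) / n                        # per-job speed
        j, d = min(remaining.items(), key=lambda kv: kv[1])
        dt = d / speed                                   # time to next completion
        t += dt
        for k in remaining:
            remaining[k] -= speed * dt
        finish[j] = t
        del remaining[j]
    return [finish[i] for i in range(len(demands))]

print(ps_response_times([1.0, 2.0], cores=1))  # [2.0, 3.0]
print(ps_response_times([1.0, 2.0], cores=2))  # [1.0, 2.0]
```

Real general-purpose schedulers deviate from these idealized numbers, e.g. through per-core run queues and delayed load balancing, which is exactly the gap a detailed model such as MOSS aims to close.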


Modeling, Analysis, and Simulation on Computer and Telecommunication Systems | 2010

Automatic Derivation of Performance Prediction Models for Load-balancing Properties Based on Goal-oriented Measurements

Michael Hauck; Jens Happe; Ralf H. Reussner

In symmetric multiprocessing environments, the performance of a software system heavily depends on the application's parallelism, the scheduling and load-balancing policies of the operating system, and the infrastructure it is running on. The scheduling of tasks can influence the response time of an application by several orders of magnitude. Thus, detailed models of the operating system scheduler are essential for accurate performance predictions. However, building such models for schedulers and including them in performance prediction models involves a lot of effort. For this reason, simplified scheduler models are generally used for the performance evaluation of business information systems. In this work, we present an approach to derive load-balancing properties of general-purpose operating system (GPOS) schedulers automatically. Our approach uses goal-oriented measurements to derive performance models based on observations. Furthermore, the derived performance model is plugged into the Palladio Component Model (PCM), a model-based performance prediction approach. We validated the applicability of the approach and its prediction accuracy in a case study on different operating systems.


Journal of Systems and Software | 2016

Integration of a preemptive priority based scheduler in the Palladio Workbench

Javier Fernández-Salgado; Pablo Parra; Michael Hauck; Agustín M. Hellín; Sebastián Sánchez-Prieto; Klaus Krogmann; Óscar R. Polo

Highlights: We present an extension of the Palladio Component Model for embedded systems. A set of patterns has been defined to model real-time systems in Palladio. A set of validation tests is provided to prove the correctness of the solution. A complete use case based on a real system is provided.

This paper presents an extension to the Palladio Component Model (PCM), together with a new performance analysis infrastructure that supports the fixed-priority preemptive scheduling policy. The proposed solution allows modelling and analysing component-based embedded software applications that are defined using a specific pattern in which each component is executed by a task with a specific priority. The infrastructure is also capable of analysing system performance when the tasks access shared resources, using either immediate priority ceiling or priority inheritance protocols in order to avoid the priority inversion problem. The paper shows the set of rules that enable the transformation between an application, compliant with the proposed design pattern, and its corresponding PCM. Finally, a use case example based on a real system, and a set of tests that validate the analysis infrastructure, are provided. This system is the on-board software of a satellite payload that is currently being developed by the Space Research Group of the University of Alcala. This software is in charge of managing the Instrument Control Unit of the Energetic Particle Detector, which will be launched as part of the Solar Orbiter mission of the European Space Agency and the United States National Aeronautics and Space Administration (NASA).


International Conference on Cloud Computing and Services Science | 2011

Evaluating and Modeling Virtualization Performance Overhead for Cloud Environments

Nikolaus Huber; Marcel von Quast; Michael Hauck; Samuel Kounev

Collaboration


Top co-authors of Michael Hauck and their affiliations.

Ralf H. Reussner
Karlsruhe Institute of Technology

Jens Happe
University of Oldenburg

Nikolaus Huber
Karlsruhe Institute of Technology

Klaus Krogmann
Karlsruhe Institute of Technology

Michael Kuperberg
Karlsruhe Institute of Technology

Marcel von Quast
Karlsruhe Institute of Technology

Christian Heupel
Karlsruhe Institute of Technology

Fabian Brosig
Karlsruhe Institute of Technology