
Publication


Featured research published by Michael Kuperberg.


IEEE Transactions on Software Engineering | 2010

Using Genetic Search for Reverse Engineering of Parametric Behavior Models for Performance Prediction

Klaus Krogmann; Michael Kuperberg; Ralf H. Reussner

In component-based software engineering, existing components are often reused in new applications. Correspondingly, the response time of an entire component-based application can be predicted from the execution durations of individual component services. These execution durations depend on the runtime behavior of a component which itself is influenced by three factors: the execution platform, the usage profile, and the component wiring. To cover all relevant combinations of these influencing factors, conventional prediction of response times requires repeated deployment and measurements of component services for all such combinations, incurring a substantial effort. This paper presents a novel comprehensive approach for reverse engineering and performance prediction of components. In it, genetic programming is utilized for reconstructing a behavior model from monitoring data, runtime bytecode counts, and static bytecode analysis. The resulting behavior model is parameterized over all three performance-influencing factors, which are specified separately. This results in significantly fewer measurements: The behavior model is reconstructed only once per component service, and one application-independent bytecode benchmark run is sufficient to characterize an execution platform. To predict the execution durations for a concrete platform, our approach combines the behavior model with platform-specific benchmarking results. We validate our approach by predicting the performance of a file sharing application.
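The reconstruction step relies on genetic programming; as a drastically simplified illustration of the search-based fitting idea, the toy below evolves the two coefficients of a linear cost model against invented "measured" durations. The model family, fitness function, and mutation scheme are all assumptions for illustration, not the authors' algorithm.

```java
import java.util.Random;

// Toy search-based fit of a parametric cost model (duration ~= a*n + b) to
// invented measurements. The real approach evolves full behaviour models with
// genetic programming; this only illustrates the fitness-driven search idea.
public class ToyGeneticFit {
    // (input size n, measured duration) pairs; generated from 10*n + 5.
    static final double[][] SAMPLES = { {10, 105}, {20, 205}, {40, 405} };

    // Sum of squared prediction errors; lower is fitter.
    static double fitness(double a, double b) {
        double err = 0;
        for (double[] s : SAMPLES) {
            double d = a * s[0] + b - s[1];
            err += d * d;
        }
        return err;
    }

    // (1+1)-style evolution: mutate, keep the candidate if it is fitter,
    // and shrink the mutation step whenever a proposal fails.
    static double[] evolve(int generations, long seed) {
        Random rnd = new Random(seed);
        double a = 0, b = 0, best = fitness(a, b), step = 1.0;
        for (int g = 0; g < generations; g++) {
            double na = a + step * rnd.nextGaussian();
            double nb = b + step * rnd.nextGaussian();
            double f = fitness(na, nb);
            if (f < best) { best = f; a = na; b = nb; }
            else step = Math.max(0.005, step * 0.999);
        }
        return new double[] { a, b };
    }

    public static void main(String[] args) {
        double[] p = evolve(20000, 42);
        System.out.printf("duration ~= %.2f * n + %.2f%n", p[0], p[1]);
    }
}
```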


Component-Based Software Engineering | 2008

Performance Prediction for Black-Box Components Using Reengineered Parametric Behaviour Models

Michael Kuperberg; Klaus Krogmann; Ralf H. Reussner

In component-based software engineering, the response time of an entire application is often predicted from the execution durations of individual component services. However, these execution durations are specific for an execution platform (i.e. its resources such as CPU) and for a usage profile. Reusing an existing component on different execution platforms up to now required repeated measurements of the concerned components for each relevant combination of execution platform and usage profile, leading to high effort. This paper presents a novel integrated approach that overcomes these limitations by reconstructing behaviour models with platform-independent resource demands of bytecode components. The reconstructed models are parameterised over input parameter values. Using platform-specific results of bytecode benchmarking, our approach is able to translate the platform-independent resource demands into predictions for execution durations on a certain platform. We validate our approach by predicting the performance of a file sharing application.
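The prediction step described above amounts to combining platform-independent bytecode counts with platform-specific per-instruction costs. The sketch below shows that combination; the instruction names, counts, and nanosecond costs are invented example numbers, not benchmark results from the paper.

```java
import java.util.Map;

// Sketch of the prediction step: platform-independent bytecode counts are
// combined with platform-specific per-instruction costs. All numbers invented.
public class BytecodePrediction {
    // Counts a reconstructed behaviour model might yield for one service call.
    static final Map<String, Long> COUNTS =
        Map.of("iload", 1200L, "iadd", 800L, "invokevirtual", 50L);
    // Per-instruction costs in nanoseconds, as a bytecode benchmark might report.
    static final Map<String, Double> COSTS_NS =
        Map.of("iload", 0.5, "iadd", 0.7, "invokevirtual", 15.0);

    // Predicted duration = sum over instruction types of (count * cost).
    static double predictNs(Map<String, Long> counts, Map<String, Double> costsNs) {
        double total = 0;
        for (Map.Entry<String, Long> e : counts.entrySet()) {
            total += e.getValue() * costsNs.getOrDefault(e.getKey(), 0.0);
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println("predicted duration: " + predictNs(COUNTS, COSTS_NS) + " ns");
    }
}
```

Swapping in a different platform's cost table changes the prediction without re-measuring the component, which is the point of separating the two factors.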


International Conference on Data Mining | 2004

Metric incremental clustering of nominal data

Dan A. Simovici; Namita Singla; Michael Kuperberg

We present an algorithm for clustering nominal data that is based on a metric on the set of partitions of a finite set of objects; this metric is defined starting from a lower valuation of the lattice of partitions. The proposed algorithm seeks to determine a clustering partition such that the total distance between this partition and the partitions determined by the attributes of the objects has a local minimum. The resulting clustering is quite stable relative to the ordering of the objects.
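One well-known metric on partitions of this flavour (though not necessarily the exact valuation the paper uses) is the Mirkin distance: the number of object pairs on which two partitions disagree. A minimal sketch:

```java
// Mirkin distance between two partitions, each given as a block label per
// object: it counts the pairs that one partition puts together and the other
// separates. This is one standard metric on the partition lattice; the paper's
// metric is derived from a lower valuation and may differ in detail.
public class PartitionDistance {
    static int mirkin(int[] p, int[] q) {
        int d = 0;
        for (int i = 0; i < p.length; i++) {
            for (int j = i + 1; j < p.length; j++) {
                boolean togetherInP = p[i] == p[j];
                boolean togetherInQ = q[i] == q[j];
                if (togetherInP != togetherInQ) d++; // the partitions disagree on this pair
            }
        }
        return d;
    }

    public static void main(String[] args) {
        int[] byColor = {0, 0, 1, 1}; // partition induced by one nominal attribute
        int[] byShape = {0, 1, 1, 0}; // partition induced by another attribute
        System.out.println("distance: " + mirkin(byColor, byShape));
    }
}
```

An incremental clustering in this spirit would move an object between blocks whenever doing so lowers the total distance to the attribute-induced partitions.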


Component-Based Software Engineering | 2009

Modelling Layered Component Execution Environments for Performance Prediction

Michael Hauck; Michael Kuperberg; Klaus Krogmann; Ralf H. Reussner

Software architects often use model-based techniques to analyse performance (e.g. response times), reliability and other extra-functional properties of software systems. These techniques operate on models of software architecture and execution environment, and are applied at design time for early evaluation of design alternatives, especially to avoid implementing systems with insufficient quality. Virtualisation (such as operating system hypervisors or virtual machines) and multiple layers in execution environments (e.g. RAID disk array controllers on top of hard disks) are becoming increasingly common in practice and need to be reflected in the models of execution environments. However, current component meta-models do not support virtualisation and cannot model individual layers of execution environments. This means that the entire monolithic model must be recreated when different implementations of a layer must be compared to make a design decision, e.g. when comparing different Java Virtual Machines. In this paper, we present an extension of an established model-based performance prediction approach and associated tools which allow modelling and predicting state-of-the-art layered execution environments, such as disk arrays, virtual machines, and application servers. The evaluation of the presented approach shows its applicability and the resulting accuracy of the performance prediction while respecting the structure of the modelled resource environment.


Quantitative Evaluation of Systems | 2009

TimerMeter: Quantifying Properties of Software Timers for System Analysis

Michael Kuperberg; Martin Krogmann; Ralf H. Reussner

To analyse runtime behaviour and performance of software systems, accurate time measurements are obtained using timer methods. The underlying hardware timers and counters are read and processed by several software layers, which introduce overhead and delays that impact accuracy and statistical validity of fine-granular measurements. To understand and to control these impacts, the resulting accuracy of timer methods and their invocation costs must be quantified. However, quantitative properties of timer methods are usually not specified, as they are platform-specific due to differences in hardware, operating systems and virtual machines. Also, no algorithm exists for precisely quantifying the timer methods' properties, so programmers have to work with coarse estimates and cannot evaluate and compare different timer methods and timer APIs. In this paper, we present TimerMeter, a novel algorithm for platform-independent quantification of accuracy and invocation cost of any timer method, without inspecting its implementation. We evaluate our approach on timer methods provided by the Java platform API, and compare them to additional timer methods that access hardware and software timers from Java. The presented algorithm and the evaluation results benefit researchers and programmers by forming a basis for selecting appropriate timers.
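The published algorithm is more involved, but the underlying idea can be sketched: poll a timer until its reading changes to estimate the observable granularity, and time a batch of back-to-back calls to estimate invocation cost. This is a minimal stand-in, not TimerMeter itself.

```java
// Minimal sketch of timer quantification (not the TimerMeter algorithm):
// estimate the observable granularity of System.nanoTime() by spinning until
// the reading changes, and its invocation cost by timing back-to-back calls.
public class TimerProbe {
    static long smallestObservedTickNs() {
        long best = Long.MAX_VALUE;
        for (int i = 0; i < 1000; i++) {
            long t0 = System.nanoTime();
            long t1;
            do { t1 = System.nanoTime(); } while (t1 == t0); // spin until the timer advances
            best = Math.min(best, t1 - t0);
        }
        return best;
    }

    static double invocationCostNs(int calls) {
        long start = System.nanoTime();
        long sink = 0;
        for (int i = 0; i < calls; i++) sink += System.nanoTime();
        long elapsed = System.nanoTime() - start;
        if (sink == 42) System.out.println(); // use sink so the JIT cannot drop the loop
        return elapsed / (double) calls;
    }

    public static void main(String[] args) {
        System.out.println("smallest observed tick: " + smallestObservedTickNs() + " ns");
        System.out.println("mean invocation cost:   " + invocationCostNs(1_000_000) + " ns");
    }
}
```

Both numbers are platform-specific, which is exactly why the paper argues they must be measured rather than assumed.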


Quality of Software Architectures | 2011

Ginpex: deriving performance-relevant infrastructure properties through goal-oriented experiments

Michael Hauck; Michael Kuperberg; Nikolaus Huber; Ralf H. Reussner

In software performance engineering, the infrastructure on which an application is running plays a crucial role when predicting the performance of the application. Thus, to yield accurate prediction results, performance-relevant properties and behaviour of the infrastructure have to be integrated into performance models. However, capturing these properties is a cumbersome and error-prone task, as it requires carefully engineered measurements and experiments. Existing approaches for creating infrastructure performance models require manual coding of these experiments, or ignore the detailed properties in the models. The contribution of this paper is the Ginpex approach, which introduces goal-oriented and model-based specification and generation of executable performance experiments for detecting and quantifying performance relevant infrastructure properties. Ginpex provides a metamodel for experiment specification and comes with pre-defined experiment templates that provide automated experiment execution on the target platform and also automate the evaluation of the experiment results. We evaluate Ginpex using two case studies, where experiments are executed to detect the operating system scheduler timeslice length, and to quantify the CPU virtualization overhead for an application executed in a virtualized environment.
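Ginpex generates such experiments from a metamodel; as a hand-written stand-in, the sketch below times a fixed CPU-bound workload, the kind of probe one could run natively and inside a virtual machine and compare to estimate CPU virtualisation overhead. The workload and all numbers are invented for illustration.

```java
// Hand-written stand-in for a generated infrastructure experiment: time a
// fixed CPU-bound workload. Running the same probe natively and inside a VM
// and comparing durations gives a rough virtualisation-overhead estimate.
public class CpuProbe {
    static long busyWork(int iterations) {
        long acc = 1;
        for (int i = 0; i < iterations; i++) acc = acc * 31 + i; // fixed arithmetic workload
        return acc;
    }

    static double timeMs(int iterations) {
        long start = System.nanoTime();
        long sink = busyWork(iterations);
        double ms = (System.nanoTime() - start) / 1e6;
        if (sink == 0) System.out.println(); // use sink so the JIT keeps the work
        return ms;
    }

    public static void main(String[] args) {
        timeMs(1_000_000); // warm-up run so JIT compilation does not skew the result
        System.out.println("workload took " + timeMs(50_000_000) + " ms");
    }
}
```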


International Conference on Performance Engineering | 2011

Analysing the fidelity of measurements performed with hardware performance counters

Michael Kuperberg; Ralf H. Reussner

Performance evaluation requires accurate and dependable measurements of timing values. Such measurements are usually made using timer methods, but these methods are often too coarse-grained and too inaccurate. Thus, direct usage of hardware performance counters is frequently used for fine-granular measurements due to higher accuracy. However, direct access to these counters may be misleading on multicore computers because cores can be paused or core affinity changed by the operating system, resulting in misleading counter values. The contribution of this paper is the demonstration of an additional, significant flaw arising from the direct use of hardware performance counters. We demonstrate that using JNI and assembler instructions to access the Timestamp Counter from Java applications can result in grossly wrong values, even in single-threaded scenarios.
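Direct Timestamp Counter access requires JNI and assembler; as a stand-in, the kind of plausibility check the paper motivates, detecting backward jumps between successive readings, can be sketched against any counter source supplied as a functional interface:

```java
// Raw counters such as the x86 Timestamp Counter can appear to run backwards
// when a thread migrates between cores. This sketch applies a monotonicity
// check to an arbitrary counter source; a raw per-core TSC read (via JNI)
// would be the interesting source to plug in.
import java.util.function.LongSupplier;

public class CounterCheck {
    // Returns the number of backward jumps observed across 'samples' readings.
    static int backwardJumps(LongSupplier counter, int samples) {
        int jumps = 0;
        long prev = counter.getAsLong();
        for (int i = 1; i < samples; i++) {
            long cur = counter.getAsLong();
            if (cur < prev) jumps++; // a core-stable counter never goes backwards
            prev = cur;
        }
        return jumps;
    }

    public static void main(String[] args) {
        // System.nanoTime() is monotonic within a JVM, so it should report zero.
        System.out.println("backward jumps: " + backwardJumps(System::nanoTime, 100_000));
    }
}
```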


Electronic Notes in Theoretical Computer Science | 2009

Using Heuristics to Automate Parameter Generation for Benchmarking of Java Methods

Michael Kuperberg; Fouad ben Nasr Omri

Automated generation of method parameters is needed in benchmarking scenarios where manual or random generation of parameters are not suitable, do not scale or are too costly. However, for a method to execute correctly, the generated input parameters must not violate implicit semantical constraints, such as ranges of numeric parameters or the maximum length of a collection. For most methods, such constraints have no formal documentation, and human-readable documentation of them is usually incomplete and ambiguous. Random search of appropriate parameter values is possible but extremely ineffective and does not pay respect to such implicit constraints. Also, the role of polymorphism and of the method invocation targets is often not taken into account. Most existing approaches that claim automation focus on a single method and ignore the structure of the surrounding APIs where those exist. In this paper, we present HeuriGenJ, a novel heuristics-based approach for automatically finding legal and appropriate method input parameters and invocation targets, by approximating the implicit constraints imposed on them. Our approach is designed to support systematic benchmarking of API methods written in the Java language. We evaluate the presented approach by applying it to two frequently-used packages of the Java platform API, and demonstrating its coverage and effectiveness.
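The abstract does not disclose HeuriGenJ's concrete heuristics; a toy version of the general idea is to inspect a method's signature via reflection and propose candidate arguments from simple per-type heuristics. All heuristics below are invented for illustration.

```java
import java.lang.reflect.Method;
import java.lang.reflect.Parameter;
import java.util.ArrayList;
import java.util.List;

// Toy illustration of heuristic parameter generation: inspect a signature via
// reflection and propose candidates from simple per-type heuristics. The real
// HeuriGenJ heuristics approximate implicit semantic constraints; these are invented.
public class ToyParamGen {
    static Object[] propose(Method m) {
        List<Object> args = new ArrayList<>();
        for (Parameter p : m.getParameters()) {
            Class<?> t = p.getType();
            if (t == int.class) args.add(1);           // small positive int: rarely out of range
            else if (t == String.class) args.add("a"); // short non-empty string
            else args.add(null);                       // no heuristic for this type
        }
        return args.toArray();
    }

    static String demo() {
        try {
            Method substring = String.class.getMethod("substring", int.class);
            // Invoke on a non-trivial target so the heuristic index 1 is legal.
            return (String) substring.invoke("benchmark", propose(substring));
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

A benchmarking harness would call `propose` repeatedly, discard candidates that throw, and keep those that execute, which is a crude stand-in for learning the implicit constraints.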


International Conference on Performance Engineering | 2011

Metric-based selection of timer methods for accurate measurements

Michael Kuperberg; Martin Krogmann; Ralf H. Reussner

Performance measurements are often concerned with accurate recording of timing values, which requires timer methods of high quality. Evaluating the quality of a given timer method or performance counter involves analysing several properties, such as accuracy, invocation cost and timer stability. These properties are metrics with platform-dependent values, and ranking and selecting timer methods requires comparisons using multidimensional metric sets, which make the comparisons ambiguous and unnecessarily complex. To solve this problem, this paper proposes a new unified metric that allows for a simpler comparison. The one-dimensional metric is designed to capture fine-granular differences between timer methods, and normalises accuracy and other quality attributes by using CPU cycles instead of time units. The proposed metric is evaluated on all timer methods provided by Java and .NET platform APIs.
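The abstract does not give the formula, but the normalisation idea can be sketched: convert each quality attribute from nanoseconds to CPU cycles via the clock frequency, then fold the attributes into one score. The weighting below is a placeholder, not the paper's metric.

```java
// Sketch of the normalisation idea behind a unified timer metric: express
// accuracy and invocation cost in CPU cycles rather than time units, then
// combine them into a single score. The combination is a placeholder.
public class UnifiedTimerMetric {
    static double toCycles(double ns, double cpuGhz) {
        return ns * cpuGhz; // 1 ns at f GHz corresponds to f cycles
    }

    // Placeholder one-dimensional score: lower is better for both properties.
    static double score(double accuracyNs, double invocationCostNs, double cpuGhz) {
        return toCycles(accuracyNs, cpuGhz) + toCycles(invocationCostNs, cpuGhz);
    }

    public static void main(String[] args) {
        // Invented example values for two timer methods on a 3 GHz CPU.
        System.out.println("timer A: " + score(30.0, 25.0, 3.0) + " cycles");
        System.out.println("timer B: " + score(1000.0, 10.0, 3.0) + " cycles");
    }
}
```

Expressing both attributes in cycles makes scores comparable across machines with different clock speeds, which is the motivation the abstract gives for leaving time units behind.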


ACM Sigsoft Software Engineering Notes | 2011

Abstract only: metric-based selection of timer methods for accurate measurements

Michael Kuperberg; Martin Krogmann; Ralf H. Reussner


Collaboration


Dive into Michael Kuperberg's collaborations.

Top Co-Authors

Ralf H. Reussner (Karlsruhe Institute of Technology)
Martin Krogmann (Karlsruhe Institute of Technology)
Klaus Krogmann (Karlsruhe Institute of Technology)
Michael Hauck (Forschungszentrum Informatik)
Fouad ben Nasr Omri (Karlsruhe Institute of Technology)
Nikolaus Huber (Karlsruhe Institute of Technology)
Anne Martens (Karlsruhe Institute of Technology)
Jens Happe (University of Oldenburg)