
Publication

Featured research published by Alexander Wert.


International Conference on Software Engineering | 2013

Supporting swift reaction: automatically uncovering performance problems by systematic experiments

Alexander Wert; Jens Happe; Lucia Happe

Performance problems pose a significant risk to software vendors. If left undetected, they can lead to lost customers, increased operational costs, and a damaged reputation. Despite all efforts, software engineers cannot fully prevent performance problems from being introduced into an application. Detecting and resolving such problems as early as possible with minimal effort is still an open challenge in software performance engineering. In this paper, we present a novel approach for Performance Problem Diagnostics (PPD) that systematically searches for well-known performance problems (also called performance antipatterns) within an application. PPD automatically isolates the problem's root cause, hence facilitating problem solving. We applied PPD to a well-established transactional web e-commerce benchmark (TPC-W) in two deployment scenarios. PPD automatically identified four performance problems in the benchmark implementation and its deployment environment. By fixing the problems, we increased the maximum throughput of the benchmark from 1800 requests per second to more than 3500.


International Conference on Web Engineering | 2013

Multi-tenancy performance benchmark for web application platforms

Rouven Krebs; Alexander Wert; Samuel Kounev

Cloud environments reduce data center operating costs through resource sharing and economies of scale. Infrastructure-as-a-Service is one example that leverages virtualization to share infrastructure resources. However, virtualization is often insufficient to provide Software-as-a-Service applications due to the need to replicate the operating system, middleware, and application components for each customer. To overcome this problem, multi-tenancy has emerged as an architectural style that allows a single web application instance to be shared among multiple independent customers, thereby significantly improving the efficiency of Software-as-a-Service offerings. A number of platforms are available today that support the development and hosting of multi-tenant applications by encapsulating multi-tenancy-specific functionality. Although a lack of performance guarantees is one of the major obstacles to the adoption of cloud computing in general, and multi-tenant applications in particular, these kinds of applications and platforms have so far not been in the focus of the performance and benchmarking community. In this paper, we present an extended version of an existing and widely accepted application benchmark, adding support for multi-tenant platform features. The benchmark focuses on evaluating the maximum throughput and the number of tenants that can be served by a platform. We present a case study comparing virtualization and multi-tenancy. The results demonstrate the practical usability of the proposed benchmark in evaluating multi-tenant platforms and give insights that help to decide between the two sharing approaches.


Quality of Software Architectures | 2014

Automatic detection of performance anti-patterns in inter-component communications

Alexander Wert; Marius Oehler; Christoph Heger; Roozbeh Farahbod

Performance problems such as high response times in software applications have a significant effect on customer satisfaction. In enterprise applications, performance problems frequently manifest in inefficient or unnecessary communication patterns between software components, originating from poor architectural design or implementation. Due to the high manual effort involved, thorough performance analysis is often neglected in practice. To overcome this problem, automated engineering approaches are required for the detection of performance problems. In this paper, we introduce several heuristics for measurement-based detection of well-known performance anti-patterns in inter-component communications. The detection heuristics comprise load and instrumentation descriptions for performance tests as well as corresponding detection rules. We integrate these heuristics with Dynamic Spotter, a framework for automatic detection of performance problems. We evaluate our heuristics in four evaluation scenarios based on an e-commerce benchmark (TPC-W), where the heuristics detect the expected communication performance anti-patterns and pinpoint their root causes.


13th European Workshop on Performance Engineering (EPEW '16), October 5-7, 2016, Chios, Greece | 2016

Towards Performance Tooling Interoperability: An Open Format for Representing Execution Traces

Dušan Okanović; André van Hoorn; Christoph Heger; Alexander Wert; Stefan Siegl

Execution traces capture information on a software system’s runtime behavior, including data on system-internal software control flows, performance, as well as request parameters and values. In research and industrial practice, execution traces serve as an important basis for model-based and measurement-based performance evaluation, e.g., for application performance monitoring (APM), extraction of descriptive and prescriptive models, as well as problem detection and diagnosis. A number of commercial and open-source APM tools that allow capturing execution traces within distributed software systems are available. However, each of these tools uses its own (proprietary) format, which means that each approach building on execution trace data is tool-specific.


European Dependable Computing Conference | 2016

Expert-Guided Automatic Diagnosis of Performance Problems in Enterprise Applications

Christoph Heger; André van Hoorn; Dušan Okanović; Stefan Siegl; Alexander Wert

Application performance management (APM) is a necessity for detecting and solving performance problems during the operation of enterprise applications. While existing tools provide alerting and visualization capabilities when performance requirements are violated during operation, isolating and diagnosing a problem's real root cause is the responsibility of scarce performance experts, often resulting in a tedious and recurring task. The main challenges for APM adoption in practice are that the initial setup and maintenance of APM, and particularly the diagnosis of performance problems, are error-prone, costly, and require high manual effort and expertise. In this paper, we present preliminary work on diagnoseIT, an approach that utilizes formalized APM expert knowledge to automate the aforementioned recurring APM activities.


International Conference on Performance Engineering | 2012

Integrating software performance curves with the palladio component model

Alexander Wert; Jens Happe; Dennis Westermann

Software performance engineering for enterprise applications is becoming more and more challenging as the size and complexity of software landscapes increase. Systems are built on powerful middleware platforms, existing software components, and third-party services. The internal structure of such a software basis is often unknown, especially if business and system boundaries are crossed. Existing model-driven performance engineering approaches realise a pure top-down prediction approach: software architects have to provide a complete model of their system in order to conduct performance analyses. Measurement-based approaches depend on the availability of the complete system under test. In this paper, we propose a concept for combining model-driven and measurement-based performance engineering. We integrate software performance curves with the Palladio Component Model (PCM), an advanced model-based performance prediction approach, in order to enable the evaluation of enterprise applications that depend on a large software basis.


Proceedings of the 18th International Doctoral Symposium on Components and Architecture | 2013

Performance problem diagnostics by systematic experimentation

Alexander Wert

Performance problems such as high response times in software applications have a significant effect on customer satisfaction. However, detecting performance problems is still a highly manual and cumbersome process requiring deep expertise in performance engineering. Uncovering performance problems and finding their root causes are two challenging problems that are not solved yet. Existing approaches either focus on certain types of performance problems, do not conduct root cause analysis, or consider performance only under average load scenarios. In this PhD research proposal, we pursue the goal of supporting software engineers in uncovering performance problems and identifying their root causes. Based on a novel way of structuring the knowledge about performance problems, we propose an automatic, experimentation-based approach for the diagnostics of performance problems. Utilizing monitoring data from operations, we aim at deriving performance tests that foster the detection of performance problems. By applying the methodology to software projects within SAP, we strive to ensure a profound evaluation of the proposed approach.


International Conference on Performance Engineering | 2015

Generic Instrumentation and Monitoring Description for Software Performance Evaluation

Alexander Wert; Henning Schulz; Christoph Heger; Roozbeh Farahbod

Instrumentation and monitoring play an important role in measurement-based performance evaluation of software systems. To this end, a large body of instrumentation and monitoring tools exists; however, these tools depend on proprietary and programming-language-specific instrumentation languages. Due to the lack of a common instrumentation language, it is difficult and expensive to port otherwise generic measurement-based performance evaluation approaches between different application contexts. In this work-in-progress paper, we address this issue by introducing a performance-oriented, generic meta-model for the application-independent and tool-independent description of instrumentation instructions. Decoupling the instrumentation description from its realization in a concrete application context by a concrete instrumentation tool allows measurement-based performance evaluation approaches to be designed in a generic and portable way.


Automation of Software Test | 2015

AIM: adaptable instrumentation and monitoring for automated software performance analysis

Alexander Wert; Henning Schulz; Christoph Heger

Instrumentation and monitoring play an important role in measurement-based performance analysis of software systems. However, in practice, the performance overhead of extensive instrumentation is not negligible. Experiment-based performance analysis overcomes this problem through a series of experiments on selectively instrumented code, but requires additional manual effort to adjust the required instrumentation and hence introduces additional costs. Automating the experiments and the selective instrumentation can massively reduce the costs of performance analysis. Such automation, however, requires the capability of dynamically adapting instrumentation instructions. In this paper, we address this issue by introducing AIM, a novel instrumentation and monitoring approach for automated software performance analysis. We apply AIM to automate the derivation of resource demands for an architectural performance model, showing that adaptable instrumentation leads to more accurate measurements compared to existing monitoring approaches.


International Conference on Performance Engineering | 2015

DynamicSpotter: Automatic, Experiment-based Diagnostics of Performance Problems (Invited Demonstration Paper)

Alexander Wert

Performance problems in enterprise software applications can have a significant effect on customer satisfaction. Detecting software performance problems and diagnosing their root causes in the testing phase, as part of software development, is of great importance in order to prevent unexpected performance behaviour of the software during operation. DynamicSpotter is a framework for experiment-based diagnosis of performance problems that detects performance problems and their root causes fully automatically. By providing different kinds of extension points, DynamicSpotter allows external measurement tools to be utilized for the execution of performance tests. Building upon an extensible knowledge base, DynamicSpotter provides means to extend its diagnostic capabilities with respect to the detection of additional types of performance problems.

Collaboration


Dive into Alexander Wert's collaborations.

Top Co-Authors

Christoph Heger
Karlsruhe Institute of Technology

Jens Happe
University of Oldenburg

Anne Koziolek
Karlsruhe Institute of Technology