Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alexander Grebhahn is active.

Publication


Featured research published by Alexander Grebhahn.


Foundations of Software Engineering | 2015

Performance-influence models for highly configurable systems

Norbert Siegmund; Alexander Grebhahn; Sven Apel; Christian Kästner

Almost every complex software system today is configurable. While configurability has many benefits, it challenges performance prediction, optimization, and debugging. Often, the influences of individual configuration options on performance are unknown. Worse, configuration options may interact, giving rise to a configuration space of possibly exponential size. Addressing this challenge, we propose an approach that derives a performance-influence model for a given configurable system, describing all relevant influences of configuration options and their interactions. Our approach combines machine-learning and sampling heuristics in a novel way. It improves over standard techniques in that it (1) represents influences of options and their interactions explicitly (which eases debugging), (2) smoothly integrates binary and numeric configuration options for the first time, (3) incorporates domain knowledge, if available (which eases learning and increases accuracy), (4) considers complex constraints among options, and (5) systematically reduces the solution space to a tractable size. A series of experiments demonstrates the feasibility of our approach in terms of the accuracy of the models learned as well as the accuracy of the performance predictions one can make with them.
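
To illustrate what such a performance-influence model looks like, the following sketch fits a sparse linear model over binary options and their pairwise interactions from a hypothetical set of measured configurations. The option names and measurements are invented for illustration, and the paper's actual learning and sampling heuristics are more elaborate.

```python
# Minimal sketch: learn an interpretable performance-influence model from
# sampled configurations. Options and measurements are hypothetical.
import itertools
import numpy as np
from sklearn.linear_model import Lasso

options = ["compression", "encryption", "cache"]            # binary options
configs = [list(c) for c in itertools.product([0, 1], repeat=len(options))]

# Hypothetical measured performance (seconds) for the eight configurations.
perf = np.array([10.0, 12.5, 14.0, 17.5, 11.0, 13.5, 15.0, 19.5])

def features(cfg):
    # Individual options plus pairwise interaction terms, so that each
    # learned coefficient is directly attributable to an option/interaction.
    return list(cfg) + [a * b for a, b in itertools.combinations(cfg, 2)]

X = np.array([features(c) for c in configs])
model = Lasso(alpha=0.01).fit(X, perf)

names = options + [f"{a}*{b}" for a, b in itertools.combinations(options, 2)]
print(f"base performance: {model.intercept_:.2f}")
for name, coef in zip(names, model.coef_):
    print(f"{name}: {coef:+.2f}")
```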


Software: Practice and Experience | 2013

JavAdaptor: Flexible runtime updates of Java applications

Mario Pukall; Christian Kästner; Walter Cazzola; Sebastian Götz; Alexander Grebhahn; Reimar Schröter; Gunter Saake

Software is changed frequently during its life cycle. New requirements come, and bugs must be fixed. To update an application, it usually must be stopped, patched, and restarted. This causes periods of unavailability, which is always a problem for highly available applications. Even for the development of complex applications, restarts to test new program parts can be time-consuming and annoying. Thus, we aim at dynamic software updates to update programs at runtime. There is a large body of research on dynamic software updates, but so far, existing approaches have shortcomings either in terms of flexibility or performance. In addition, some of them depend on specific runtime environments and dictate the program's architecture. We present JavAdaptor, the first runtime update approach based on Java that (a) offers flexible dynamic software updates, (b) is platform independent, (c) introduces only minimal performance overhead, and (d) does not dictate the program architecture. JavAdaptor combines schema-changing class replacements by class renaming and caller updates with Java HotSwap using containers and proxies. It runs on top of all major standard Java virtual machines. We evaluate our approach's applicability and performance in non-trivial case studies and compare it with existing dynamic software update approaches.
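
The core indirection idea (callers keep a stable reference while the underlying class version is swapped) can be sketched in a language-agnostic way. The example below is not JavAdaptor's actual HotSwap-based Java mechanism; it only illustrates the proxy-plus-class-renaming principle with hypothetical names.

```python
# Language-agnostic sketch of the proxy-indirection idea behind runtime
# class replacement. Hypothetical names; JavAdaptor itself performs
# schema-changing updates on the JVM via class renaming and HotSwap.
class GreeterV1:
    def greet(self):
        return "hello"

class GreeterV2:                          # the updated ("renamed") class version
    def greet(self):
        return "hello, world"

class GreeterProxy:
    """Callers hold the proxy; an update only re-points the delegate."""
    def __init__(self, impl_cls):
        self._impl = impl_cls()

    def update(self, new_impl_cls):       # caller-transparent swap
        self._impl = new_impl_cls()

    def greet(self):
        return self._impl.greet()

greeter = GreeterProxy(GreeterV1)
print(greeter.greet())                    # "hello"
greeter.update(GreeterV2)                 # dynamic update at runtime
print(greeter.greet())                    # "hello, world"
```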


International Conference on Software Engineering | 2011

JavAdaptor: unrestricted dynamic software updates for Java

Mario Pukall; Alexander Grebhahn; Reimar Schröter; Christian Kästner; Walter Cazzola; Sebastian Götz

Dynamic software updates (DSU) are one of the top-most features requested by developers and users. As a result, DSU is already standard in many dynamic programming languages. But it is not standard in statically typed languages such as Java. Even though it ranks third on Oracle's current request for enhancement (RFE) list, DSU support in Java is very limited. Therefore, over the years, many different DSU approaches for Java have been proposed. Nevertheless, DSU for Java is still an active field of research, because most of the existing approaches are too restrictive. Some of the approaches have shortcomings either in terms of flexibility or performance, whereas others are platform dependent or dictate the program's architecture. With JavAdaptor, we present the first DSU approach that comes without those restrictions. We will demonstrate JavAdaptor based on the well-known arcade game Snake, which we will update stepwise at runtime.


European Conference on Parallel Processing | 2014

ExaStencils: Advanced Stencil-Code Engineering

Christian Lengauer; Sven Apel; Matthias Bolten; Armin Größlinger; Frank Hannig; Harald Köstler; Ulrich Rüde; Jürgen Teich; Alexander Grebhahn; Stefan Kronawitter; Sebastian Kuckuk; Hannah Rittich; Christian Schmitt

Project ExaStencils pursues a radically new approach to stencil-code engineering. Present-day stencil codes are implemented in general-purpose programming languages, such as Fortran, C, or Java, or derivatives thereof, and harnesses for parallelism, such as OpenMP, OpenCL, or MPI. ExaStencils favors a much more domain-specific approach with languages at several layers of abstraction, the most abstract being the mathematical formulation, the most concrete the optimized target code. At every layer, the corresponding language expresses not only computational directives but also domain knowledge of the problem and platform to be leveraged for optimization. This approach will enable highly automated code generation at all layers and has been demonstrated successfully before in the U.S. projects FFTW and SPIRAL for certain linear transforms.
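
For readers unfamiliar with the term, a stencil code updates each grid point from a fixed neighborhood pattern. Below is a minimal hand-written five-point Jacobi sweep on a structured 2D grid; it stands in for the kind of kernel that ExaStencils aims to generate from more abstract layers. Grid size, boundary values, and sweep count are arbitrary.

```python
# A hand-written five-point Jacobi sweep on a structured 2D grid: the kind
# of stencil kernel that ExaStencils generates from higher-level layers.
import numpy as np

n = 64
u = np.zeros((n, n))
u[0, :] = 1.0                             # hypothetical Dirichlet boundary values

for _ in range(100):                      # Jacobi sweeps for the Laplace equation
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])

print(u.mean())
```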


Very Large Data Bases | 2013

QuEval: beyond high-dimensional indexing à la carte

Martin Schäler; Alexander Grebhahn; Reimar Schröter; Sandro Schulze; Veit Köppen; Gunter Saake

In the recent past, the amount of high-dimensional data, such as feature vectors extracted from multimedia data, increased dramatically. A large variety of indexes have been proposed to store and access such data efficiently. However, due to specific requirements of a certain use case, choosing an adequate index structure is a complex and time-consuming task. This may be due to engineering challenges or open research questions. To overcome this limitation, we present QuEval, an open-source framework that can be flexibly extended w.r.t. index structures, distance metrics, and data sets. QuEval provides a unified environment for a sound evaluation of different indexes, for instance, to support tuning of indexes. In an empirical evaluation, we show how to apply our framework, motivate benefits, and demonstrate analysis possibilities.
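
The kind of evaluation such a framework enables can be sketched as follows: index the same set of hypothetical high-dimensional feature vectors with two different access paths and compare nearest-neighbor query times. The sketch uses SciPy's KD-tree against a brute-force scan; QuEval's own index implementations, distance metrics, and measurement methodology are more extensive.

```python
# Sketch of an index-structure comparison in the spirit of QuEval:
# brute-force scan vs. a KD-tree on hypothetical 16-dimensional vectors.
import time
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
data = rng.random((50_000, 16))           # feature vectors to be indexed
queries = rng.random((100, 16))

t0 = time.perf_counter()
for q in queries:                         # brute force: full distance scan
    np.argmin(np.linalg.norm(data - q, axis=1))
brute = time.perf_counter() - t0

tree = cKDTree(data)                      # build the index once
t0 = time.perf_counter()
tree.query(queries, k=1)                  # nearest neighbors via the index
indexed = time.perf_counter() - t0

print(f"brute force: {brute:.3f}s  kd-tree: {indexed:.3f}s")
```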


International Conference on Software Engineering | 2015

Presence-condition simplification in highly configurable systems

Alexander von Rhein; Alexander Grebhahn; Sven Apel; Norbert Siegmund; Dirk Beyer; Thorsten Berger

For the analysis of highly configurable systems, analysis approaches need to take the inherent variability of these systems into account. The notion of presence conditions is central to such approaches. A presence condition specifies a subset of system configurations in which a certain artifact or a concern of interest is present (e.g., a defect associated with this subset). In this paper, we introduce and analyze the problem of presence-condition simplification. A key observation is that presence conditions often contain redundant information, which can be safely removed in the interest of simplicity and efficiency. We present a formalization of the problem, discuss application scenarios, compare different algorithms for solving the problem, and empirically evaluate the algorithms by means of a set of substantial case studies.
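
As a small illustration, a presence condition is a propositional formula over configuration options, and simplification removes redundant parts. The sketch below uses SymPy's simplify_logic on an invented condition over hypothetical options A, B, and C.

```python
# Sketch: simplifying a presence condition with SymPy. A, B, C are
# hypothetical configuration options.
from sympy import symbols
from sympy.logic.boolalg import And, Or, Not, simplify_logic

A, B, C = symbols("A B C")

# Redundant presence condition: (A & B) | (A & B & C) | (A & B & ~C)
pc = Or(And(A, B), And(A, B, C), And(A, B, Not(C)))
print(simplify_logic(pc))                 # prints: A & B
```

Simplification relative to a variability model (i.e., under a set of feature constraints), as studied in the paper, requires passing that context to the simplifier and is not shown here.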


Parallel Processing Letters | 2014

Experiments on Optimizing the Performance of Stencil Codes with SPL Conqueror

Alexander Grebhahn; Sebastian Kuckuk; Christian Schmitt; Harald Köstler; Norbert Siegmund; Sven Apel; Frank Hannig; Jürgen Teich

A standard technique for numerically solving elliptic partial differential equations on structured grids is to discretize them, and, then, to apply an efficient geometric multi-grid solver. Unfortunately, finding the optimal choice of multi-grid components and parameter settings is challenging and existing auto-tuning techniques fail to explain performance-optimal settings. To improve the state of the art, we explore whether recent work on optimizing configurations of product lines can be applied to the stencil-code domain. In particular, we extend the domain-independent tool SPL Conqueror in an empirical study to predict the performance-optimal configurations of three geometric multi-grid stencil codes: a program using HIPAcc, the evaluation prototype HSMGP, and a program using DUNE. For HIPAcc, we reach a prediction accuracy of 96%, on average, measuring only 21.4% of all configurations; we predict a configuration that is nearly optimal after measuring less than 0.3% of all configurations. For HSMGP, w...
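
The optimization step itself can be sketched as: once a performance model has been learned from a small measured sample (as in the performance-influence sketch above), predict the performance of every configuration and pick the predicted minimum. The model and option names below are invented stand-ins, not SPL Conqueror's learned models.

```python
# Sketch: pick a predicted-optimal configuration from a learned model
# instead of measuring the whole space. Model and names are invented.
import itertools

def predicted_time(cfg):
    # Stand-in for a learned performance-influence model:
    # base + option influences + one interaction term.
    smoother, blocking, vectorize = cfg
    return 10.0 - 2.0 * smoother - 1.5 * vectorize + 1.0 * smoother * blocking

space = list(itertools.product([0, 1], repeat=3))
best = min(space, key=predicted_time)
print(best, predicted_time(best))         # (1, 0, 1) with predicted time 6.5
```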


Concurrency and Computation: Practice and Experience | 2017

Performance‐influence models of multigrid methods: A case study on triangular grids

Alexander Grebhahn; Carmen Rodrigo; Norbert Siegmund; Francisco José Gaspar; Sven Apel

Multigrid methods are among the most efficient algorithms for solving discretized partial differential equations. Typically, a multigrid system offers various configuration options to tune performance for different applications and hardware platforms. However, knowing the best performing configuration in advance is difficult, because measuring all multigrid system variants is costly. Instead of direct measurements, we use machine learning to predict the performance of the variants. Selecting a representative set of configurations for learning is nontrivial, but key to prediction accuracy. We investigate different sampling strategies to determine the tradeoff between accuracy and measurement effort. In a nutshell, we learn a performance-influence model that captures the influences of configuration options and their interactions on the time to perform a multigrid iteration and relate this to existing domain knowledge. In an experiment on a multigrid system working on triangular grids, we found that combining pair-wise sampling with the D-optimal experimental design for selecting a learning set yields the most accurate predictions. After measuring less than 1% of all variants, we were able to predict the performance of all variants with an accuracy of 95.9%. Furthermore, we were able to verify almost all knowledge on the performance behavior of multigrid methods provided by two experts.
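
The D-optimal selection of a learning set can be approximated with a simple greedy heuristic: repeatedly add the candidate configuration that most increases the log-determinant of the design matrix's information matrix. The sketch below is a toy version over hypothetical binary options, not the exact design procedure used in the study.

```python
# Toy greedy sketch of D-optimal sample selection: add the configuration
# that maximizes log det(X^T X) of the design matrix at each step.
# Hypothetical binary options; not the exact procedure used in the study.
import itertools
import numpy as np

candidates = np.array(list(itertools.product([0, 1], repeat=5)), dtype=float)
rows = np.hstack([np.ones((len(candidates), 1)), candidates])   # intercept + options

chosen = []
for _ in range(8):                              # measurement budget: 8 configurations
    def gain(i):
        trial = rows[chosen + [i]]
        info = trial.T @ trial + 1e-9 * np.eye(rows.shape[1])   # ridge for stability
        return np.linalg.slogdet(info)[1]
    best = max((i for i in range(len(rows)) if i not in chosen), key=gain)
    chosen.append(best)

print(candidates[chosen])                       # configurations selected for measurement
```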


Software for Exascale Computing | 2016

Performance Prediction of Multigrid-Solver Configurations

Alexander Grebhahn; Norbert Siegmund; Harald Köstler; Sven Apel

Geometric multigrid solvers are among the most efficient methods for solving partial differential equations. To optimize performance, developers have to select an appropriate combination of algorithms for the hardware and problem at hand. Since a manual configuration of a multigrid solver is tedious and does not scale for a large number of different hardware platforms, we have been developing a code generator that automatically generates a multigrid-solver configuration tailored to a given problem. However, identifying a performance-optimal solver configuration is typically a non-trivial task, because there is a large number of configuration options from which developers can choose. As a solution, we present a machine-learning approach that allows developers to make predictions of the performance of solver configurations, based on quantifying the influence of individual configuration options and interactions between them. As our preliminary results on three configurable multigrid solvers were encouraging, we focus on a larger, non-trivial case study in this work. Furthermore, we discuss and demonstrate how to integrate domain knowledge into our machine-learning approach to improve accuracy and scalability, and we explore how the performance models we learn can help developers and domain experts understand their system.
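
To make concrete what a multigrid-solver configuration is, the textbook sketch below implements a tiny 1D geometric multigrid V-cycle and exposes typical options (number of pre- and post-smoothing steps, Jacobi weight) as parameters; these are the kinds of options whose performance influence the approach predicts. It is a generic illustration, not the generated solvers studied in the paper.

```python
# Tiny 1D geometric multigrid V-cycle for -u'' = f with typical configuration
# options (pre/post smoothing steps, Jacobi weight) exposed as parameters.
import numpy as np

def smooth(u, f, h, steps, omega):
    for _ in range(steps):                          # weighted Jacobi smoothing
        u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def v_cycle(u, f, h, pre=2, post=2, omega=0.8):
    if len(u) <= 3:                                 # coarsest grid: one unknown, solve exactly
        u[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        return u
    u = smooth(u, f, h, pre, omega)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)     # residual
    rc = r[::2].copy()                              # restriction by full weighting
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h, pre, post, omega)     # coarse-grid correction
    u[::2] += ec                                    # prolongation: inject at coinciding points
    u[1:-1:2] += 0.5 * (ec[:-1] + ec[1:])           # ... and interpolate in between
    return smooth(u, f, h, post, omega)

n, h = 129, 1.0 / 128
u, f = np.zeros(n), np.ones(n)
for _ in range(10):
    u = v_cycle(u, f, h, pre=2, post=2, omega=0.8)
print(abs(u).max())                                 # approaches 0.125, the max of x*(1-x)/2
```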


Software and Systems Modeling | 2018

Tradeoffs in modeling performance of highly configurable software systems

Sergiy S. Kolesnikov; Norbert Siegmund; Christian Kästner; Alexander Grebhahn; Sven Apel

Modeling the performance of a highly configurable software system requires capturing the influences of its configuration options and their interactions on the system’s performance. Performance-influence models quantify these influences, explaining this way the performance behavior of a configurable system as a whole. To be useful in practice, a performance-influence model should have a low prediction error, small model size, and reasonable computation time. Because of the inherent tradeoffs among these properties, optimizing for one property may negatively influence the others. It is unclear, though, to what extent these tradeoffs manifest themselves in practice, that is, whether a large configuration space can be described accurately only with large models and significant resource investment. By means of 10 real-world highly configurable systems from different domains, we have systematically studied the tradeoffs between the three properties. Surprisingly, we found that the tradeoffs between prediction error and model size and between prediction error and computation time are rather marginal. That is, we can learn accurate and small models in reasonable time, so that one performance-influence model can fit different use cases, such as program comprehension and performance prediction. We further investigated the reasons for why the tradeoffs are marginal. We found that interactions among four or more configuration options have only a minor influence on the prediction error and that ignoring them when learning a performance-influence model can save a substantial amount of computation time, while keeping the model small without considerably increasing the prediction error. This is an important insight for new sampling and learning techniques as they can focus on specific regions of the configuration space and find a sweet spot between accuracy and effort. We further analyzed the causes for the configuration options and their interactions having the observed influences on the systems’ performance. We were able to identify several patterns across subject systems, such as dominant configuration options and data pipelines, that explain the influences of highly influential configuration options and interactions, and give further insights into the domain of highly configurable systems.
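
The knob behind this tradeoff is essentially the maximum interaction degree considered during learning. The sketch below uses scikit-learn's PolynomialFeatures in interaction-only mode to show how quickly the number of candidate model terms, and hence learning effort, grows with that degree; the number of options is hypothetical.

```python
# Sketch: growth of candidate model terms with the maximum interaction
# degree considered during learning. The number of options is hypothetical.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

n_options = 12
X = np.random.randint(0, 2, size=(50, n_options))   # sampled binary configurations

for degree in range(1, 5):
    pf = PolynomialFeatures(degree=degree, interaction_only=True, include_bias=False)
    n_terms = pf.fit_transform(X).shape[1]
    print(f"interactions up to degree {degree}: {n_terms} candidate terms")
```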

Collaboration


Dive into Alexander Grebhahn's collaborations.

Top Co-Authors

Martin Schäler, Otto-von-Guericke University Magdeburg
Gunter Saake, Otto-von-Guericke University Magdeburg
Veit Köppen, Otto-von-Guericke University Magdeburg
Harald Köstler, University of Erlangen-Nuremberg
Reimar Schröter, Otto-von-Guericke University Magdeburg
Christian Schmitt, University of Erlangen-Nuremberg
Sebastian Kuckuk, University of Erlangen-Nuremberg