Andreas Sewe
Technische Universität Darmstadt
Publications
Featured research published by Andreas Sewe.
International Conference on Software Engineering | 2011
Eric Bodden; Andreas Sewe; Jan Sinschek; Hela Oueslati; Mira Mezini
Static program analyses and transformations for Java face many problems when analyzing programs that use reflection or custom class loaders: How can a static analysis know which reflective calls the program will execute? How can it get hold of classes that the program loads from remote locations or even generates on the fly? And if the analysis transforms classes, how can these classes be re-inserted into a program that uses custom class loaders? In this paper, we present TamiFlex, a tool chain that offers a partial but often effective solution to these problems. With TamiFlex, programmers can use existing static-analysis tools to produce results that are sound at least with respect to a set of recorded program runs. TamiFlex inserts runtime checks into the program that warn the user in case the program executes reflective calls that the analysis did not take into account. TamiFlex further allows programmers to re-insert offline-transformed classes into a program. We evaluate TamiFlex in two scenarios: benchmarking with the DaCapo benchmark suite and analysing large-scale interactive applications. For the latter, TamiFlex significantly improves code coverage of the static analyses, while for the former our approach even appears complete: the inserted runtime checks issue no warning. Hence, for the first time, TamiFlex enables sound static whole-program analyses on DaCapo. During this process, TamiFlex usually incurs less than 10% runtime overhead.
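As a rough illustration of the problem the paper tackles (not of TamiFlex's own API), the sketch below shows a reflective call whose target a static analysis cannot resolve from the program text alone; the class name and comments are illustrative assumptions.

// Illustration only: the reflective target is known solely at run time, which is
// why purely static call-graph construction cannot resolve it.
import java.lang.reflect.Method;

public class ReflectiveDispatch {
    public static void main(String[] args) throws Exception {
        // The class name comes from outside the program text, e.g. a configuration
        // file or a command-line argument.
        String className = args.length > 0 ? args[0] : "java.util.ArrayList";
        Class<?> clazz = Class.forName(className);          // target class unknown statically
        Object instance = clazz.getDeclaredConstructor().newInstance();
        Method toString = clazz.getMethod("toString");      // target method resolved at run time
        System.out.println(toString.invoke(instance));
        // A record-and-check approach logs the observed (class, method) pairs during
        // test runs, feeds them to the static analysis, and warns at run time if a
        // pair outside the recorded set is ever executed.
    }
}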
Conference on Object-Oriented Programming Systems, Languages, and Applications | 2011
Andreas Sewe; Mira Mezini; Aibek Sarimbekov; Walter Binder
Originally conceived as the target platform for Java alone, the Java Virtual Machine (JVM) has since been targeted by other languages, one of which is Scala. This trend, however, is not yet reflected by the benchmark suites commonly used in JVM research. In this paper, we thus present the design and analysis of the first full-fledged benchmark suite for Scala. We furthermore compare the benchmarks contained therein with those from the well-known DaCapo 9.12 benchmark suite and show where Scala and Java code differ, and where they do not.
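For readers unfamiliar with JVM benchmark suites, the following is a minimal, hypothetical harness sketch in the spirit of DaCapo-style suites; the class and method names are illustrative and are not taken from the Scala suite itself.

// Hypothetical harness sketch: warm a benchmark up so the JIT compiles the hot
// code, then report the average time of the measured iterations.
public class TinyHarness {
    interface Benchmark { void iterate(); }

    static long averageNanos(Benchmark b, int warmup, int measured) {
        for (int i = 0; i < warmup; i++) b.iterate();
        long start = System.nanoTime();
        for (int i = 0; i < measured; i++) b.iterate();
        return (System.nanoTime() - start) / measured;
    }

    public static void main(String[] args) {
        Benchmark sort = () ->
                java.util.Arrays.sort(new java.util.Random(42).ints(100_000).toArray());
        System.out.println("avg ns/iteration: " + averageNanos(sort, 10, 20));
    }
}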
Principles and Practice of Programming in Java | 2011
Aibek Sarimbekov; Andreas Sewe; Walter Binder; Philippe Moret; Martin Schoeberl; Mira Mezini
Calling-context profiles and dynamic metrics at the bytecode level are important for profiling, workload characterization, program comprehension, and reverse engineering. Prevailing tools for collecting calling-context profiles or dynamic bytecode metrics often provide only incomplete information or suffer from limited compatibility with standard JVMs. However, completeness and accuracy of the profiles are essential for tasks such as workload characterization, and compatibility with standard JVMs is important to ensure that complex workloads can be executed. In this paper, we present the design and implementation of JP2, a new tool that profiles both the inter- and intra-procedural control flow of workloads on standard JVMs. JP2 produces calling-context profiles preserving call-site information, as well as execution statistics at the level of individual basic blocks of code. JP2 is complemented with scripts that compute various dynamic bytecode metrics from the profiles. As a case study and tutorial on the use of JP2, we use it for cross-profiling for an embedded Java processor.
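To make the notion of a call-site-aware calling-context profile concrete, here is a small sketch of a calling-context-tree node; the names and layout are assumptions for illustration, not JP2's actual data structures.

// Sketch of a calling-context-tree (CCT) node: children are keyed by callee and
// call site, and each node carries execution counts for its method's basic blocks.
import java.util.HashMap;
import java.util.Map;

public class CCTNode {
    final String method;              // e.g. "com/example/Foo.bar(I)V"
    final int callSiteBytecodeIndex;  // where in the caller this context was entered
    final Map<Long, CCTNode> children = new HashMap<>();
    long invocations;                 // how often this calling context was entered
    final long[] basicBlockCounts;    // executions per basic block of 'method'

    CCTNode(String method, int callSiteBytecodeIndex, int basicBlocks) {
        this.method = method;
        this.callSiteBytecodeIndex = callSiteBytecodeIndex;
        this.basicBlockCounts = new long[basicBlocks];
    }

    // Keying children by (callee id, call-site index) keeps two calls to the same
    // callee from different sites in distinct calling contexts.
    CCTNode childFor(int calleeId, int callSite, String calleeName, int calleeBlocks) {
        long key = ((long) calleeId << 32) | (callSite & 0xffffffffL);
        return children.computeIfAbsent(key,
                k -> new CCTNode(calleeName, callSite, calleeBlocks));
    }
}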
International Symposium on Memory Management | 2012
Andreas Sewe; Mira Mezini; Aibek Sarimbekov; Danilo Ansaloni; Walter Binder; Nathan P. Ricci; Samuel Z. Guyer
While often designed with a single language in mind, managed runtimes like the Java virtual machine (JVM) have become the target of not one but many languages, all of which benefit from the runtime's services. One of these services is automatic memory management. In this paper, we compare and contrast the memory behaviour of programs written in Java and Scala, respectively, two languages which both target the same platform: the JVM. We analyze both core object demographics, like object lifetimes, and secondary properties of objects, like their associated monitors and identity hash codes. We find that objects in Scala programs have lower survival rates and higher rates of immutability, which is only partly explained by the memory behaviour of objects representing closures or boxed primitives. Other metrics vary more by benchmark than by language.
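As a small, self-contained illustration of the boxed-primitives point (the code below is not taken from the paper's benchmarks), autoboxing in generic collections produces large numbers of short-lived, immutable objects:

// Boxing demo: each add() may allocate an Integer (outside the small cache of
// values near zero); the boxed objects are immutable and typically die young.
public class BoxingDemo {
    public static void main(String[] args) {
        java.util.List<Integer> values = new java.util.ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            values.add(i);   // autoboxing: int -> Integer allocation
        }
        long sum = 0;
        for (Integer v : values) {
            sum += v;        // unboxing; the Integer objects become garbage soon after
        }
        System.out.println(sum);
    }
}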
The Journal of Object Technology | 2012
Christoph Bockisch; Andreas Sewe; Haihan Yin; Mira Mezini; Mehmet Aksit
New programming languages supporting advanced modularization mechanisms are often implemented as transformations to the imperative intermediate representation of an already established language. But while their core constructs largely overlap in semantics, re-using the corresponding transformations requires re-using their syntax as well; this is limiting. In the ALIA4J approach, we identified dispatching as fundamental to most modularization mechanisms and provide a meta-model of dispatching as a rich, extensible intermediate language. Based on this meta-model, one can modularly implement the semantics of dispatching-related constructs. From said constructs a single execution model can then be derived which facilitates interpretation, bytecode generation, and even optimized machine-code generation. We show the suitability of our approach by mapping five popular languages to this meta-model and find that most of their constructs are shared across multiple languages. We furthermore present implementations of the three different execution strategies together with a generic visual debugger available to any ALIA4J-based language implementation. Intertwined with this paper is a tutorial-style running example that illustrates our approach.
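As a loose, illustrative sketch of the idea that dispatch underlies many modularization constructs (the types below are assumptions, not ALIA4J's actual meta-model classes), both a pointcut-advice pair and a predicate-dispatched method can be seen as an action guarded by a predicate over the dynamic state at a dispatch site:

// Illustrative sketch: an "attachment" couples an action with a predicate over
// the receiver and arguments at a dispatch site.
import java.util.List;

public class DispatchSketch {
    interface DispatchPredicate { boolean holds(Object receiver, Object[] args); }
    interface Action { void perform(Object receiver, Object[] args); }

    static class Attachment {
        final DispatchPredicate predicate;
        final Action action;
        Attachment(DispatchPredicate predicate, Action action) {
            this.predicate = predicate;
            this.action = action;
        }
    }

    // A pointcut-advice pair ("log every call with a String receiver") and a guarded
    // method both fit the same predicate-plus-action shape.
    public static void main(String[] args) {
        Attachment logging = new Attachment(
                (recv, as) -> recv instanceof String,
                (recv, as) -> System.out.println("advice ran for receiver: " + recv));
        for (Attachment a : List.of(logging)) {
            if (a.predicate.holds("hello", new Object[0])) {
                a.action.perform("hello", new Object[0]);
            }
        }
    }
}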
TOOLS'12 Proceedings of the 50th International Conference on Objects, Models, Components, Patterns | 2012
Yudi Zheng; Danilo Ansaloni; Lukáš Marek; Andreas Sewe; Walter Binder; Alex Villazón; Petr Tuma; Zhengwei Qi; Mira Mezini
Bytecode instrumentation is a key technique for the implementation of dynamic program analysis tools such as profilers and debuggers. Traditionally, bytecode instrumentation has been supported by low-level bytecode engineering libraries that are difficult to use. Recently, the domain-specific aspect language DiSL has been proposed to provide high-level abstractions for the rapid development of efficient bytecode instrumentations. While DiSL supports user-defined expressions that are evaluated at weave-time, the DiSL programming model requires these expressions to be implemented in separate classes, thus increasing code size and impairing code readability and maintenance. In addition, the DiSL weaver may produce a significant amount of dead code, which may impair some optimizations performed by the runtime. In this paper we introduce Turbo, a novel partial evaluator for DiSL, which processes the generated instrumentation code, performs constant propagation, conditional reduction, and pattern-based code simplification, and executes pure methods at weave-time. With Turbo, it is often unnecessary to wrap expressions for evaluation at weave-time in separate classes, thus simplifying the programming model. We present Turbo's partial evaluation algorithm and illustrate its benefits with several case studies. We evaluate the impact of Turbo on weave-time performance and on runtime performance of the instrumented application.
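To illustrate the kind of weave-time simplification described above (constant propagation and conditional reduction), here is a toy constant folder over a tiny expression tree; it is a sketch of the general technique only, not Turbo's implementation or DiSL's API.

// Toy weave-time simplifier: fold constant additions and drop branches whose
// condition is already known.
public class ToyPartialEvaluator {
    static abstract class Expr {}
    static class Const extends Expr { final int value; Const(int v) { value = v; } }
    static class Add extends Expr { final Expr l, r; Add(Expr l, Expr r) { this.l = l; this.r = r; } }
    static class IfNonZero extends Expr {
        final Expr cond, then, other;
        IfNonZero(Expr c, Expr t, Expr o) { cond = c; then = t; other = o; }
    }

    static Expr simplify(Expr e) {
        if (e instanceof Add) {
            Expr l = simplify(((Add) e).l), r = simplify(((Add) e).r);
            if (l instanceof Const && r instanceof Const)            // constant propagation
                return new Const(((Const) l).value + ((Const) r).value);
            return new Add(l, r);
        }
        if (e instanceof IfNonZero) {
            IfNonZero i = (IfNonZero) e;
            Expr c = simplify(i.cond);
            if (c instanceof Const)                                  // conditional reduction
                return simplify(((Const) c).value != 0 ? i.then : i.other);
            return new IfNonZero(c, simplify(i.then), simplify(i.other));
        }
        return e; // constants and anything unknown stay as-is
    }

    public static void main(String[] args) {
        // if (1 + 0 != 0) then 2 + 3 else 42  ==>  5
        Expr program = new IfNonZero(new Add(new Const(1), new Const(0)),
                                     new Add(new Const(2), new Const(3)),
                                     new Const(42));
        System.out.println(((Const) simplify(program)).value);
    }
}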
SIGPLAN Notices | 2014
Lukáš Marek; Stephen Kell; Yudi Zheng; Lubomír Bulej; Walter Binder; Petr Tůma; Danilo Ansaloni; Aibek Sarimbekov; Andreas Sewe
Dynamic analysis tools are often implemented using instrumentation, particularly on managed runtimes including the Java Virtual Machine (JVM). Performing instrumentation robustly is especially complex on such runtimes: existing frameworks offer limited coverage and poor isolation, while previous work has shown that apparently innocuous instrumentation can cause deadlocks or crashes in the observed application. This paper describes ShadowVM, a system for instrumentation-based dynamic analyses on the JVM which combines a number of techniques to greatly improve both isolation and coverage. These techniques centre on offloading the analysis to a separate process; we believe ours is the first system to enable genuinely full bytecode coverage on the JVM. We describe a working implementation, and use a case study to demonstrate its improved coverage and to evaluate its runtime overhead.
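A minimal sketch of the offload idea, assuming a made-up wire format and class names (not ShadowVM's actual protocol): the observed JVM only serializes small events to a socket, while the heavyweight analysis runs in a separate process.

// Sketch: instrumentation in the observed application calls methodEntered(),
// which ships a compact event to an external analysis process over a socket.
import java.io.DataOutputStream;
import java.net.Socket;

public class EventShipper implements AutoCloseable {
    private final DataOutputStream out;

    public EventShipper(String analysisHost, int analysisPort) throws Exception {
        out = new DataOutputStream(new Socket(analysisHost, analysisPort).getOutputStream());
    }

    // Keeping the in-process work to plain serialization reduces perturbation and
    // avoids running analysis code inside the observed application.
    public void methodEntered(int threadId, String methodId) throws Exception {
        out.writeInt(threadId);
        out.writeUTF(methodId);
    }

    @Override
    public void close() throws Exception {
        out.flush();
        out.close();
    }
}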
Electronic Notes in Theoretical Computer Science | 2011
Aibek Sarimbekov; Philippe Moret; Walter Binder; Andreas Sewe; Mira Mezini
Calling context profiling collects statistics separately for each calling context. Complete calling context profiles that faithfully represent overall program execution are important for a sound analysis of program behavior, which in turn is important for program understanding, reverse engineering, and workload characterization. Many existing calling context profilers for Java rely on sampling or on incomplete instrumentation techniques, yielding incomplete profiles; others rely on Java Virtual Machine (JVM) modifications or work only with one specific JVM, thus compromising portability. In this paper we present a new calling context profiler for Java that reconciles completeness of the collected profiles and full compatibility with any standard JVM. In order to reduce measurement perturbation, our profiler collects platform-independent dynamic metrics, such as the number of method invocations and the number of executed bytecodes. In contrast to prevailing calling context profilers, our tool is able to distinguish between multiple call sites in a method and supports selective profiling of (the dynamic extent of) certain methods. We have evaluated the overhead introduced by our profiler with standard Java and Scala benchmarks on a range of different JVMs.
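To make the platform-independent metrics concrete, here is a sketch of a collector whose hook methods would be invoked from inserted instrumentation; the hooks and names are assumptions for illustration, not the profiler's real interface.

// Sketch: count method invocations and executed bytecodes per method, independent
// of JIT compilation or hardware timers.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class MetricCollector {
    private static final ConcurrentHashMap<String, LongAdder> invocations = new ConcurrentHashMap<>();
    private static final ConcurrentHashMap<String, LongAdder> bytecodes = new ConcurrentHashMap<>();

    // Assumed to be inserted at every method entry by the instrumentation.
    public static void onMethodEntry(String methodId) {
        invocations.computeIfAbsent(methodId, k -> new LongAdder()).increment();
    }

    // Assumed to be inserted at the end of every basic block with its statically known size.
    public static void onBasicBlock(String methodId, int bytecodeCount) {
        bytecodes.computeIfAbsent(methodId, k -> new LongAdder()).add(bytecodeCount);
    }

    public static void dump() {
        invocations.forEach((m, n) ->
                System.out.println(m + ": " + n.sum() + " calls, "
                        + bytecodes.getOrDefault(m, new LongAdder()).sum() + " bytecodes"));
    }
}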
Workshop on Program Analysis for Software Tools and Engineering | 2013
Aibek Sarimbekov; Andreas Sewe; Stephen Kell; Yudi Zheng; Walter Binder; Lubomír Bulej; Danilo Ansaloni
The Java Virtual Machine (JVM) today hosts implementations of numerous languages. To achieve high performance, JVM implementations rely on heuristics in choosing compiler optimizations and adapting garbage collection behavior. Historically, these heuristics have been tuned to suit the dynamics of Java programs only. This leads to unnecessarily poor performance in the case of non-Java languages, which often exhibit systematic differences in workload behavior. Dynamic metrics characterizing the workload help to identify and quantify useful optimizations, but so far, no cohesive suite of metrics has adequately covered properties that vary systematically between Java and non-Java workloads. We present a suite of such metrics, justifying our choice with reference to a range of guest languages. These metrics are implemented on a common portable infrastructure, which ensures ease of deployment and customization.
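One way to picture the shared infrastructure is a plug-in interface that every metric implements against a common event stream; the interface and example metric below are illustrative assumptions, not the suite's actual API.

// Illustrative plug-in shape: each metric consumes the same instrumentation
// events and reports one workload characteristic.
public interface DynamicMetric {
    void onObjectAllocated(String className);
    void onMethodInvoked(String methodId);
    String report();
}

// Example metric: allocations per method invocation, a ratio that can differ
// noticeably between Java workloads and workloads compiled from other guest languages.
class AllocationDensity implements DynamicMetric {
    private long allocations;
    private long invocations;

    @Override public void onObjectAllocated(String className) { allocations++; }
    @Override public void onMethodInvoked(String methodId) { invocations++; }

    @Override public String report() {
        return String.format("allocations per invocation: %.3f",
                invocations == 0 ? 0.0 : (double) allocations / invocations);
    }
}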
TOOLS'11 Proceedings of the 49th International Conference on Objects, Models, Components, Patterns | 2011
Christoph Bockisch; Andreas Sewe; Mira Mezini; Mehmet Aksit
New programming languages that reduce the complexity of software solutions are frequently developed, often as extensions of existing languages. Many implementations thus resort to transforming the extension's source code to the imperative intermediate representation of the parent language. But approaches like compiler frameworks only allow for re-use of code transformations for syntactically-related languages; they do not allow for re-use across language families. In this paper, we present the ALIA4J approach to bring such re-use to language families with advanced dispatching mechanisms like pointcut-advice or predicate dispatching. ALIA4J introduces a meta-model of dispatching as a rich, extensible intermediate language. By implementing language constructs from four languages as refinements of this meta-model, we show that a significant number of them can be re-used across language families. Another building block of ALIA4J is a framework for execution environments that automatically derives an execution model of the program's dispatching from representations in our intermediate language. This model enables different execution strategies for dispatching; we have validated this by implementing three execution environments whose strategies range from interpretation to optimizing code generation.
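As a loose illustration of how one dispatch specification can be executed under different strategies (an assumption-laden sketch, not ALIA4J's framework), the same list of guarded actions can either be interpreted at every dispatch or folded once into a composed handler, standing in for interpretation versus code generation:

// Two execution strategies over the same dispatch model: per-dispatch
// interpretation versus a handler composed once up front.
import java.util.List;
import java.util.function.BiConsumer;
import java.util.function.BiPredicate;

public class ExecutionStrategies {
    static class Attachment {
        final BiPredicate<Object, Object[]> predicate;
        final BiConsumer<Object, Object[]> action;
        Attachment(BiPredicate<Object, Object[]> p, BiConsumer<Object, Object[]> a) {
            predicate = p;
            action = a;
        }
    }

    // Strategy 1: walk the model at every dispatch.
    static void interpret(List<Attachment> model, Object recv, Object[] args) {
        for (Attachment a : model)
            if (a.predicate.test(recv, args)) a.action.accept(recv, args);
    }

    // Strategy 2: derive a single composed handler once and reuse it; a real code
    // generator would emit specialized bytecode instead of composing closures.
    static BiConsumer<Object, Object[]> compose(List<Attachment> model) {
        BiConsumer<Object, Object[]> composed = (r, a) -> {};
        for (Attachment att : model)
            composed = composed.andThen((r, a) -> {
                if (att.predicate.test(r, a)) att.action.accept(r, a);
            });
        return composed;
    }

    public static void main(String[] args) {
        List<Attachment> model = List.of(new Attachment(
                (r, a) -> r instanceof String,
                (r, a) -> System.out.println("string receiver: " + r)));
        interpret(model, "hello", new Object[0]);
        compose(model).accept("world", new Object[0]);
    }
}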