
Publication


Featured research published by Adrian Prantl.


Real-Time Systems | 2010

Transforming flow information during code optimization for timing analysis

Raimund Kirner; Peter P. Puschner; Adrian Prantl

The steadily growing embedded-systems market comprises many application domains in which real-time constraints must be satisfied. To guarantee that these constraints are met, the analysis of the worst-case execution time (WCET) of software components is mandatory. In general, WCET analysis needs additional control-flow information, which may be provided manually by the user or calculated automatically by program analysis. For flexibility and simplicity, it is desirable to specify the flow information at the same level at which the program is developed, i.e., at the source level. In contrast, to obtain precise WCET bounds, the WCET analysis has to be performed at machine-code level. Mapping and transforming the flow information from the source level down to the machine code, where it is used in the WCET analysis, is challenging, even more so if the compiler generates highly optimized code. In this article we present a method for transforming flow information from source code to machine code. To obtain a mapping that is safe and accurate, flow information is transformed in parallel to the code transformations performed by an optimizing compiler. This mapping is useful not only for transforming manual code annotations but also when platform-independent flow information is calculated automatically at the source level. We show that our method can be applied to every type of semantics-preserving code transformation. The precision of this flow-information transformation allows its users to calculate tight WCET bounds.
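The core idea of transforming flow information in parallel with a code transformation can be illustrated with a minimal sketch. The model below is hypothetical and greatly simplified (the names `LoopBound` and `transform_on_unroll` are illustrative, not from the paper's implementation): a loop-bound annotation must be updated when the optimizer unrolls the loop, so that the machine-code-level WCET analysis still sees a bound that is both safe and tight.

```python
# Hypothetical, simplified model: flow information (a loop bound) is
# transformed in lock-step with a loop-unrolling optimization.

from dataclasses import dataclass

@dataclass
class LoopBound:
    """Flow annotation: the loop body executes at most `max_iter` times."""
    loop_id: str
    max_iter: int

def transform_on_unroll(bound: LoopBound, factor: int) -> LoopBound:
    """Unrolling by `factor` divides the trip count of the residual loop.
    Rounding up keeps the bound safe when max_iter is not a multiple of
    the unrolling factor."""
    new_max = -(-bound.max_iter // factor)  # ceiling division
    return LoopBound(bound.loop_id, new_max)

original = LoopBound("main.c:42", max_iter=100)
unrolled = transform_on_unroll(original, factor=4)
print(unrolled.max_iter)  # 25: safe bound for the 4x-unrolled loop
```

The ceiling division is the essential safety detail: flooring would under-approximate the trip count for bounds that are not a multiple of the factor, making the resulting WCET bound unsound.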


IEEE International Conference on High Performance Computing, Data and Analytics | 2012

High-performance language interoperability for scientific computing through Babel

Thomas Epperly; Gary Kumfert; Tamara L. Dahlgren; Dietmar Ebner; James Leek; Adrian Prantl; Scott R. Kohn

High-performance scientific applications are usually built from software modules written in multiple programming languages. This raises the issue of language interoperability which involves making calls between languages, converting basic types, and bridging disparate programming models. Babel provides a feature-rich, extensible, high-performance solution to the language interoperability problem currently supporting C, C++, FORTRAN 77, Fortran 90/95, Fortran 2003/2008, Python, and Java. Babel supports object-oriented programming features and interface semantics with runtime enforcement. In addition to in-process language interoperability, Babel includes remote method invocation to support hybrid parallel and distributed computing paradigms.


Software and Systems Modeling | 2011

Beyond loop bounds: comparing annotation languages for worst-case execution time analysis

Raimund Kirner; Jens Knoop; Adrian Prantl; Markus Schordan; Albrecht Kadlec

Worst-case execution time (WCET) analysis is concerned with computing a precise-as-possible bound for the maximum time the execution of a program can take. This information is indispensable for developing safety-critical real-time systems, e.g., in the avionics and automotive fields. Starting with the initial works of Chen, Mok, Puschner, Shaw, and others in the mid and late 1980s, WCET analysis turned into a well-established and vibrant field of research and development in academia and industry. The increasing number and diversity of hardware and software platforms and the ongoing rapid technological advancement became drivers for the development of a wide array of distinct methods and tools for WCET analysis. The precision, generality, and efficiency of these methods and tools depend much on the expressiveness and usability of the annotation languages that are used to describe feasible and infeasible program paths. In this article we survey the annotation languages which we consider formative for the field. By investigating and comparing their individual strengths and limitations with respect to a set of pivotal criteria, we provide a coherent overview of the state of the art. Identifying open issues, we encourage further research. This way, our approach is orthogonal and complementary to a recent approach of Wilhelm et al. who provide a thorough survey of WCET analysis methods and tools that have been developed and used in academia and industry.


International Journal on Software Tools for Technology Transfer | 2014

Combining static analysis and state transition graphs for verification of event-condition-action systems in the RERS 2012 and 2013 challenges

Markus Schordan; Adrian Prantl

We present a combination of approaches for the verification of event-condition-action (ECA) systems. The analyzed ECA systems range from structurally simple to structurally complex systems. We address the verification of reachability properties and behavioral properties. Reachability properties are represented by assertions in the program and we determine statically whether an assertion holds for all execution paths. Behavioral properties are represented as linear temporal logic formulas specifying the input/output behavior of the program. Our approach assumes a finite state space. We compare a symbolic analysis with an exhaustive state space exploration and discuss the trade-offs between the approaches in terms of the number of computed states and run-time behavior. All variants compute a state transition graph which can also be passed to an LTL verifier. The variants have a different impact on the number of computed states in the state transition graph which in turn impacts the run-time and memory consumption of subsequent phases. We evaluate the different analysis variants with the RERS benchmarks.
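The exhaustive-exploration variant described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the event-condition-action rules in `step` and the single-counter state are invented for the example. The sketch enumerates the finite state space, builds the state transition graph, and checks a reachability property (an assertion over all reachable states).

```python
# Hypothetical sketch: exhaustive state-space exploration of a toy
# event-condition-action system, producing a state transition graph.

from collections import deque

INPUTS = ["a", "b"]  # the system's input events

def step(state, event):
    """Condition/action rules of the (invented) toy ECA system."""
    if event == "a" and state < 3:
        return state + 1    # condition: counter below 3; action: increment
    if event == "b":
        return 0            # action: reset the counter
    return state            # no rule fires: state unchanged

def explore(initial=0):
    """BFS over the finite state space; returns the transition graph."""
    graph = {}              # state -> {event: successor state}
    seen = {initial}
    work = deque([initial])
    while work:
        s = work.popleft()
        graph[s] = {}
        for e in INPUTS:
            t = step(s, e)
            graph[s][e] = t
            if t not in seen:
                seen.add(t)
                work.append(t)
    return graph

stg = explore()
# Reachability property: the counter never exceeds 3 on any execution path.
assert all(s <= 3 for s in stg), "assertion violated in a reachable state"
print(sorted(stg))  # reachable states: [0, 1, 2, 3]
```

The resulting graph is exactly the kind of structure that can be handed to an LTL verifier for the behavioral properties; the trade-off discussed in the abstract is that a symbolic analysis may represent many such concrete states with one abstract state, shrinking the graph at the cost of precision.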


Leveraging Applications of Formal Methods | 2010

Source-level support for timing analysis

Gergö Barany; Adrian Prantl

Timing analysis is an important prerequisite for the design of embedded real-time systems. In order to get tight and safe bounds for the timing of a program, precise information about its control flow and data flow is needed. While actual timings can only be derived from the machine code, many of the supporting analyses (deriving timing-relevant data such as points-to and loop-bound information) operate much more effectively on the source-code level. At this level, they can use high-level information that would otherwise be lost during the compilation to machine code.

During the optimization stage, compilers often apply transformations, such as loop unrolling, that modify the program's control flow. Such transformations can invalidate information derived from the source code. In our approach, we therefore apply such optimizations already at the source-code level and transform the analyzed information accordingly. This way, we can marry the goals of precise timing analysis and optimizing compilation.

In this article we present our implementation of this concept within the SATIrE source-to-source analysis and transformation framework. SATIrE forms the basis for the TuBound timing analyzer. In the ALL-TIMES EU FP7 project we extended SATIrE to exchange timing-relevant analysis data with other European timing analysis tools. In this context, we explain how timing-relevant information from the source-code level can be communicated to a wide variety of tools that apply various forms of static and dynamic analysis on different levels.


Parallel Computing | 2015

Global transformations for legacy parallel applications via structural analysis and rewriting

Daniel G. Chavarría-Miranda; Ajay Panyala; Wenjing Ma; Adrian Prantl; Sriram Krishnamoorthy

Performance and scalability optimization of large HPC applications is currently a labor-intensive, manual process with very low productivity. Major difficulties come from the disaggregated environment for HPC application development: the compiler is only involved in local decisions (core or multithreaded domain), while a library-based, communication-oriented programming model realizes whole-machine parallelism. Realizing any major global change in such a disaggregated environment is very difficult and involves changing large portions of the source code. We present semi-automated techniques, based on structural analysis and rewriting, for performing global transformations on an HPC application source code. We present two case studies using the Self-Consistent Field (SCF) standalone benchmark as well as the Coupled Cluster (CCSD) module (2.9 million lines of Fortran code), a key module of the NWChem computational chemistry application. We demonstrate how structural rewriting techniques can be used to automate transformations that affect multiple sections of the application’s source code. We show that the transformations can be applied in a systematic fashion across the source code bases with minimal manual effort. These transformations improve the scalability of the SCF benchmark by more than two orders of magnitude and the performance of the full CCSD module by a factor of four.
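The flavor of such a structural rewrite can be sketched with a small example. The paper targets Fortran/HPC code; this illustration uses Python's own AST machinery instead, and the function names (`get_block`, `get_block_nb`) are hypothetical. The point is that one declarative rewrite rule mechanically updates every matching call site, the kind of whole-source change that is tedious and error-prone to apply by hand across millions of lines.

```python
# Illustrative sketch (not the paper's toolchain): rewrite every call to a
# hypothetical blocking `get_block` into its non-blocking variant
# `get_block_nb` by structural pattern matching on the AST.

import ast

class NonBlockingRewrite(ast.NodeTransformer):
    def visit_Call(self, node):
        self.generic_visit(node)  # rewrite nested calls first
        if isinstance(node.func, ast.Name) and node.func.id == "get_block":
            node.func.id = "get_block_nb"  # structural rewrite of the call site
        return node

src = "x = get_block(i, j)\ny = other(get_block(k, l))\n"
tree = NonBlockingRewrite().visit(ast.parse(src))
print(ast.unparse(tree))
```

Because the rule operates on program structure rather than text, it cannot accidentally match comments, strings, or unrelated identifiers, which is what makes this kind of transformation safe to apply across an entire code base.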


Computational Science & Discovery | 2013

Cross-language Babel structs: making scientific interfaces more efficient

Adrian Prantl; Dietmar Ebner; Thomas Epperly

Babel is an open-source language interoperability framework tailored to the needs of high-performance scientific computing. As an integral element of the Common Component Architecture, it is employed in a wide range of scientific applications where it is used to connect components written in different programming languages. In this paper we describe how we extended Babel to support interoperable tuple data types (structs). Structs are a common idiom in (mono-lingual) scientific application programming interfaces (APIs); they are an efficient way to pass tuples of nonuniform data between functions, and are supported natively by most programming languages. Using our extended version of Babel, developers of scientific codes can now pass structs as arguments between functions implemented in any of the supported languages. In C, C++, Fortran 2003/2008 and Chapel, structs can be passed without the overhead of data marshaling or copying, providing language interoperability at minimal cost. Other supported languages are Fortran 77, Fortran 90/95, Java and Python. We will show how we designed a struct implementation that is interoperable with all of the supported languages and present benchmark data to compare the performance of all language bindings, highlighting the differences between languages that offer native struct support and an object-oriented interface with getter/setter methods. A case study shows how structs can help simplify the interfaces of scientific codes significantly.
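The reason structs can cross a language boundary without marshaling is that a C-compatible struct has a fixed binary layout that both sides agree on. The sketch below illustrates this with Python's standard `ctypes` module rather than the Babel API itself; the `Point` type is an invented example.

```python
# Sketch of the underlying idea (using ctypes, not the Babel API): a struct
# with a known C layout can be passed by reference between languages with
# no marshaling or copying.

import ctypes

class Point(ctypes.Structure):
    """Matches the hypothetical C declaration:
       struct Point { double x; double y; };"""
    _fields_ = [("x", ctypes.c_double), ("y", ctypes.c_double)]

p = Point(1.5, 2.5)
# A C function declared as `void f(struct Point *p)` would receive a
# pointer to this very memory, with no conversion step:
ref = ctypes.byref(p)
print(ctypes.sizeof(Point))  # 16 bytes: two 8-byte doubles, no padding
```

Languages without native struct support (the abstract names Java and Python among Babel's targets) instead see an object-oriented view with getter/setter methods, which is where the performance gap measured in the paper's benchmarks comes from.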


Archive | 2012

COMPOSE-HPC: A Transformational Approach to Exascale

David E. Bernholdt; Benjamin A. Allan; Robert C. Armstrong; Daniel G. Chavarría-Miranda; Tamara L. Dahlgren; Wael R. Elwasif; Tom Epperly; Samantha S. Foley; Geoffrey Compton Hulette; Sriram Krishnamoorthy; Adrian Prantl; Ajay Panyala; Matthew J. Sottile

The goal of the COMPOSE-HPC project is to “democratize” tools for automatic transformation of program source code so that it becomes tractable for the developers of scientific applications to create and use their own transformations reliably and safely. This paper describes our approach to this challenge: the creation of the KNOT tool chain, which includes tools for creating annotation languages to control the transformations (PAUL), for performing the transformations (ROTE), and for optimization and code generation (BRAID), which can be used individually and in combination. We also provide examples of current and future uses of the KNOT tools, which include transforming code to use different programming models and environments, providing tests that can be used to detect errors in software or its execution, and composing software written in different programming languages or with different threading patterns.


IEEE International Conference on High Performance Computing, Data and Analytics | 2011

Poster: connecting PGAS and traditional HPC languages

Adrian Prantl; Thomas Epperly; Shams Imam

Chapel is a high-level parallel programming language that implements a partitioned global address space (PGAS) model. Programs written in this programming model have traditionally been self-contained entities written entirely in one language.

On our poster, we present BRAID, which enables Chapel programs to call functions and instantiate objects written in C, C++, Fortran 77-2008, Java and Python. Our tool creates language bindings that are binary-compatible with those generated by the Babel language interoperability tool. The scientific community maintains a large amount of code written in traditional languages. With the help of our tool, users will gain access to their existing codebase with minimal effort and through a well-defined interface. The language bindings are designed to provide a good combination of performance and flexibility (including transparent access to distributed arrays).

Knowing the demands of the target audience, we support the full Babel array API. A particular contribution is that we expose Chapel's distributed data types through our interface and make them accessible to external functions implemented in traditional serial programming languages.

The advantages of our approach are highlighted by benchmarks that compare the performance of pure Chapel programs with that of hybrid versions that call subroutines implemented in Babel-supported languages inside of parallel loops. We also present our vision for interoperability with other PGAS languages such as UPC and X10.


Worst-Case Execution Time Analysis | 2008

TuBound - A Conceptually New Tool for Worst-Case Execution Time Analysis

Adrian Prantl; Markus Schordan; Jens Knoop

Collaboration


Adrian Prantl's top co-authors and their affiliations:

- Raimund Kirner (University of Hertfordshire)
- Markus Schordan (Lawrence Livermore National Laboratory)
- Jens Knoop (Vienna University of Technology)
- Albrecht Kadlec (Vienna University of Technology)
- Thomas Epperly (Lawrence Livermore National Laboratory)
- Jakob Zwirchmayr (Vienna University of Technology)
- Michael Zolda (Vienna University of Technology)