Jakob Engblom
Uppsala University
Publications
Featured research published by Jakob Engblom.
ACM Transactions on Embedded Computing Systems | 2008
Reinhard Wilhelm; Jakob Engblom; Andreas Ermedahl; Niklas Holsti; Stephan Thesing; David B. Whalley; Guillem Bernat; Christian Ferdinand; Reinhold Heckmann; Tulika Mitra; Frank Mueller; Isabelle Puaut; Peter P. Puschner; Jan Staschulat; Per Stenström
The determination of upper bounds on execution times, commonly called worst-case execution times (WCETs), is a necessary step in the development and validation process for hard real-time systems. This problem is hard if the underlying processor architecture has components such as caches, pipelines, branch prediction, and other speculative features. This article describes different approaches to this problem and surveys several commercially available tools and research prototypes.
real-time systems symposium | 2000
Jakob Engblom; Andreas Ermedahl
Knowing the worst-case execution time (WCET) of a program is necessary when designing and verifying real-time systems. The WCET depends both on the program flow (like loop iterations and function calls), and on hardware factors like caches and pipelines. In this paper, we present a method for representing program flow information that is compact while still being strong enough to handle the types of flow previously considered in WCET research. We also extend the set of representable flows compared to previous research. We give an algorithm for converting the flow information to the linear constraints used in calculating a WCET estimate in our WCET analysis tool. We demonstrate the practicality of the representation by modeling the flow of a number of programs, and show that execution time estimates can be made tighter by using flow information.
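The conversion of flow facts into linear constraints can be pictured with a toy calculation. All block timings, the loop bound, and the flow fact below are invented for this sketch, and a real tool would hand the constraint system to an ILP solver rather than enumerate execution counts:

```python
# Hypothetical block timings (cycles) for a loop
# "for i in range(N): if f(i): A else: B"
T = {"header": 2, "cond": 1, "A": 5, "B": 2, "exit": 1}
N = 10  # loop bound -- a basic flow fact

def wcet(max_A=None):
    """Maximize total time over execution counts with x_A + x_B = N,
    optionally adding the flow fact x_A <= max_A.
    Brute force stands in for an ILP solver here."""
    best = 0
    for x_A in range(N + 1):
        if max_A is not None and x_A > max_A:
            continue
        x_B = N - x_A
        t = (T["header"] + N * T["cond"]
             + x_A * T["A"] + x_B * T["B"] + T["exit"])
        best = max(best, t)
    return best

print(wcet())         # no flow fact beyond the loop bound
print(wcet(max_A=3))  # flow fact: branch A taken at most 3 times
```

Here the flow fact `x_A <= 3` cuts the estimate from 63 to 42 cycles, which is the sense in which flow information makes WCET estimates tighter.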
International Journal on Software Tools for Technology Transfer | 2003
Jakob Engblom; Andreas Ermedahl; Mikael Sjödin; Jan Gustafsson; Hans Hansson
In this article we give an overview of the worst-case execution time (WCET) analysis research performed by the WCET group of the ASTEC Competence Centre at Uppsala University. Knowing the WCET of a program is necessary when designing and verifying real-time systems. The WCET depends both on the program flow, such as loop iterations and function calls, and on hardware factors, such as caches and pipelines. WCET estimates should be both safe (no underestimation allowed) and tight (as little overestimation as possible). We have defined a modular architecture for a WCET tool, used both to identify the components of the overall WCET analysis problem, and as a starting point for the development of a WCET tool prototype. Within this framework we have proposed solutions to several key problems in WCET analysis, including representation and analysis of the control flow of programs, modeling of the behavior and timing of pipelines and other low-level timing aspects, integration of control flow information and low-level timing to obtain a safe and tight WCET estimate, and validation of our tools and methods. We have focused on the needs of embedded real-time systems in designing our tools and directing our research. Our long-term goal is to provide WCET analysis as a part of the standard tool chain for embedded development (together with compilers, debuggers, and simulators). This is facilitated by our cooperation with the embedded systems programming-tools vendor IAR Systems.
compilers, architecture, and synthesis for embedded systems | 2001
Friedhelm Stappert; Andreas Ermedahl; Jakob Engblom
Current development tools for embedded real-time systems do not efficiently support the timing aspect. The most important timing parameter for scheduling and system analysis is the Worst-Case Execution Time (WCET) of a program. This paper presents a fast and effective WCET calculation method that takes account of low-level machine aspects like pipelining and caches, and high-level program flow like loops and infeasible paths. The method is more efficient than previous path-based approaches, and can easily handle complex programs. By separating the low-level from the high-level analysis, the method is easy to retarget. Experiments confirm that speed does not sacrifice precision, and that programs with extreme numbers of potential execution paths can be analyzed quickly.
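As a minimal sketch of the path-based idea, the worst-case time of a loop-free program fragment can be found as a longest path through its control-flow graph. The node names and costs below are invented; the actual method additionally handles loops via program structure and prunes infeasible paths:

```python
# Hypothetical acyclic control-flow graph: node -> (cost, successors)
cfg = {
    "entry": (1, ["a", "b"]),
    "a": (5, ["join"]),   # expensive branch
    "b": (2, ["join"]),   # cheap branch
    "join": (1, []),
}

def longest_path(node):
    """Worst-case time from node to the end, by depth-first search.
    Fine for a small DAG; a real tool would memoize or process
    nodes in topological order."""
    cost, succs = cfg[node]
    return cost + max((longest_path(s) for s in succs), default=0)

print(longest_path("entry"))  # entry -> a -> join = 1 + 5 + 1 = 7
```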
embedded and real-time computing systems and applications | 1999
Jakob Engblom; Andreas Ermedahl
We present a technique for worst-case execution time (WCET) analysis for pipelined processors. Our technique uses a standard simulator instead of special-purpose pipeline modeling. Our technique handles CPUs that execute multiple shorter instructions in parallel with long-running instructions. The results of other machine analyses, like cache analysis, can be used in our pipeline analysis. Also, results from high-level program flow analysis can be used to tighten the execution time predictions. Our primary target is embedded real-time systems, and since processor simulators are standard equipment for embedded development work, our tool will be easy to port to relevant target processors.
real time technology and applications symposium | 2003
Jakob Engblom
This paper investigates how dynamic branch prediction in a microprocessor affects the predictability of execution time for software running on that processor. By means of experiments on a number of real processors employing various forms of branch prediction, we evaluate the impact of branch predictors on execution time predictability. The results indicate that dynamic branch predictors give a high and hard-to-predict variation in the execution time of even very simple loops, and that the execution time effects of branch mispredictions can be very large relative to the execution time of regular instructions. We have observed some cases where executing more iterations of a loop actually takes less time than executing fewer iterations, due to the effect of dynamic branch predictors. We conclude that current dynamic branch prediction schemes are not suitable for use in real-time systems where execution time predictability is desired.
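A toy simulation of a two-bit saturating counter, a common dynamic prediction scheme, illustrates how the misprediction penalty can dwarf the base cost of a loop iteration. The penalty and base-cost numbers are invented and do not model any of the processors measured in the paper:

```python
def simulate(pattern, penalty=15, base=1):
    """Run a 2-bit saturating counter over a branch outcome pattern
    (True = taken) and return total cycles, assuming a fixed
    misprediction penalty. A toy model, not a real processor."""
    state = 3  # 0..3, start in "strongly taken"
    cycles = 0
    for taken in pattern:
        predict_taken = state >= 2
        cycles += base + (penalty if predict_taken != taken else 0)
        state = min(3, state + 1) if taken else max(0, state - 1)
    return cycles

def inner_loop_pattern(k, runs=4):
    """Branch outcomes for an inner loop of k iterations run `runs`
    times: taken k-1 times, then not taken at loop exit."""
    return ([True] * (k - 1) + [False]) * runs

for k in (1, 2, 3, 4):
    print(k, simulate(inner_loop_pattern(k)))
```

With these numbers a single misprediction costs as much as fifteen ordinary iterations. For a one-level pattern like this the total cost still grows with `k`; the counterintuitive cases reported in the paper came from the more complex predictors found in real hardware.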
euromicro conference on real-time systems | 1998
Jakob Engblom; Andreas Ermedahl; Peter Altenbernd
The authors present co-transformation, a novel approach to the mapping of execution information from the source code of a program to the object code for the purpose of worst-case execution time (WCET) analysis. Their approach is designed to handle the problems introduced by optimizing compilers, i.e. that the structure of the object code is very different from the structure of the source code. The co-transformer allows one to keep track of how different compiler transformations, including optimizations, influence the execution time of a program. This allows one to statically calculate the execution time of a program at the object code level, using information about the program execution obtained at the source code level.
real time technology and applications symposium | 1999
Jakob Engblom
We have used a modified C compiler to analyze a large number of commercial real-time and embedded applications written in C for 8- and 16-bit processors. Only static aspects of the programs have been studied, i.e., information that can be obtained from the source code without running the programs. The purpose of the study is to provide guidance for the development of worst-case execution time (WCET) analysis tools, and to increase the knowledge about embedded programs in general. Knowing how real programs are written makes it easier to focus research in relevant areas and set priorities. The conclusion is that real-time and embedded programs are not necessarily simple just because they are written for small machines. This indicates that real-life WCET analysis tools need to handle advanced programming constructs, including function pointer calls and recursion.
embedded software | 2002
Jakob Engblom; Bengt Jonsson
When developing real-time systems, the worst-case execution time (WCET) is a commonly used measure for predicting and analyzing program and system timing behavior. Such estimates should preferably be provided by static WCET analysis tools. Their analysis is made difficult by features of common processors, such as pipelines and caches. This paper examines the properties of single-issue in-order pipelines, based on a mathematical model of temporal constraints. The key problem addressed is to determine the distance (measured in number of subsequent instructions) over which an instruction can affect the timing behavior of other instructions, and when this effect must be considered in static WCET analysis. We characterize classes of pipelines for which static analysis can safely ignore effects longer than some arbitrary threshold. For other classes of pipelines, pipeline effects can propagate across arbitrary numbers of instructions, making it harder to design safe and precise analysis methods. Based on our results, we discuss how to construct safe WCET analysis methods. We also prove when it is correct to use local worst-case approximations to construct an overall WCET estimate.
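The kind of timing model in question can be sketched as individual instruction times plus pairwise timing effects between adjacent instructions. All numbers below are invented; effects that span non-adjacent instructions would add further terms, and bounding how far such effects reach is the problem the analysis must settle:

```python
# Toy model: sequence time = sum of individual instruction times
# plus pairwise timing effects (usually negative, from pipeline
# overlap). Opcodes and cycle counts are invented.
t = {"ld": 3, "add": 1, "mul": 4}

# delta[(a, b)]: change in total time when b directly follows a
delta = {("ld", "add"): -1, ("add", "mul"): -1, ("ld", "mul"): 0}

def seq_time(seq):
    """Estimate the pipelined execution time of an opcode sequence
    from individual times and adjacent-pair timing effects."""
    total = sum(t[i] for i in seq)
    for a, b in zip(seq, seq[1:]):
        total += delta.get((a, b), 0)
    return total

print(seq_time(["ld", "add", "mul"]))  # 3 + 1 + 4 - 1 - 1 = 6
```

If timing effects are confined to adjacent pairs, an analysis can work on short instruction windows; if they can propagate arbitrarily far, no fixed window is safe, which is the distinction the paper formalizes.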
languages, compilers, and tools for embedded systems | 1999
Jakob Engblom
The SpecInt95 benchmark suite is often used to evaluate the performance of programming tools, including those used for embedded systems programming. Embedded applications, however, often target 8- or 16-bit processors with limited functionality, whereas SpecInt95 has no particular target architecture and a bias towards 32-bit systems. Hence, there are reasons to question the use of SpecInt95 for the evaluation of tools for embedded systems. We present a comparative study of the static properties of a set of embedded applications and the SpecInt95 benchmarks. The properties studied include: variable types, function argument lists, types of operations, and the use of local and global memory. The study provides clear evidence that embedded applications and the SpecInt95 program suite differ significantly in several important areas. Hence, we conclude that using SpecInt95 to evaluate or compare tools for embedded systems is likely to be irrelevant or misleading, and that there is a clear need for a benchmark suite tailored for the embedded applications area.