
Publication


Featured research published by Jaume Abella.


Euromicro Conference on Real-Time Systems | 2012

Measurement-Based Probabilistic Timing Analysis for Multi-path Programs

Liliana Cucu-Grosjean; Luca Santinelli; Michael Houston; Code Lo; Tullio Vardanega; Leonidas Kosmidis; Jaume Abella; Enrico Mezzetti; Eduardo Quiñones; Francisco J. Cazorla

The rigorous application of static timing analysis requires detailed and costly knowledge of the hardware and software components of the system. Probabilistic Timing Analysis has the potential to reduce the weight of that demand. In this paper, we present a sound measurement-based probabilistic timing analysis technique based on Extreme Value Theory. In all the experiments made as part of this work, the timing bounds determined by our technique were less than 15% pessimistic compared with the tightest bounds obtainable with any probabilistic timing analysis technique. As a point of interest to industrial users, our technique also requires a comparatively low number of measurement runs of the program under analysis: fewer than 650 runs were needed for the benchmarks presented in this paper.
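The core statistical step behind this kind of measurement-based analysis can be sketched in a few lines. The sketch below is illustrative only, not the authors' actual method: it fits a Gumbel (EVT type I) distribution to block maxima of synthetic execution-time measurements using a method-of-moments fit, then reads off the quantile at a low exceedance probability as a probabilistic WCET estimate. All names, the toy trace, and the block size are assumptions made for the example.

```python
import math
import random
import statistics

def gumbel_fit(samples):
    """Method-of-moments fit of a Gumbel (EVT type I) distribution."""
    beta = statistics.stdev(samples) * math.sqrt(6) / math.pi  # scale
    mu = statistics.mean(samples) - 0.5772156649 * beta        # location (Euler-Mascheroni)
    return mu, beta

def gumbel_quantile(mu, beta, p_exceed):
    """Execution time exceeded with probability p_exceed under the fitted model."""
    return mu - beta * math.log(-math.log(1.0 - p_exceed))

# Toy measurements: execution times (cycles) with random jitter.
random.seed(0)
runs = [1000 + random.expovariate(1 / 20) for _ in range(600)]

# EVT models the tail via maxima of blocks of observations.
block = 20
maxima = [max(runs[i:i + block]) for i in range(0, len(runs), block)]

mu, beta = gumbel_fit(maxima)
pwcet = gumbel_quantile(mu, beta, 1e-9)  # bound exceeded with prob. 1e-9 per run
print(f"pWCET estimate at 1e-9 exceedance: {pwcet:.0f} cycles")
```

The estimate sits well above the largest observed time, which is the point: the fitted tail extrapolates beyond what any finite set of runs can show directly.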


ACM Transactions on Embedded Computing Systems | 2013

PROARTIS: Probabilistically Analyzable Real-Time Systems

Francisco J. Cazorla; Eduardo Quiñones; Tullio Vardanega; Liliana Cucu; Benoit Triquet; Guillem Bernat; Emery D. Berger; Jaume Abella; Franck Wartel; Michael Houston; Luca Santinelli; Leonidas Kosmidis; Code Lo; Dorin Maxim

Static timing analysis is the state-of-the-art practice for ascertaining the timing behavior of current-generation real-time embedded systems. The adoption of more complex hardware to respond to the increasing demand for computing power in next-generation systems exacerbates some of the limitations of static timing analysis. In particular, it increases the effort of acquiring (1) detailed information on the hardware needed to develop an accurate model of its execution latency, as well as (2) knowledge of the timing behavior of the program in the presence of varying hardware conditions, such as those dependent on the history of previously executed instructions. We call these problems the timing analysis walls. In this vision-statement article, we present probabilistic timing analysis, a novel approach to the analysis of the timing behavior of next-generation real-time embedded systems. We show how probabilistic timing analysis attacks the timing analysis walls; we then illustrate the mathematical foundations on which this method is based and the challenges we face in the effort of efficiently implementing it. We also present experimental evidence that shows how probabilistic timing analysis reduces the extent of knowledge about the execution platform required to produce probabilistically accurate WCET estimates.


ACM Transactions on Architecture and Code Optimization | 2005

IATAC: a smart predictor to turn-off L2 cache lines

Jaume Abella; Antonio González; Xavier Vera; Michael F. P. O'Boyle

As technology evolves, power dissipation increases and cooling systems become more complex and expensive. There are two main sources of power dissipation in a processor: dynamic power and leakage. Dynamic power has been the most significant factor, but leakage will become increasingly significant in the future. Leakage is predicted to shortly become the most significant cost, as it grows at about a 5× rate per generation. Thus, reducing leakage is essential for future processor design. Since large caches occupy most of the chip area, they are among the leakiest structures on the chip and, hence, a main source of energy consumption for future processors. This paper introduces IATAC (inter-access time per access count), a new hardware technique to reduce cache leakage for L2 caches. IATAC dynamically adapts the cache size to the program requirements, turning off cache lines whose content is not likely to be reused. Our evaluation shows that this approach outperforms all previous state-of-the-art techniques: IATAC turns off 65% of the cache lines across different L2 cache configurations with a very small performance degradation of around 2%.
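The general idea of turning off idle lines can be illustrated with a much-simplified decay model. This sketch is not the IATAC predictor itself (which adapts its threshold per line from inter-access times and access counts); it uses a single fixed decay interval as an illustrative placeholder, and all class and variable names are assumptions for the example.

```python
class DecayLine:
    """One cache line with a simple decay counter (illustrative only)."""
    def __init__(self, decay_interval):
        self.decay_interval = decay_interval  # cycles of idleness before turn-off
        self.last_access = 0
        self.powered = False
        self.tag = None

    def access(self, tag, now):
        hit = self.powered and self.tag == tag
        self.tag, self.powered, self.last_access = tag, True, now
        return hit

    def tick(self, now):
        # Gate off a line that has sat idle longer than the decay interval.
        if self.powered and now - self.last_access > self.decay_interval:
            self.powered = False  # contents lost; the next access misses

# Tiny direct-mapped "cache" of 4 lines; trace entries are (tag, set, cycle).
lines = [DecayLine(decay_interval=100) for _ in range(4)]
trace = [(0xA, 0, 10), (0xB, 1, 20), (0xA, 0, 50), (0xB, 1, 300)]
hits = 0
for tag, idx, cycle in trace:
    for ln in lines:
        ln.tick(cycle)
    hits += lines[idx].access(tag, cycle)
print(f"hits: {hits}")
```

Only the quick re-reference at cycle 50 hits; the line re-touched at cycle 300 was already powered down, trading one extra miss for the leakage saved during the idle period. IATAC's contribution is choosing that threshold adaptively so such extra misses stay rare.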


International Symposium on Microarchitecture | 2009

Low Vccmin fault-tolerant cache with highly predictable performance

Jaume Abella; Javier Carretero; Pedro Chaparro; Xavier Vera; Antonio González

The number of transistors per unit area doubles with every new technology node. However, electric field density and power demand grow if Vcc is not scaled. Therefore, Vcc must be scaled in pace with new technology nodes to prevent excessive degradation and to keep power demand within reasonable limits. Unfortunately, low-Vcc operation exacerbates the effect of variations and decreases noise and stability margins, increasing the likelihood of errors in SRAM memories such as caches. Those errors translate into performance loss and performance variation across different cores, which is especially undesirable in a multi-core processor. This paper presents (i) a novel scheme to tolerate high faulty-bit rates in caches by disabling only faulty subblocks, (ii) a dynamic address remapping scheme to reduce performance variation across different cores, which is key for performance predictability, and (iii) a comparison with state-of-the-art techniques for faulty-bit tolerance in caches. Results for some typical first-level data cache configurations show a 15% average performance increase and a reduction in standard deviation from 3.13% down to 0.55% when compared to cache-line disabling schemes.
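The capacity argument behind subblock disabling is simple to illustrate. The sketch below, with assumed sizes and function names, contrasts it with whole-line disabling: one faulty bit costs only its subblock rather than the entire line.

```python
SUBBLOCKS = 4  # assumed: each 64-byte line split into 4 subblocks of 16 bytes

def usable_bytes(fault_map, line_bytes=64):
    """Capacity left in a line when only faulty subblocks are disabled."""
    sub = line_bytes // SUBBLOCKS
    return sum(sub for faulty in fault_map if not faulty)

# One faulty subblock: line disabling would lose all 64 bytes,
# subblock disabling keeps the three healthy 16-byte subblocks.
fault_map = [False, True, False, False]
print(usable_bytes(fault_map))  # 48 usable bytes, vs 0 with line disabling
```

At low Vcc, where faulty bits are scattered across many lines, this difference compounds: line disabling can wipe out much of the cache while subblock disabling retains most of it.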


Design, Automation, and Test in Europe | 2013

A cache design for probabilistically analysable real-time systems

Leonidas Kosmidis; Jaume Abella; Eduardo Quiñones; Francisco J. Cazorla

Caches provide significant performance improvements, yet their use in the real-time industry is low because current WCET analysis tools require detailed knowledge of a program's cache accesses to provide tight WCET estimates. Probabilistic Timing Analysis (PTA) has emerged as a solution to reduce the amount of information needed to provide tight WCET estimates, although it imposes new requirements on hardware design. At the cache level, so far only fully-associative random-replacement caches have been proven to fulfill the needs of PTA, but they are expensive in size and energy. In this paper we propose a cache design that allows set-associative and direct-mapped caches to be analysed with PTA techniques. In particular, we propose a novel parametric random placement suitable for PTA that is proven to have low hardware complexity and energy consumption while providing performance comparable to that of conventional modulo placement.
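The intuition behind parametric random placement can be sketched as follows. This is not the paper's hardware hash; the mixing function, constants, and names here are assumptions for illustration only. The key property shown is that the set index depends on a per-run random seed, so which addresses conflict is re-randomised every run, turning conflict misses into a random variable that measurement-based PTA can characterise.

```python
NUM_SETS = 8  # assumed cache geometry for the example

def random_placement(addr, seed):
    """Illustrative parametric hash: (address, seed) -> set index."""
    x = (addr ^ (seed * 0x9E3779B1)) & 0xFFFFFFFF  # mix in the run's seed
    x = (x * 0x85EBCA6B) & 0xFFFFFFFF              # scramble address bits
    x ^= x >> 13
    return x % NUM_SETS

# With modulo placement these addresses would map to fixed sets forever;
# with a fresh seed per run, their placement (and collisions) changes.
addrs = [0x1000, 0x2000, 0x3000]
for seed in (1, 2):
    print(seed, [random_placement(a, seed) for a in addrs])
```

Within a single run the mapping is fixed (the same address always lands in the same set), which is what lets the design behave like a normal set-associative cache while still satisfying PTA's randomisation requirement across runs.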


International Symposium on Industrial Embedded Systems | 2013

Measurement-based probabilistic timing analysis: Lessons from an integrated-modular avionics case study

Franck Wartel; Leonidas Kosmidis; Code Lo; Benoit Triquet; Eduardo Quiñones; Jaume Abella; Adriana Gogonel; Andrea Baldovin; Enrico Mezzetti; Liliana Cucu; Tullio Vardanega; Francisco J. Cazorla

Probabilistic Timing Analysis (PTA) in general, and its measurement-based variant MBPTA in particular, can mitigate some of the problems that impair current worst-case execution time (WCET) analysis techniques. MBPTA computes tight WCET bounds expressed as probabilistic exceedance functions, without needing much information on the hardware and software internals of the system. Classic WCET analysis has information needs that may be costly and difficult to satisfy, and their omission increases pessimism. Previous work has shown that MBPTA does well with benchmark programs. Real-world applications, however, place more demanding requirements on timing analysis than simple benchmarks, and it is interesting to see how PTA responds to them. This paper discusses the application of MBPTA to a real avionics system and presents lessons learned in that process.


Digital Systems Design | 2013

parMERASA -- Multi-core Execution of Parallelised Hard Real-Time Applications Supporting Analysability

Theo Ungerer; Christian Bradatsch; Mike Gerdes; Florian Kluge; Ralf Jahr; Jörg Mische; Joao Fernandes; Pavel G. Zaykov; Zlatko Petrov; Bert Böddeker; Sebastian Kehr; Hans Regler; Andreas Hugl; Christine Rochange; Haluk Ozaktas; Hugues Cassé; Armelle Bonenfant; Pascal Sainrat; Ian Broster; Nick Lay; David George; Eduardo Quiñones; Miloš Panić; Jaume Abella; Francisco J. Cazorla; Sascha Uhrig; Mathias Rohde; Arthur Pyka

Engineers who design hard real-time embedded systems express a need for several times the performance available today while keeping safety as a major criterion. A breakthrough in performance is expected from parallelizing hard real-time applications and running them on an embedded multi-core processor, which enables combining the requirement for high performance with timing-predictable execution. parMERASA will provide a timing-analyzable system of parallel hard real-time applications running on a scalable multi-core processor. parMERASA goes one step beyond mixed-criticality demands: it targets future complex control algorithms by parallelizing hard real-time programs to run on predictable multi-/many-core processors. We aim to achieve a breakthrough in techniques for the parallelization of industrial hard real-time programs, to provide hard real-time support in system software and WCET analysis and verification tools for multi-cores, and to develop techniques for predictable multi-core designs with up to 64 cores.


Worst-Case Execution Time Analysis | 2013

Upper-bounding Program Execution Time with Extreme Value Theory

Francisco J. Cazorla; Tullio Vardanega; Eduardo Quiñones; Jaume Abella

In this paper we discuss the limitations of Extreme Value Theory (EVT) and the precautions to take when using it to compute upper bounds on the execution time of programs. We analyse the requirements that EVT places on the observations to be made of the events of interest, and the conditions under which the computation of execution-time upper bounds is safe. We also study the requirements that a recent EVT-based timing analysis technique, Measurement-Based Probabilistic Timing Analysis (MBPTA), introduces, beyond those imposed by EVT, on the computing system under analysis to increase the trustworthiness of the upper bounds that it computes.


Euromicro Conference on Real-Time Systems | 2014

On the Comparison of Deterministic and Probabilistic WCET Estimation Techniques

Jaume Abella; Damien Hardy; Isabelle Puaut; Eduardo Quiñones; Francisco J. Cazorla

Timing validation is a critical step in the design of real-time systems that requires the estimation of Worst-Case Execution Times (WCET) for tasks. A number of different methods have been proposed, such as Static Deterministic Timing Analysis (SDTA). The advent of Probabilistic Timing Analysis, both Measurement-Based (MBPTA) and Static (SPTA), offers different design points in the trade-off between the tightness of WCET estimates, the hardware that can be analyzed, and the information the user must supply to carry out the analysis. The lack of comparisons among those techniques complicates the selection of the most appropriate one for a given system. This paper makes a first attempt at comparing SDTA, SPTA and MBPTA comprehensively, both qualitatively and quantitatively, under different cache configurations implementing LRU and random replacement. We identify the strengths and limitations of each technique depending on the characteristics of the program under analysis and the hardware platform, thus providing users with guidance on which approach to choose for their target application and hardware platform.


International Symposium on Industrial Embedded Systems | 2015

WCET analysis methods: Pitfalls and challenges on their trustworthiness

Jaume Abella; Carles Hernandez; Eduardo Quiñones; Francisco J. Cazorla; Philippa Ryan Conmy; Mikel Azkarate-Askasua; Jon Perez; Enrico Mezzetti; Tullio Vardanega

In the last three decades a number of methods have been devised to find upper bounds on the execution time of critical tasks in time-critical systems. Most such methods aim to compute Worst-Case Execution Time (WCET) estimates, which can be used as trustworthy upper bounds on the execution time that the analysed programs will ever take during operation. The range of analysis approaches includes static, measurement-based and probabilistic methods, as well as hybrid combinations of them. Each of those approaches delivers its results on the assumption that certain hypotheses about the timing behaviour of the system hold and that the user is able to provide the needed input information. Often enough, the trustworthiness of those methods is adjudged only on the basis of the soundness of the method itself. However, trustworthiness also rests a great deal on the viability of the assumptions the method makes about the system and the user's ability, and on the extent to which those assumptions hold in practice. This paper discusses the hypotheses on which the major state-of-the-art timing analysis methods rely, identifying pitfalls and challenges that cause uncertainty and reduce confidence in the computed WCET estimates. In identifying weaknesses, this paper does not wish to discredit any method but rather to increase awareness of their limitations and enable an informed selection of the technique that best fits the user's needs.

Collaboration


Dive into Jaume Abella's collaborations.

Top Co-Authors

Francisco J. Cazorla (Barcelona Supercomputing Center)
Eduardo Quiñones (Barcelona Supercomputing Center)
Antonio González (Polytechnic University of Catalonia)
Carles Hernandez (Barcelona Supercomputing Center)
Xavier Vera (Polytechnic University of Catalonia)
Leonidas Kosmidis (Barcelona Supercomputing Center)
Oguz Ergin (TOBB University of Economics and Technology)