Benjamin Lesage
University of York
Publications
Featured research published by Benjamin Lesage.
real-time networks and systems | 2014
David Griffin; Benjamin Lesage; Alan Burns; Robert I. Davis
The analysis of random replacement caches is an area that has recently attracted considerable attention in the field of probabilistic real-time systems. A major problem with performing static analysis on such a cache is that the relatively large number of successor states on a cache miss (equal to the cache associativity) renders approaches such as Collecting Semantics intractable. Other approaches must contend with non-trivial behaviours, such as the non-independence of accesses to the cache, which tends to lead to overly pessimistic or computationally expensive analyses. Utilising techniques from the field of Lossy Compression, where compactly representing large volumes of data without losing valuable information is the norm, this paper outlines a technique for applying compression to the Collecting Semantics of a Random Replacement Cache. This yields a Must and May analysis. Experimental evaluation shows that, with appropriate parameters, this technique is more accurate and significantly faster than current state-of-the-art techniques.
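As generic background (not taken from the paper), the source of both the intractability and the probabilistic behaviour can be stated in one line for an evict-on-miss random replacement cache of associativity N:

```latex
% Each miss selects its victim way uniformly at random, so a miss has N
% equally likely successor cache states; over a path with m misses an exact
% collecting semantics may need to track up to N^m weighted states, while
% any given cached block survives a single miss with probability (N-1)/N.
\[
  |S_m| \;\le\; N^{m},
  \qquad
  \Pr[\text{block survives one miss}] \;=\; \frac{N-1}{N}.
\]
```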
real-time systems symposium | 2015
Benjamin Lesage; David Griffin; Sebastian Altmeyer; Robert I. Davis
This paper introduces an effective Static Probabilistic Timing Analysis (SPTA) for multi-path programs. The analysis estimates the temporal contribution of an evict-on-miss, random replacement cache to the probabilistic Worst-Case Execution Time (pWCET) distribution of multi-path programs. The analysis uses a conservative join function that provides a proper overapproximation of the possible cache contents and the pWCET distribution on path convergence, irrespective of the actual path followed during execution. Simple program transformations are introduced that reduce the impact of path indeterminism while ensuring sound pWCET estimates. Evaluation shows that the proposed method is efficient at capturing locality in the cache, and substantially outperforms the only prior approach to SPTA for multi-path programs based on path merging. The evaluation results show incomparability with analysis for an equivalent deterministic system using an LRU cache.
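As a hedged sketch of what a conservative join can look like (a standard construction, not necessarily the paper's exact join function): if each incoming path's contribution is described by its exceedance function, the pointwise maximum upper-bounds both paths and is therefore a sound over-approximation at the convergence point.

```latex
% \bar{F}_P(x) = P(C_P > x) is the exceedance function of path P.
% At a convergence of paths A and B, take for every budget x the larger
% exceedance probability:
\[
  \bar{F}_{A \sqcup B}(x) \;=\; \max\bigl(\bar{F}_A(x),\, \bar{F}_B(x)\bigr)
  \;\ge\; \bar{F}_A(x),\ \bar{F}_B(x)
  \qquad \forall x .
\]
```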
real-time networks and systems | 2015
Benjamin Lesage; David Griffin; Frank Soboczenski; Iain Bate; Robert I. Davis
A key issue with Worst-Case Execution Time (WCET) analyses is the evaluation of the tightness and soundness of the results produced. In the absence of a ground truth, i.e. the Actual WCET (AWCET), such evaluations rely on comparison between different estimates or observed values. In this paper, we introduce a framework for the evaluation of measurement-based timing analyses. This framework uses abstract models of synthetic tasks to provide realistic execution time data as input to the analyses, while ensuring that a corresponding AWCET can be computed. The effectiveness of the framework is demonstrated by evaluating the impact of imperfect structural coverage on an existing measurement-based probabilistic timing analysis.
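A minimal sketch of the idea behind such a framework, using an entirely hypothetical task model (block names, times, and sample counts are illustrative, not the paper's synthetic tasks): an abstract model that yields realistic observable execution times while keeping the exact AWCET computable.

```python
import random

# Hypothetical toy model: a task is a chain of blocks, each with a small set of
# possible execution times. The exact AWCET is the sum of per-block maxima;
# measurements sample one time per block, so coverage may be imperfect.
task = [
    {"name": "init",    "times": [10, 12, 15]},
    {"name": "compute", "times": [40, 55, 90]},
    {"name": "output",  "times": [5, 8]},
]

def actual_wcet(task):
    """Ground-truth AWCET: every block exhibits its worst possible time."""
    return sum(max(block["times"]) for block in task)

def run_once(task, rng):
    """One end-to-end measurement; rare block behaviours may never be observed."""
    return sum(rng.choice(block["times"]) for block in task)

rng = random.Random(42)
observations = [run_once(task, rng) for _ in range(20)]
hwm = max(observations)                      # measured high water mark
print(f"AWCET={actual_wcet(task)}, HWM={hwm}, gap={actual_wcet(task) - hwm}")
```

With the ground truth available, the gap between the measured high water mark (or a measurement-based estimate) and the AWCET quantifies the effect of imperfect coverage.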
real-time networks and systems | 2014
David Griffin; Benjamin Lesage; Alan Burns; Robert I. Davis
This paper outlines how Lossy Compression, a branch of Information Theory relating to the compact representation of data while retaining important information, can be applied to the Worst Case Execution Time analysis problem. In particular, we show that by applying lossy compression to the data structures involved in the collecting semantics of a given component, for example a PLRU cache, a useful analysis can be derived. While such an analysis could be found via other means, the application of Lossy Compression provides a formal method and eases the process of discovering the analysis. Further, as the compression and its application are formally specified, such an analysis can be made correct-by-construction rather than relying on an after-the-fact proof.
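For context, a sketch of the concrete per-set state that a PLRU collecting semantics has to track, here the common tree-PLRU variant (a standard policy description, independent of the paper's compression):

```python
class TreePLRU:
    """One set of a tree-PLRU cache (ways must be a power of two).
    An analysis tracks sets of such states; this only shows the concrete state."""

    def __init__(self, ways):
        self.ways = ways
        self.bits = [0] * (ways - 1)   # heap-ordered tree bits, 0 = victim lies left
        self.lines = [None] * ways     # block cached in each way

    def _touch(self, way):
        # Point every tree bit on the path to 'way' away from it.
        node, lo, hi = 0, 0, self.ways
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if way < mid:
                self.bits[node] = 1    # accessed left half, victimise right next
                node, hi = 2 * node + 1, mid
            else:
                self.bits[node] = 0    # accessed right half, victimise left next
                node, lo = 2 * node + 2, mid

    def _victim(self):
        # Follow the tree bits from the root to the pseudo-LRU way.
        node, lo, hi = 0, 0, self.ways
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if self.bits[node] == 0:
                node, hi = 2 * node + 1, mid
            else:
                node, lo = 2 * node + 2, mid
        return lo

    def access(self, block):
        """Return True on a hit, False on a miss (evicting the PLRU victim)."""
        if block in self.lines:
            way, hit = self.lines.index(block), True
        else:
            way, hit = self._victim(), False
            self.lines[way] = block
        self._touch(way)
        return hit
```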
Real-time Systems | 2018
Benjamin Lesage; David Griffin; Sebastian Altmeyer; Liliana Cucu-Grosjean; Robert I. Davis
Probabilistic hard real-time systems, based on hardware architectures that use a random replacement cache, provide a potential means of reducing the hardware over-provision required to accommodate pathological scenarios and the associated extremely rare, but excessively long, worst-case execution times that can occur in deterministic systems. Timing analysis for probabilistic hard real-time systems requires the provision of probabilistic worst-case execution time (pWCET) estimates. The pWCET distribution can be described as an exceedance function which gives an upper bound on the probability that the execution time of a task will exceed any given execution time budget on any particular run. This paper introduces a more effective static probabilistic timing analysis (SPTA) for multi-path programs. The analysis estimates the temporal contribution of an evict-on-miss, random replacement cache to the pWCET distribution of multi-path programs. The analysis uses a conservative join function that provides a proper over-approximation of the possible cache contents and the pWCET distribution on path convergence, irrespective of the actual path followed during execution. Simple program transformations are introduced that reduce the impact of path indeterminism while ensuring sound pWCET estimates. Evaluation shows that the proposed method is efficient at capturing locality in the cache, and substantially outperforms the only prior approach to SPTA for multi-path programs based on path merging. The evaluation results show incomparability with analysis for an equivalent deterministic system using an LRU cache. For some benchmarks the performance of LRU is better, while for others, the new analysis techniques show that random replacement has provably better performance.
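Two standard definitions underpin the above, stated here as background rather than as the paper's derivation: the exceedance-function description of a pWCET estimate, and the survival probability of a cached block under evict-on-miss random replacement.

```latex
% pWCET as an exceedance function: an upper bound, for every budget x, on
% the probability that the task's execution time C exceeds x on a run.
\[
  \bar{F}(x) \;\ge\; \Pr[\, C > x \,] \qquad \text{for all budgets } x .
\]
% Evict-on-miss random replacement, associativity N: each miss in the set
% evicts a uniformly chosen way, so a block already cached survives k
% intervening misses to its set with probability
\[
  \Pr[\text{still cached after } k \text{ misses}]
  \;=\; \left(\frac{N-1}{N}\right)^{k} .
\]
```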
real-time networks and systems | 2015
David Griffin; Benjamin Lesage; Iain Bate; Frank Soboczenski; Robert I. Davis
Given that real-time systems are specified to a degree of confidence, budget overruns should be expected to occur in a system at some point. When a budget overrun occurs, it is necessary to understand how long such a state persists, in order to determine if the fault tolerance of the system is adequate to handle the problem. However, given the rarity of budget overruns in testing, it cannot be assumed that sufficient data will be available to build an accurate model. Hence this paper presents a new application of Markov-chain-based modelling combined with forecasting techniques to determine an appropriate fault model, using Lossy Compression to fit the model to the available data. In addition, a new algorithm, DepET, for generating job execution times with dependencies is given for use in task simulators.
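A deliberately simple two-state illustration of the Markov-chain idea (NORMAL vs. OVERRUN), with hypothetical numbers; the paper's models, the lossy-compression fitting, and the DepET algorithm are considerably richer.

```python
# Two-state chain: a job either stays within its budget or overruns it.
def fit_overrun_chain(trace, budget):
    """Estimate P(overrun persists) from a sequence of observed execution times."""
    states = [t > budget for t in trace]          # True = budget overrun
    stay = trans = 0
    for prev, cur in zip(states, states[1:]):
        if prev:                                   # previous job overran...
            trans += 1
            stay += cur                            # ...and so did this one
    return stay / trans if trans else 0.0

def expected_overrun_length(p_stay):
    """Mean number of consecutive overrunning jobs (geometric persistence)."""
    return 1.0 / (1.0 - p_stay) if p_stay < 1.0 else float("inf")

trace = [9, 10, 12, 30, 31, 11, 9, 33, 10, 9, 10, 29, 30, 31, 12]  # hypothetical
p = fit_overrun_chain(trace, budget=20)
print(f"P(overrun persists)={p:.2f}, expected run length={expected_overrun_length(p):.2f}")
```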
real-time networks and systems | 2018
Benjamin Lesage; Stephen Law; Iain Bate
Timing analysis is an important part of the development of critical real-time systems. It stems from the need to provide evidence on the behaviour of the system, its compliance with requirements, and its timing bounds. The formal testing process is complicated and includes tests to achieve compliance with certification requirements. Where possible, testing should be performed on a host and then validated on the target. This is especially important for real systems where the target may not be available early in the project, or where target-based testing is expensive and time consuming. Meaningful host-based testing is difficult when it comes to timing analysis. Automation helps reduce the costs and moves testing earlier in the application development cycle. Moving testing earlier in the development cycle not only enables the testing to scale to whole systems, it also allows the risks of projects to be managed and software to be optimised before target-based testing is performed. In this paper, we extend existing work on achieving reliable coverage and High Water Mark (HWM) measurement, to scale its application to the analysis of a full system software build, automate the test process, and minimise the set of tests deployed on target. Our case study demonstrates the successful application of the approach on a large code base, i.e. an existing control system software code base. The paper ends with a position statement about how this work is instrumental both for future research and for industrial practice in automatically analysing the timing behaviour of systems and certifying mixed-criticality systems.
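One standard way to minimise the set of tests deployed on target is a greedy cover of the structural coverage items collected on the host; the sketch below is illustrative (test and item names are hypothetical) and the paper's actual selection strategy may differ.

```python
# Greedy set-cover heuristic over per-test coverage data.
def minimise_tests(coverage):
    """coverage: dict mapping test name -> set of coverage items it exercises.
    Returns a small subset of tests that together cover every reachable item."""
    remaining = set().union(*coverage.values())
    selected = []
    while remaining:
        # Pick the test covering the most still-uncovered items.
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        gained = coverage[best] & remaining
        if not gained:
            break                      # nothing left that any test can cover
        selected.append(best)
        remaining -= gained
    return selected

coverage = {
    "t1": {"f1", "f2"},
    "t2": {"f2", "f3", "f4"},
    "t3": {"f1", "f4"},
    "t4": {"f5"},
}
print(minimise_tests(coverage))   # e.g. ['t2', 't1', 't4']
```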
real-time networks and systems | 2017
David Griffin; Benjamin Lesage; Iain Bate; Frank Soboczenski; Robert I. Davis
While there is significant interest in the use of COTS multicore platforms for real-time systems, there have been very few practical methods to calculate the interference multiplier (i.e. the increase in execution time due to interference) between tasks on such systems. COTS multicore platforms present two distinct challenges: firstly, the variable interference between tasks competing for shared resources such as caches, and secondly, the complexity of the hardware mechanisms and policies used, which may result in a system that is very difficult, if not impossible, to analyse - assuming that the exact details of the hardware are even disclosed! This paper proposes a new technique, Forecast-Based Interference analysis, which mitigates both of these issues by combining measurement-based techniques with statistical techniques and forecast modelling to enable the prediction of an interference multiplier for a given set of tasks, in an automated and reliable manner. The combination of execution times and interference multipliers can be used both in the design (e.g. for specifying timing watchdogs) and in the analysis (e.g. schedulability analysis) of systems.
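As an illustrative starting point (not the paper's forecast model): an interference multiplier can be read off paired measurements as the ratio between contended and isolated execution-time quantiles; Forecast-Based Interference analysis then predicts this multiplier for new task sets rather than merely observing it.

```python
# Basic interference multiplier from measurements taken in isolation and
# under contention for shared resources (all numbers below are hypothetical).
def interference_multiplier(isolated, contended, quantile=1.0):
    """Ratio of the contended to the isolated execution-time quantile
    (quantile=1.0 compares high water marks)."""
    def q(samples, p):
        ordered = sorted(samples)
        idx = min(len(ordered) - 1, int(p * len(ordered)))
        return ordered[idx]
    return q(contended, quantile) / q(isolated, quantile)

isolated  = [100, 103, 101, 105, 102]   # task alone on the platform
contended = [130, 142, 128, 151, 137]   # task with cache-thrashing co-runners
m = interference_multiplier(isolated, contended)
print(f"interference multiplier = {m:.2f}")   # here 1.44
```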
Archive | 2014
Sebastian Altmeyer; Liliana Cucu-Grosjean; Robert I. Davis; Benjamin Lesage
Automotive - Safety & Security | 2017
Benjamin Lesage; David Griffin; Iain Bate; Frank Soboczenski