Éric Piel
Delft University of Technology
Publications
Featured research published by Éric Piel.
ACM Transactions on Embedded Computing Systems | 2011
Abdoulaye Gamatié; Sébastien Le Beux; Éric Piel; Rabie Ben Atitallah; Anne Etien; Philippe Marquet; Jean-Luc Dekeyser
Modern embedded systems integrate more and more complex functionalities. At the same time, advances in semiconductor technology make it possible to place ever more hardware resources on a chip for their execution. Massively parallel embedded systems specifically deal with the optimized usage of such hardware resources to execute their functionalities efficiently. The design of these systems centres on the following challenging issues: first, how to exploit parallelism in order to increase performance; second, how to abstract implementation details in order to manage complexity; third, how to refine these abstract representations in order to produce efficient implementations. This article presents the Gaspard design framework for massively parallel embedded systems as a solution to these issues. Gaspard uses the repetitive Model of Computation (MoC), which offers a powerful expression of the regular parallelism available in both system functionality and architecture. Embedded systems are designed at a high abstraction level with the MARTE (Modeling and Analysis of Real-time and Embedded systems) standard profile, in which our repetitive MoC is described by the Repetitive Structure Modeling (RSM) package. Based on the Model-Driven Engineering (MDE) paradigm, MARTE models are refined towards lower abstraction levels, which makes design space exploration possible. By combining all these capabilities, Gaspard allows designers to automatically generate code for formal verification, simulation and hardware synthesis from high-level specifications of high-performance embedded systems. Its effectiveness is demonstrated with the design of an embedded system for a multimedia application.
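To make the repetitive MoC more concrete, here is a minimal Python sketch of the underlying idea: an elementary task applied independently at every point of a repetition space (here, the tiles of an array). The function name, the tiling scheme and the NumPy-based data layout are illustrative assumptions; they are not part of Gaspard or the MARTE RSM package.

```python
import numpy as np

def repetitive_task(elementary_task, data, tile_shape):
    """Apply an elementary task to every tile of a multidimensional array.

    Each tile is independent, so the repetition space (the grid of tile
    indices) can be executed in any order or fully in parallel.
    """
    reps = tuple(d // t for d, t in zip(data.shape, tile_shape))
    out = np.empty_like(data)
    # Iterate over the repetition space; each point selects one tile.
    for idx in np.ndindex(*reps):
        sl = tuple(slice(i * t, (i + 1) * t) for i, t in zip(idx, tile_shape))
        out[sl] = elementary_task(data[sl])
    return out

# Example: an 8x8 image processed as independent 2x2 tiles.
image = np.arange(64, dtype=float).reshape(8, 8)
result = repetitive_task(lambda tile: tile * 2.0, image, (2, 2))
```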
International Conference on Quality Software | 2010
Alberto Gonzalez-Sanchez; Éric Piel; Hans-Gerhard Gross; Arjan J. C. van Gemund
Test prioritization techniques select test cases that maximize confidence in the correctness of the system when the resources for quality assurance (QA) are limited. In the event of a test failing, the fault at the root of the failure has to be localized, adding an extra debugging cost that has to be taken into account as well. However, test suites that are prioritized for failure detection can reduce the amount of useful information for fault localization. This deteriorates the quality of the diagnosis provided, making the subsequent debugging phase more expensive and defeating the purpose of the test cost minimization. In this paper we introduce a new test case prioritization approach that maximizes the improvement of the diagnostic information per test. Our approach minimizes the loss of diagnostic quality in the prioritized test suite. When QA cost is considered as the combination of testing cost and debugging cost, the results of our test case prioritization approach on the Siemens set show up to a 53% reduction of the overall QA cost compared with the next best technique.
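As a rough illustration of prioritizing for diagnostic information rather than failure detection alone, the Python sketch below greedily picks the test whose coverage best splits the set of components that could still be at fault, using partition entropy as a stand-in for expected diagnostic gain. This is an assumed, simplified heuristic, not the exact algorithm evaluated in the paper.

```python
import math

def entropy(group_sizes):
    """Shannon entropy of a partition of equally likely fault candidates."""
    total = sum(group_sizes)
    return -sum((g / total) * math.log2(g / total) for g in group_sizes if g)

def prioritize(tests, components):
    """Greedily order tests so each one maximally splits the set of
    components that could still be at fault (an information-gain proxy
    for expected diagnostic quality).

    `tests` maps a test name to the set of components it covers.
    """
    remaining = dict(tests)
    candidates = [frozenset(components)]   # current ambiguity groups
    order = []
    while remaining:
        def gain(cov):
            # Entropy of the partition obtained by splitting every ambiguity
            # group into its covered / not-covered parts.
            split = []
            for group in candidates:
                split += [len(group & cov), len(group - cov)]
            return entropy(split)
        best = max(remaining, key=lambda t: gain(remaining[t]))
        cov = remaining.pop(best)
        order.append(best)
        candidates = [g for grp in candidates
                      for g in (grp & cov, grp - cov) if g]
    return order

tests = {"t1": {"a", "b"}, "t2": {"b", "c"}, "t3": {"c", "d"}}
print(prioritize(tests, {"a", "b", "c", "d"}))   # e.g. ['t1', 't2', 't3']
```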
International Conference on Software Testing, Verification and Validation Workshops | 2009
Alberto González; Éric Piel; Hans-Gerhard Gross
Runtime testing is emerging as the solution for the integration and validation of software systems where traditional development-time integration testing cannot be performed, such as Systems of Systems or Service-Oriented Architectures. However, performing tests during deployment or in-service time introduces interference problems, such as undesired side effects on the state of the system or the outside world. This paper presents a qualitative model of runtime testability that complements Binder's classical testability model, and a generic measurement framework for quantitatively assessing the degree of runtime testability of a system based on the ratio of what can be tested at runtime vs. what would have been tested during development time. A measurement is devised for the concrete case of architecture-based test coverage, by using a graph model of the system's architecture. Concretely, two testability studies are performed on two component-based systems, showing how to measure the runtime testability of a system.
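The coverage-ratio idea can be sketched in a few lines of Python. The example below is a hypothetical simplification: it treats runtime testability as the fraction of architectural connections whose endpoints are free of untestable side effects, with the component and connection names invented for illustration.

```python
def runtime_testability(connections, untestable_components):
    """Rough coverage-ratio sketch: fraction of architectural connections
    that can still be exercised at runtime, i.e. connections whose two
    endpoints are both free of untestable side effects.

    `connections` is a list of (provider, consumer) pairs taken from the
    system's architecture graph.
    """
    testable = [c for c in connections
                if not (set(c) & set(untestable_components))]
    return len(testable) / len(connections) if connections else 1.0

# Hypothetical architecture: 'radar' writes to a live database, so the
# connections touching it cannot be exercised in-service.
arch = [("sensor", "merger"), ("merger", "radar"), ("merger", "display")]
print(runtime_testability(arch, {"radar"}))   # 2/3 of the links remain testable
```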
Software: Practice and Experience | 2011
Alberto Gonzalez-Sanchez; Éric Piel; Hans-Gerhard Gross; Arjan J. C. van Gemund
During regression testing, test prioritization techniques select test cases that maximize confidence in the correctness of the system when the resources for quality assurance (QA) are limited. In the event of a test failing, the fault at the root of the failure has to be localized, adding an extra debugging cost that has to be taken into account as well. However, test suites that are prioritized for failure detection can reduce the amount of useful information for fault localization. This deteriorates the quality of the diagnosis provided, making the subsequent debugging phase more expensive and defeating the purpose of the test cost minimization. In this paper we introduce a new test case prioritization approach that maximizes the improvement of the diagnostic information per test. Our approach minimizes the loss of diagnostic quality in the prioritized test suite. When QA cost is considered as a combination of testing cost and debugging cost, the results of our test case prioritization approach on our benchmark set show reductions of up to 60% of the overall combined cost of testing and debugging, compared with the next best technique.
Self-Adaptive and Self-Organizing Systems | 2011
Éric Piel; Alberto Gonzalez-Sanchez; Hans-Gerhard Gross; Arjan J. C. van Gemund
An essential requirement for the operation of self-adaptive systems is information about their internal health state, i.e., the extent to which the constituent software and hardware components are still operating reliably. Accurate health information enables systems to recover automatically from (intermittent) failures in their components through selective restarting or self-reconfiguration. This paper explores and assesses the utility of Spectrum-based Fault Localisation (SFL) combined with automatic health monitoring for self-adaptive systems. The applicability of this combination is evaluated through simulation of online diagnosis scenarios, and through implementation in an adaptive surveillance system inspired by our industrial partner. The results of the studies performed confirm that the combination of SFL with online monitoring can successfully provide health information and locate problematic components, so that adequate self-* techniques can be deployed.
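For readers unfamiliar with SFL, the sketch below shows the general technique: correlate each component's involvement in passing and failing runs with the run outcomes and rank components by a similarity coefficient. The Ochiai coefficient used here is a common choice in the SFL literature; the abstract does not state which coefficient the paper uses, so treat that as an assumption.

```python
import math

def sfl_ranking(spectra, outcomes):
    """Spectrum-based fault localisation: rank components by the Ochiai
    similarity between their activity pattern and the error vector.

    `spectra[i]` is the set of components involved in run i;
    `outcomes[i]` is True when run i failed.
    """
    components = set().union(*spectra)
    total_failed = sum(outcomes)
    scores = {}
    for c in components:
        failed_and_active = sum(1 for s, fail in zip(spectra, outcomes)
                                if fail and c in s)
        passed_and_active = sum(1 for s, fail in zip(spectra, outcomes)
                                if not fail and c in s)
        denom = math.sqrt(total_failed * (failed_and_active + passed_and_active))
        scores[c] = failed_and_active / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Three monitored runs; the second and third fail and both involve 'filter'.
runs = [{"source", "filter"}, {"filter", "sink"}, {"filter"}]
print(sfl_ranking(runs, [False, True, True]))   # 'filter' ranks first
```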
Automated Software Engineering | 2008
Alberto González; Éric Piel; Hans-Gerhard Gross
Systems of Systems (SoS) represent a novel kind of system, for which runtime evolution is a key requirement, as components join and leave during runtime. Current component integration and verification techniques are not sufficient in such a dynamic environment. In this paper we present ATLAS, an architectural framework that enables the runtime integration and verification of a system, based on the built-in test paradigm. ATLAS augments components with two specific interfaces: one to add and remove tests, and one to provide adequate testability features to run these tests. To illustrate our approach, we present a case study of a dynamic reconfiguration scenario of components in the maritime safety and security domain, using our implementation of ATLAS for the Fractal component model. We demonstrate that built-in testing can be extended beyond development-time component integration testing, to support runtime reconfiguration and verification of component-based systems.
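The built-in test idea of augmenting components with a test-management interface and a testability interface can be sketched as follows. All class and method names are hypothetical; this is not the ATLAS or Fractal API, only an illustration of the two kinds of interfaces the paper describes.

```python
class TestableComponent:
    """Hypothetical built-in-testing wrapper: the component exposes one
    interface to register/remove runtime tests and another to provide a
    testability hook (here, an isolating 'test mode' flag)."""

    def __init__(self, name):
        self.name = name
        self._tests = {}
        self.test_mode = False          # testability feature: isolate side effects

    # --- test management interface --------------------------------------
    def add_test(self, test_id, test_fn):
        self._tests[test_id] = test_fn

    def remove_test(self, test_id):
        self._tests.pop(test_id, None)

    # --- testability interface -------------------------------------------
    def run_tests(self):
        """Run all registered tests with the component put in test mode."""
        self.test_mode = True
        try:
            return {tid: fn(self) for tid, fn in self._tests.items()}
        finally:
            self.test_mode = False


# Usage: after a runtime reconfiguration, re-register and run integration tests.
merger = TestableComponent("track-merger")
merger.add_test("runs_isolated", lambda c: c.test_mode is True)
print(merger.run_tests())   # {'runs_isolated': True}
```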
International Conference on Testing Software and Systems | 2010
Éric Piel; Alberto Gonzalez-Sanchez; Hans-Gerhard Gross
Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It permits defining, using BIT, the behaviour of several components assembled to process a flow of data. Test cases are defined in a way that makes them simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the user's definitions. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data flows than through other integration testing approaches.
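A virtual component can be pictured as a named group of data-flow stages that is tested as if it were a single component. The Python sketch below is an assumed simplification of that idea; the stage names and the test interface are illustrative, not the implementations described in the paper.

```python
class VirtualComponent:
    """Hypothetical sketch of a virtual component: a named group of
    data-flow stages exercised as if it were a single component."""

    def __init__(self, name, stages):
        self.name = name
        self.stages = stages            # ordered list of callables

    def process(self, data):
        """Push one data item through the grouped flow."""
        for stage in self.stages:
            data = stage(data)
        return data

    def run_test(self, test_input, expected):
        """A built-in integration test: feed input, compare the flow's output."""
        return self.process(test_input) == expected


# Three real components assembled into one testable unit of behaviour.
decode = lambda msg: msg.strip()
parse = lambda msg: msg.split(",")
to_track = lambda fields: {"id": fields[0], "lat": float(fields[1])}

ais_chain = VirtualComponent("ais-ingest", [decode, parse, to_track])
print(ais_chain.run_test(" MMSI123,52.0 \n", {"id": "MMSI123", "lat": 52.0}))
```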
Proceedings of the 2009 ESEC/FSE Workshop on Software Integration and Evolution @ Runtime | 2009
Éric Piel; Alberto Gonzalez-Sanchez
Systems of Systems are large-scale, information-centric, component-based systems. Because they can be more easily expressed as an information flow, they are built following the data-flow paradigm. These systems have high availability requirements that make runtime evolution necessary. This means that integration and system testing have to be performed at runtime as well. Existing techniques for runtime integration and testing usually focus on component-based systems that follow the client-server paradigm, and are not well suited for data-flow systems. In this paper we present virtual components, a way of defining units of data-flow behaviour that greatly simplifies the definition and maintenance of integration tests when the system evolves at runtime. We present and discuss an example of how to use virtual components for this purpose.
Parallel Processing and Applied Mathematics | 2005
Éric Piel; Philippe Marquet; Julien Soula; Jean-Luc Dekeyser
ARTiS, a real-time extension of the GNU/Linux scheduler dedicated to SMP (Symmetric Multi-Processor) systems, is proposed. ARTiS exploits the SMP architecture to guarantee the preemption of a processor when the system has to schedule a real-time task. The basic idea of ARTiS is to assign a selected set of processors to real-time operations. A migration mechanism for non-preemptible tasks ensures a low latency level on these real-time processors. Furthermore, specific load-balancing strategies allow ARTiS to benefit from the full power of the SMP systems: the real-time reservation, while guaranteed, is not exclusive and does not imply a waste of resources. ARTiS has been implemented as a modification of the Linux scheduler. This paper details the performance evaluation we conducted on this implementation. The observed latency levels show significant improvements when compared to the standard Linux scheduler.
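ARTiS itself is a kernel modification, but the core idea of reserving a subset of processors for real-time work can be illustrated at user level with CPU affinity masks. The sketch below uses Python's Linux-specific os.sched_setaffinity and assumes a machine with more than two CPUs; it only mimics the partitioning, not the kernel-level migration of non-preemptible tasks.

```python
import os

# Illustration only: ARTiS partitions CPUs inside the kernel, but the basic
# idea of reserving processors for real-time tasks can be mimicked at user
# level with CPU affinity masks (Linux-specific API; assumes > 2 CPUs).
RT_CPUS = {0, 1}                                   # processors reserved for RT tasks
ALL_CPUS = set(range(os.cpu_count()))
NRT_CPUS = ALL_CPUS - RT_CPUS                      # everything else runs here

def pin_current_process(realtime: bool) -> None:
    """Restrict the calling process to the real-time or the general CPU set."""
    cpus = RT_CPUS if realtime else NRT_CPUS
    os.sched_setaffinity(0, cpus)                  # 0 means "this process"

if __name__ == "__main__":
    pin_current_process(realtime=False)
    print("now allowed on CPUs:", sorted(os.sched_getaffinity(0)))
```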
Situation Awareness with Systems of Systems | 2013
Alberto Gonzalez-Sanchez; Éric Piel; Hans-Gerhard Gross; Arjan J. C. van Gemund
Maritime Safety and Security Systems of Systems (MSS SoS) evolve dynamically during operation, i.e., at runtime. After each runtime evolution, the quality of the integrated system of systems has to be verified again. It is therefore necessary to devise an appropriate verification strategy that not only achieves this goal, but also minimizes the cost, e.g., time, resources and disruption, of checking after each modification. During testing, test prioritization techniques heuristically select test cases to minimize the time to detect the presence of a fault. However, this overlooks the fact that once a fault has been detected, it must still be localized and isolated or repaired. Test suites prioritized for fault detection can reduce the amount of useful information for fault localization, increasing the cost of fault localization, e.g., with respect to randomly chosen tests. In this chapter we introduce fault localization prioritization and two new test case prioritization heuristics that greatly reduce the cost of fault localization (up to 80%) with almost no increase in the fault detection cost.