Jean-Paul Blanquart
Airbus Defence and Space
Publications
Featured research published by Jean-Paul Blanquart.
IEEE International Symposium on Fault-Tolerant Computing | 1998
Eric Totel; Jean-Paul Blanquart; Yves Deswarte; David Powell
Current safety-critical embedded systems support increasingly diverse and complex tasks, whose levels of criticality can differ widely. Rather than validating all software to the highest level of confidence, it is more efficient to focus the validation effort on the most critical components. It must then be ensured that residual design faults in low-criticality software cannot corrupt high-criticality components. This paper defines an object-oriented integrity policy that enforces this property. Each object is assigned an integrity level related to its criticality. The policy defines rules for accessing object methods so that no object can be corrupted by a lower-integrity component. Several kinds of objects are accommodated, enabling safety-critical applications to be designed with great flexibility. This is illustrated by a prototype implemented on a CORBA-compliant distributed system.
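For illustration, a minimal sketch of the kind of invocation-time check such a policy implies (the class, rule, and example objects below are assumptions made for the sketch, not the paper's actual mechanism):

```python
from dataclasses import dataclass

@dataclass
class Obj:
    """An object tagged with an integrity level (higher = more critical)."""
    name: str
    integrity: int

def invocation_allowed(caller: Obj, target: Obj, modifies_target: bool) -> bool:
    """Illustrative access rule: a method call that can modify the target is
    permitted only if the caller's integrity is at least the target's, so no
    object can be corrupted by a lower-integrity component."""
    return not (modifies_target and caller.integrity < target.integrity)

# A low-criticality logger may read from, but not write to,
# a high-criticality control object.
logger = Obj("logger", integrity=1)
control = Obj("attitude_control", integrity=3)
assert invocation_allowed(logger, control, modifies_target=False)
assert not invocation_allowed(logger, control, modifies_target=True)
```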
IEEE International Symposium on Fault-Tolerant Computing | 1996
Christophe Rabéjac; Jean-Paul Blanquart; Jean-Pierre Queille
This paper addresses the detection of errors due to residual software faults, particularly those with temporary effects. After positioning our approach among existing fault-tolerance and detection techniques, we propose detection mechanisms for such errors. These mechanisms are designed to detect both data and control-flow errors, and can be validated by both formal and fault-injection techniques. In particular, we propose a timed-trace technique that allows the expected software behavior to be specified and a generic control-flow checking automaton to be instantiated from this specification. The critical algorithms of this automaton are formally proved. To develop these mechanisms, we also propose a design and validation method based on a monitoring specification. Finally, we apply these techniques to two examples of embedded real-time software, both to validate them and to estimate their efficiency and applicability.
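To give the flavour of a control-flow checking automaton instantiated from a timed specification, here is a toy sketch (the states, transitions, and deadlines are invented; the paper's automaton and its formally proved algorithms are more elaborate):

```python
import time

class TimedFlowChecker:
    """Checks that observed checkpoints follow the specified transitions and
    that each transition completes within its deadline (in seconds)."""
    def __init__(self, transitions: dict[tuple[str, str], float]):
        self.transitions = transitions
        self.state = "init"
        self.last = time.monotonic()

    def checkpoint(self, new_state: str) -> None:
        now = time.monotonic()
        deadline = self.transitions.get((self.state, new_state))
        if deadline is None:
            raise RuntimeError(f"control-flow error: {self.state} -> {new_state}")
        if now - self.last > deadline:
            raise RuntimeError(f"timing error on {self.state} -> {new_state}")
        self.state, self.last = new_state, now

# Specification: init -> read -> compute -> write, each step within 0.1 s.
checker = TimedFlowChecker({("init", "read"): 0.1,
                            ("read", "compute"): 0.1,
                            ("compute", "write"): 0.1})
for step in ("read", "compute", "write"):
    checker.checkpoint(step)
```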
International Conference on Computer Safety, Reliability, and Security | 2014
Mathilde Machin; Fanny Dufossé; Jean-Paul Blanquart; Jérémie Guiochet; David Powell; Hélène Waeselynck
Autonomous systems operating in the vicinity of humans are critical in that they can potentially harm those humans. As the complexity of autonomous-system software makes the zero-fault objective practically unattainable, we adopt a fault-tolerance approach. We consider a separate safety channel, called a monitor, that is able to partially observe the system and to trigger safety-ensuring actuations. A systematic process for specifying a safety monitor is presented. Hazards are formally modeled, based on a risk analysis of the monitored system. A model checker is used to synthesize monitor behavior rules that ensure the safety of the monitored system. The potentially excessive limitation of system functionality due to the presence of the safety monitor is addressed through the notion of permissiveness. Tools have been developed to assist the process.
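A minimal sketch of the kind of synthesized behavior rule such a monitor could apply at run time (the observed variables, threshold, and interventions are invented for the example, not taken from the paper):

```python
def monitor_step(speed: float, obstacle_distance: float) -> str:
    """One illustrative safety rule: trigger the safety actuation (braking)
    only when the hazard is imminent, and stay permissive otherwise."""
    BRAKE_DISTANCE = 2.0  # metres within which braking must start (assumed)
    if obstacle_distance <= BRAKE_DISTANCE and speed > 0.0:
        return "brake"    # safety-ensuring actuation
    return "permit"       # permissiveness: do not restrict the system

assert monitor_step(speed=1.0, obstacle_distance=5.0) == "permit"
assert monitor_step(speed=1.0, obstacle_distance=1.5) == "brake"
```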
Pacific Rim International Symposium on Dependable Computing | 2012
Amina Mekki-Mokhtar; Jean-Paul Blanquart; Jérémie Guiochet; David Powell; Matthieu Roy
A systematic process for eliciting safety trigger conditions is presented. Starting from a risk analysis of the monitored system, critical transitions to catastrophic system states are identified and handled in order to specify safety margins on them. The conditions for the existence of such safety margins are given, and an alternative solution is proposed when no safety margin can be defined. The proposed process is illustrated on a robotic rollator.
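To make the notion of a safety margin concrete, a worked toy example (the dynamics and numbers are invented): the trigger threshold is backed off from the catastrophic threshold by the worst-case evolution of the monitored variable during the monitor's reaction time.

```python
def trigger_threshold(catastrophic_value: float,
                      worst_case_rate: float,
                      reaction_time: float) -> float:
    """Safety margin = worst-case growth of the monitored variable during
    the time needed to detect the condition and complete the safety action."""
    return catastrophic_value - worst_case_rate * reaction_time

# Catastrophic above 10.0; the variable grows at most 2.0/s;
# detection plus actuation takes up to 0.5 s.
assert trigger_threshold(10.0, 2.0, 0.5) == 9.0  # trigger at 9.0, not 10.0
```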
Systems, Man, and Cybernetics | 2018
Mathilde Machin; Jérémie Guiochet; Hélène Waeselynck; Jean-Paul Blanquart; Matthieu Roy; Lola Masson
Safety-critical systems with decisional abilities, such as autonomous robots, are about to enter our everyday life. Nevertheless, confidence in their behavior is still limited, particularly regarding safety. Considering the variety of hazards that can affect these systems, many techniques might be used to increase their safety. Among them, active safety monitors are a means of maintaining system safety in spite of faults or adverse situations. The specification of the safety rules implemented in such devices is of crucial importance, but has hardly been explored so far. In this paper, we propose a complete framework for the generation of these safety rules based on the concept of safety margin. The approach starts from a hazard analysis and uses formal verification techniques to automatically synthesize the safety rules. It has been successfully applied to an industrial use case: a collaborative mobile manipulator robot.
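As a toy stand-in for the model-checking-based synthesis, the sketch below enumerates a tiny discretised state space and, for each non-catastrophic state, keeps the most permissive intervention that avoids the catastrophic state (the states, interventions, and hazard are all invented):

```python
from itertools import product

SPEEDS = (0, 1, 2)         # discretised robot speed
ZONES = ("free", "near")   # distance class to a human

def catastrophic(speed: int, zone: str) -> bool:
    return zone == "near" and speed == 2

INTERVENTIONS = {"none": lambda s: s,              # ordered from most to
                 "slow": lambda s: max(s - 1, 0),  # least permissive
                 "stop": lambda s: 0}

def synthesize_rules() -> dict:
    """For each safe state, pick the most permissive intervention whose
    outcome stays non-catastrophic even if a human gets near."""
    rules = {}
    for speed, zone in product(SPEEDS, ZONES):
        if catastrophic(speed, zone):
            continue
        for name, act in INTERVENTIONS.items():
            if not catastrophic(act(speed), "near"):  # worst-case next zone
                rules[(speed, zone)] = name
                break
    return rules

print(synthesize_rules())  # (2, 'free') maps to 'slow', the others to 'none'
```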
Distributed Computing | 2012
Josef Widder; Martin Biely; Guenther Gridling; Bettina Weiss; Jean-Paul Blanquart
We consider the problem of reaching agreement in distributed systems in which some processes may deviate from their prescribed behavior before they eventually crash. We call this failure model “mortal Byzantine”. After discussing some application examples where this model is justified, we provide matching upper and lower bounds on the number of faulty processes, and on the required number of rounds in synchronous systems. We then continue our study by varying different system parameters. On the one hand, we consider the failure model under weaker timing assumptions, namely for partially synchronous systems and asynchronous systems with unreliable failure detectors. On the other hand, we vary the failure model in that we limit the occurrences of faulty steps that actually lead to a crash in synchronous systems.
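The failure model itself is easy to state in code. This toy simulation (parameters invented, and not the paper's algorithms or bounds) shows a process that behaves arbitrarily for a bounded number of rounds and then crashes permanently:

```python
import random

class MortalByzantineProcess:
    """A process that may send arbitrary values for a bounded number of
    rounds before crashing for good -- the 'mortal Byzantine' model."""
    def __init__(self, value: int, faulty_rounds: int):
        self.value = value
        self.faulty_rounds = faulty_rounds  # remaining rounds of deviation
        self.crashed = False

    def send(self):
        if self.crashed:
            return None                       # a crashed process sends nothing
        if self.faulty_rounds > 0:
            self.faulty_rounds -= 1
            if self.faulty_rounds == 0:
                self.crashed = True           # the deviation ends in a crash
            return random.randint(-100, 100)  # arbitrary (Byzantine) value
        return self.value                     # correct behavior

p = MortalByzantineProcess(value=7, faulty_rounds=2)
print([p.send() for _ in range(4)])  # two arbitrary values, then None, None
```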
A generic fault-tolerant architecture for real-time dependable systems | 2001
Eric Totel; Ljerka Beus-Dukic; Jean-Paul Blanquart; Yves Deswarte; Vincent Nicomette; David Powell; Andy J. Wellings
The purpose of the multilevel integrity mechanisms of the GUARDS architecture is to protect critical components from the propagation of errors due to residual design faults in less-critical components. The notions of multiple integrity levels and multiple criticality levels are very tightly linked, but there is an important distinction. Integrity levels are associated with an integrity policy that defines what is allowed in terms of data flow between levels and resource utilisation by the components at different levels [Totel 1998]. Criticality levels are defined in terms of the potential consequences of failures of components at each level.
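Read as a Biba-style flow rule, the data-flow part of such a policy could look like the sketch below (an assumption for illustration; the actual GUARDS rules are richer and are given in [Totel 1998]):

```python
def flow_allowed(source_level: int, destination_level: int) -> bool:
    """Illustrative Biba-style reading: data may flow only towards equal or
    lower integrity levels, so low-integrity data never reaches, and hence
    never corrupts, a higher-integrity component."""
    return source_level >= destination_level

assert flow_allowed(3, 1)      # a critical component may feed a logger
assert not flow_allowed(1, 3)  # logger output must not drive critical code
```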
Dependable Systems and Networks | 2016
S. Bourbouse; Jean-Paul Blanquart; J. F. Gajewski; C. Lahorgue
This paper reports on a one-year study granted in early 2015 by ESA/ESTEC to Airbus Defence and Space. The study identifies and analyses the main reliability models available for evaluating the failure rate of each EEE (electrical, electronic, and electromechanical) component used in space systems, and assesses their suitability in the space context as the basis for an improved reliability prediction approach.
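For background on what such models compute, a minimal series-system roll-up under the standard constant-failure-rate (exponential) assumption; the component values below are invented, not the study's results:

```python
import math

FIT = 1e-9  # failures per hour: 1 FIT = one failure per 10^9 device-hours

component_rates = {   # hypothetical EEE parts on one board
    "capacitor": 2 * FIT,
    "op_amp":    5 * FIT,
    "mosfet":    8 * FIT,
}

# For a series system with constant failure rates, the rates simply add,
# and reliability over a mission of duration t is exp(-lambda * t).
lambda_system = sum(component_rates.values())
mission_hours = 15 * 365.25 * 24  # a 15-year mission
reliability = math.exp(-lambda_system * mission_hours)

print(f"system failure rate: {lambda_system / FIT:.0f} FIT")
print(f"15-year reliability: {reliability:.6f}")
```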
Dependable Systems and Networks | 2007
Josef Widder; Günther Gridling; Bettina Weiss; Jean-Paul Blanquart
Archive | 1996
Jean Arlat; Jean-Paul Blanquart; Alain Costes; Yves Crouzet; Yves Deswarte; Jean-Charles Fabre; H. Guillermain; Mohamed Kaaniche; Karama Kanoun; J.-C. Laprie; C. Mazet; David Powell; Christophe Rabéjac; P. Thevenod