Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Philippe Polet is active.

Publication


Featured research published by Philippe Polet.


Safety Science | 2003

Modelling border-line tolerated conditions of use (BTCU) and associated risks

Philippe Polet; Frédéric Vanderhaegen; René Amalberti

For the design of most technical systems, a desirable safe field of use is calculated from the system's technical constraints and from expectations of human capacities and limitations. Performance incursions outside the safe field are then limited by means of hard protections, instructions, education, and regulations. However, once in service, the socio-technical conditions of work create conditions for performance to migrate and stabilise outside the expected safe field of use. The stabilisation of migration results from a compromise between global performance improvement, individual additional advantages, and apparent risk control. This paper proposes a double modelling approach to such migrations: first, a cognitive model of the production of migrations; second, a mathematical safety analysis of severity and consequences. Both approaches lead to methodologies for taking BTCU into account during design. The conclusions highlight the impossibility of avoiding such in-service migrations of use and advocate early consideration of potential migrations in order to improve the robustness of safety analysis techniques. The field example chosen for demonstration is the design and use of a rotary press.


Information Sciences | 2011

Using adjustable autonomy and human-machine cooperation to make a human-machine system resilient - Application to a ground robotic system

Stéphane Zieba; Philippe Polet; Frédéric Vanderhaegen

This study concerns autonomous ground vehicles performing missions of observation or surveillance. These missions are accomplished under the supervision of human operators, who can also remotely control the unmanned vehicle. This kind of human-machine system is likely to face perturbations in a dynamic natural environment, yet overloaded human operators are not always able to manage them. The objective of this study is to provide such systems with ways to anticipate, react to and recover from perturbations; in other words, this work aims at improving system resilience so that the system can better manage perturbations. This paper presents a model of human-robot cooperative control that helps to improve the resilience of the human-machine system by making the level of autonomy adjustable. A formalism of agent autonomy is proposed according to the semantic aspects of autonomy and the agents' activity levels. This formalism is then used to describe the activity levels of the global human-machine system. Hierarchical decision-making methods and planning algorithms are also proposed to implement these levels of activity. Finally, an experimental illustration on a micro-world is presented in order to evaluate the feasibility and application of the proposed model.


Cognition, Technology & Work | 2002

Theory of Safety-Related Violations of System Barriers

Philippe Polet; Frédéric Vanderhaegen; Peter A. Wieringa

This paper focuses on a theory of the safety-related violations that occur in practice during normal operational conditions but that are not taken into account during risk analysis. These safety-related violations are so-called barrier crossings. A barrier crossing is associated with an operational risk that constitutes a combination of costs: the cost of crossing the barrier, the benefit (negative cost) immediately after crossing the barrier, and a possible deficit (extreme cost) due to exposure to the hazardous conditions created after the barrier has been crossed. A utility function is discussed that describes the consequence-driven behaviour and uses an assessment of these cost functions. An industrial case study illustrates the application of the proposed theory.
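The cost structure described in the abstract can be sketched as a simple expected-utility calculation. This is an illustrative reading only, not the paper's actual utility function; the function name, form, and numbers are invented for the example.

```python
# Hypothetical sketch of a consequence-driven utility for a barrier crossing:
# expected utility = benefit - crossing cost - p(hazard) * deficit.
# All names and values here are illustrative, not taken from the paper.

def crossing_utility(benefit, cost, deficit, p_hazard):
    """Expected utility of crossing a barrier, given its three cost terms."""
    return benefit - cost - p_hazard * deficit

# A crossing looks attractive to the operator when its expected utility
# is positive, even though a large deficit lurks behind a small probability.
u = crossing_utility(benefit=10.0, cost=2.0, deficit=50.0, p_hazard=0.05)
# u = 10 - 2 - 2.5 = 5.5, so this crossing would be judged attractive.
```

Under such a reading, raising the perceived hazard probability or the crossing cost is what makes respecting the barrier the rational choice.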


Reliability Engineering & System Safety | 2011

A Benefit/Cost/Deficit (BCD) model for learning from human errors

Frédéric Vanderhaegen; Stéphane Zieba; Simon Enjalbert; Philippe Polet

This paper proposes an original model for interpreting human errors, mainly violations, in terms of benefits, costs and potential deficits. This BCD model is then used as an input framework to learn from human errors, and two systems based on this model are developed: a case-based reasoning system and an artificial neural network system. These systems are used to predict a specific car-driving violation: not respecting the priority-to-the-right rule, which is a decision to remove a barrier. Both prediction systems learn from previous violation occurrences, using the BCD model and four criteria: safety, for identifying the deficit or danger; and opportunity for action, driver comfort, and time spent, for identifying the benefits or costs. Applied to the prediction of car-driving violations, the learning systems achieve a correct-prediction rate of over 80% after 10 iterations. These results are validated for non-respect of the priority-to-the-right rule.
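A case-based reasoning predictor of the kind the abstract describes can be sketched as nearest-neighbour retrieval over the four named criteria. The data, distance measure, and helper name below are invented for illustration; they are not the authors' implementation.

```python
# Minimal case-based reasoning sketch, assuming cases are scored on the four
# criteria named in the abstract: safety, opportunity for action, driver
# comfort, and time spent. All case data here are invented.

def predict_violation(case, known_cases):
    """Predict by copying the outcome of the most similar known case."""
    def dist(a, b):
        # Euclidean distance between two criterion vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = min(known_cases, key=lambda kc: dist(case, kc[0]))
    return nearest[1]

# (safety, opportunity, comfort, time) -> did the driver violate the rule?
cases = [((0.9, 0.2, 0.3, 0.1), False),
         ((0.2, 0.8, 0.7, 0.9), True)]

# A new situation close to the second case is predicted as a violation.
prediction = predict_violation((0.3, 0.7, 0.6, 0.8), cases)
```

The real systems additionally learn iteratively from each new violation occurrence; this sketch only shows the retrieval step.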


Engineering Applications of Artificial Intelligence | 2009

A reinforced iterative formalism to learn from human errors and uncertainty

Frédéric Vanderhaegen; Philippe Polet; Stéphane Zieba

This paper proposes a reinforced iterative formalism to learn from intentional human errors, called barrier removal, and from uncertainty on human-error parameters. Barrier removal consists of misusing a safety barrier that human operators are supposed to respect. The iterative learning formalism is based on a human-action formalism that interprets barrier removal in terms of consequences, i.e. benefits, costs and potential dangers or deficits. Two functions are required: a similarity function, to search for a known case close to the input case for which the human action has to be predicted, and a reinforcement function, to reinforce the links between similar known cases. This reinforced iterative formalism is applied to a railway simulation in which the prediction of barrier removal is based on subjective data.
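The two functions the abstract names can be sketched as follows, with invented forms: an inverse-distance similarity for case retrieval and a reinforcement rule that strengthens a link in proportion to that similarity. Neither form is from the paper.

```python
# Hypothetical shapes for the two required functions; the actual paper's
# definitions may differ.

def similarity(a, b):
    """Inverse-distance similarity between two consequence vectors (in (0, 1])."""
    return 1.0 / (1.0 + sum(abs(x - y) for x, y in zip(a, b)))

def reinforce(weight, sim, rate=0.1):
    """Strengthen the link between two cases in proportion to their similarity."""
    return weight + rate * sim

# Identical cases have similarity 1.0; the more they differ, the weaker
# the reinforcement applied to their link.
w = reinforce(0.5, similarity((1, 0, 1), (1, 1, 1)))
```

Iterating this retrieval-and-reinforcement loop over observed cases is what makes the formalism "reinforced iterative".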


Cognition, Technology & Work | 2010

Principles of adjustable autonomy: a framework for resilient human–machine cooperation

Stéphane Zieba; Philippe Polet; Frédéric Vanderhaegen; Serge Debernard

Unmanned ground vehicles tend to be more and more autonomous, but neither complete teleoperation nor full autonomy is efficient enough to deal with all possible situations. To be efficient, the human–robot system must be able to anticipate, react and recover from errors of different kinds, i.e., to be resilient. From this observation, this paper presents a survey of the resilience of human–machine systems and the means to control it. The resilience of a system can be defined as the ability to maintain or recover a stable state when subject to disturbance. Adjustable autonomy and human–machine cooperation are considered as means of resilience for the system. The paper then proposes three indicators to assess different meanings of resilience: foresight and avoidance of events, reaction to events, and recovery from the occurrence of events. The third of these metrics takes into consideration the concept of affordances, which allows a common representation of the opportunities for action between the automated system and its environment.


Reliability Engineering & System Safety | 2004

Artificial neural network for violation analysis

Zhicheng Zhang; Philippe Polet; Frédéric Vanderhaegen; Patrick Millot

Barrier removal (BR) is a safety-related violation, and it can be analyzed in terms of benefits, costs, and potential deficits. In order to allow designers to integrate BR into the risk analysis during the initial design phase or during re-design work, we propose a connectionist method integrating self-organizing maps (SOM). The basic SOM is an artificial neural network that, on the basis of the information contained in a multi-dimensional space, generates a space of lesser dimensions. Three algorithms (unsupervised SOM, supervised SOM, and hierarchical SOM) have been developed to permit BR classification and prediction in terms of the different criteria. The proposed method can be used, on the one hand, to predict the possibility level of a new or changed barrier (prospective analysis) and, on the other hand, to synthetically regroup the BR of a given human–machine system (retrospective analysis). We applied this method to the BR analysis of an experimental railway simulator, and our preliminary results are presented here.
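The dimensionality-reducing mapping the abstract describes can be illustrated with a toy unsupervised SOM on a one-dimensional grid. This is a generic textbook sketch with invented data, not the authors' algorithms or parameters.

```python
import random

# Toy unsupervised self-organizing map on a 1-D grid of units. Each unit
# holds a weight vector; training pulls the best-matching unit (BMU) and
# its grid neighbours toward each sample, so nearby units come to represent
# nearby regions of the input space.

def train_som(data, n_units, epochs=50, lr=0.5, seed=0):
    rng = random.Random(seed)
    weights = [[rng.random() for _ in data[0]] for _ in range(n_units)]
    for _ in range(epochs):
        for x in data:
            # Best-matching unit: the unit whose weights are closest to x.
            bmu = min(range(n_units),
                      key=lambda i: sum((w - v) ** 2
                                        for w, v in zip(weights[i], x)))
            # Update the BMU and its immediate grid neighbours.
            for i in range(n_units):
                h = 1.0 if abs(i - bmu) <= 1 else 0.0
                weights[i] = [w + lr * h * (v - w)
                              for w, v in zip(weights[i], x)]
        lr *= 0.95  # shrink the learning rate each epoch
    return weights

# Two invented 2-D clusters; after training, samples from different clusters
# should map to different units on the grid.
data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.95], [0.85, 0.9]]
som = train_som(data, n_units=3)
```

In the prospective use the paper describes, the coordinates would be BR criterion scores rather than raw points, and a new or changed barrier would be classified by the unit it maps to.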


International Journal of Adaptive and Innovative Systems | 2009

Resilience of a human-robot system using adjustable autonomy and human-robot collaborative control

Stéphane Zieba; Philippe Polet; Frédéric Vanderhaegen; Serge Debernard

Unmanned ground vehicles tend to be more and more autonomous, but nowadays neither complete teleoperation nor full autonomy is efficient enough to deal with all possible situations. To be efficient, the human-robot system must be able to anticipate, react, recover and even learn from errors of different kinds, i.e., to be resilient. Adjustable autonomy is a way to react to unplanned events and to optimise the task allocation between the human operator and the robot. It can thus be seen as a component of the resilience of a system, which can be defined as the ability to maintain or recover a stable state when subject to disturbance. In this paper, adjustable autonomy and human-robot cooperation are considered as means to control resilience. The paper then proposes an approach to designing a resilient human-robot system through defined criteria that aim at assessing the transitions between modes of autonomy. Perspectives of this approach are intended to provide metrics for adjusting autonomy in the most resilient way. First results from experiments performed on a micro-world provide a preliminary assessment of the different meanings of resilience of the system using the proposed metrics.


Engineering Applications of Artificial Intelligence | 2012

Iterative learning control based tools to learn from human error

Philippe Polet; Frédéric Vanderhaegen; Stéphane Zieba

This paper proposes a new alternative for identifying and predicting intentional human errors based on the benefits, costs and deficits (BCD) associated with particular human deviations. It is based on an iterative learning system. Two approaches are proposed, which consist in predicting barrier removal, i.e., the non-respect of rules by human operators, and in using the developed iterative learning system to learn from barrier-removal behaviours. The first approach reinforces the parameters of a utility function associated with the respect of the rule; this reinforcement directly affects the output of the predictive tool. The second approach reinforces the knowledge of the learning tool stored in its database. Data from an experimental study of driving situations in a car simulator have been used with both tools in order to predict the behaviour of drivers. The two predictive tools make predictions from subjective data provided by drivers, concerning their subjective evaluation of the BCD related to the respect of the priority-to-the-right rule.


Advances in Human-computer Interaction | 2009

Human behaviour analysis of barrier deviations using a benefit-cost-deficit model

Philippe Polet; Frédéric Vanderhaegen; Patrick Millot

A Benefit-Cost-Deficit (BCD) model is proposed for analyzing such intentional human errors as barrier removal: the deliberate non-respect of the rules and instructions governing the use of a given system. The proposed BCD model attempts to explain and predict barrier removal in terms of the benefits, costs, and potential deficits associated with this human behaviour. The results of an experimental study conducted on a railway simulator (TRANSPAL) are used to illustrate the advantages of the BCD model. In this study, human operators were faced with barriers that they could choose to deactivate or not, and their decisions were analyzed in an attempt to explain and predict their choices. The analysis highlights that operators make their decisions by balancing several criteria: although barriers are safety-related elements, the decision to remove them is not guided by the safety criterion alone; it is also motivated by criteria such as productivity, workload and quality. Prediction results supported by the BCD model demonstrate the predictability of barrier violations.

Collaboration


Dive into Philippe Polet's collaborations.

Top Co-Authors

Frédéric Vanderhaegen
Centre national de la recherche scientifique

Patrick Millot
Centre national de la recherche scientifique

Serge Debernard
Centre national de la recherche scientifique

Karima Sedki
University of Valenciennes and Hainaut-Cambresis

Peter A. Wieringa
Delft University of Technology

Denis Berdjag
Centre national de la recherche scientifique

Pierre Mayenobe
Centre national de la recherche scientifique