Publications


Featured research published by Frédéric Vanderhaegen.


Safety Science | 2003

Modelling border-line tolerated conditions of use (BTCU) and associated risks

Philippe Polet; Frédéric Vanderhaegen; René Amalberti

For the design of most technical systems, a desirable safe field of use is calculated from the system's technical constraints and from expectations of human capacities and limitations. Performance incursions outside the safe field are then limited by means of hard protections, instructions, education, and regulations. However, once in service, the socio-technical conditions of work create conditions for performance to migrate and stabilise outside the expected safe field of use. The stabilisation of migration results from a compromise between global performance improvement, individual additional advantages, and apparent risk control. This paper proposes a double modelling approach to such migrations: first, a cognitive model of the production of migrations, and second, a mathematical safety analysis of severity and consequences. Both approaches lead to methodologies for taking BTCU into account during design. The conclusions highlight the impossibility of avoiding such in-service migrations of use and advocate for an early consideration of potential migrations in order to improve the robustness of safety analysis techniques. The field example chosen for demonstration is the design and use of a rotary press.


Information Sciences | 2011

Using adjustable autonomy and human-machine cooperation to make a human-machine system resilient - Application to a ground robotic system

Stéphane Zieba; Philippe Polet; Frédéric Vanderhaegen

This study concerns autonomous ground vehicles performing observation or surveillance missions. These missions are accomplished under the supervision of human operators, who can also remotely control the unmanned vehicle. This kind of human-machine system is likely to face perturbations in a dynamic natural environment, and overloaded human operators cannot manage all of these perturbations themselves. The objective of this study is to provide such systems with ways to anticipate, react to, and recover from perturbations; in other words, this work aims at improving system resilience so that perturbations are better managed. This paper presents a model of human-robot cooperative control that helps to improve the resilience of the human-machine system by making the level of autonomy adjustable. A formalism of agent autonomy is proposed according to the semantic aspects of autonomy and the agents' activity levels. This formalism is then used to describe the activity levels of the global human-machine system. Hierarchical decision-making methods and planning algorithms are also proposed to implement these levels of activity. Finally, an experimental illustration on a micro-world is presented in order to evaluate the feasibility and applicability of the proposed model.
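The adjustable-autonomy idea in this abstract can be sketched as a simple level-selection rule: when operator workload is high, the vehicle's autonomy is raised; when the operator can supervise closely, it is lowered. The level names and workload thresholds below are illustrative assumptions, not the paper's formalism.

```python
# Hypothetical sketch of adjustable autonomy: pick an autonomy level
# for the ground vehicle from the current operator workload.
# Level names and thresholds are assumed for illustration.
from enum import Enum

class Autonomy(Enum):
    TELEOPERATED = 1   # operator controls the vehicle directly
    SUPERVISED = 2     # vehicle plans, operator approves actions
    AUTONOMOUS = 3     # vehicle plans and acts on its own

def adjust_autonomy(operator_load):
    """Select an autonomy level from operator workload in [0, 1]."""
    if operator_load < 0.3:
        return Autonomy.TELEOPERATED
    if operator_load < 0.7:
        return Autonomy.SUPERVISED
    return Autonomy.AUTONOMOUS
```

A real system would also weigh mission state and environment perturbations, but the core mechanism is this mapping from supervision capacity to activity level.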


Cognition, Technology & Work | 2002

Theory of Safety-Related Violations of System Barriers

Philippe Polet; Frédéric Vanderhaegen; Peter A. Wieringa

This paper focuses on a theory of the safety-related violations that occur in practice during normal operational conditions but are not taken into account during risk analysis. These safety-related violations are so-called barrier crossings. A barrier crossing is associated with an operational risk that combines several costs: the cost of crossing the barrier, the benefit (negative cost) obtained immediately after crossing the barrier, and a possible deficit (extreme cost) due to exposure to the hazardous conditions created once the barrier has been crossed. A utility function is discussed that describes this consequence-driven behaviour and uses an assessment of these cost functions. An industrial case study illustrates the application of the proposed theory.
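The consequence-driven utility the abstract describes can be sketched as an expected-value trade-off between the three costs it names. The linear form, variable names, and decision rule below are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a barrier-crossing utility: an immediate
# crossing cost, an immediate benefit, and a deficit weighted by the
# probability that the hazard materialises. Assumed form, for
# illustration only.

def crossing_utility(cost, benefit, deficit, p_hazard):
    """Expected utility of crossing a barrier.

    cost     -- immediate cost of performing the crossing
    benefit  -- gain obtained right after the crossing (a negative cost)
    deficit  -- extreme cost incurred if the hazard materialises
    p_hazard -- probability of exposure to the hazardous conditions
    """
    return benefit - cost - p_hazard * deficit

def should_cross(cost, benefit, deficit, p_hazard):
    """An operator crosses only when expected utility is positive."""
    return crossing_utility(cost, benefit, deficit, p_hazard) > 0
```

For example, a small crossing cost and a large but unlikely deficit can still favour crossing: `crossing_utility(1.0, 5.0, 100.0, 0.01)` gives 3.0, so the barrier would be crossed; raising the hazard probability to 0.1 flips the decision.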


Reliability Engineering & System Safety | 2011

A Benefit/Cost/Deficit (BCD) model for learning from human errors

Frédéric Vanderhaegen; Stéphane Zieba; Simon Enjalbert; Philippe Polet

This paper proposes an original model for interpreting human errors, mainly violations, in terms of benefits, costs and potential deficits. This BCD model is then used as an input framework to learn from human errors, and two systems based on this model are developed: a case-based reasoning system and an artificial neural network system. These systems are used to predict a specific human car-driving violation: not respecting the priority-to-the-right rule, which is a decision to remove a barrier. Both prediction systems learn from previous violation occurrences, using the BCD model and four criteria: safety, which identifies the deficit or the danger; and opportunity for action, driver comfort, and time spent, which identify the benefits or the costs. Applied to the prediction of car-driving violations, the learning systems achieve a correct prediction rate of over 80% after 10 iterations. These results are validated for non-respect of the priority-to-the-right rule.
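The case-based prediction the abstract describes can be sketched as retrieval over stored cases described by the four criteria it names. The nearest-neighbour rule, feature scaling, and example data below are assumptions standing in for the authors' case-based reasoning system.

```python
# Illustrative sketch (not the authors' implementation): predict the
# priority-to-the-right violation from past cases, each described by
# the four BCD criteria named in the abstract: safety, opportunity for
# action, driver comfort, and time spent (all scaled to [0, 1] here).
from math import dist

# Each stored case: (criteria vector, violated_priority_rule?)
# The numbers are made up for illustration.
cases = [
    ((0.9, 0.2, 0.3, 0.1), False),  # high perceived danger -> rule respected
    ((0.1, 0.8, 0.7, 0.9), True),   # low danger, high benefit -> violation
    ((0.3, 0.6, 0.5, 0.8), True),
    ((0.8, 0.3, 0.4, 0.2), False),
]

def predict_violation(criteria):
    """Predict a violation from the most similar stored case."""
    nearest = min(cases, key=lambda c: dist(c[0], criteria))
    return nearest[1]
```

A new situation with low perceived danger and high benefit lands near the violation cases, so `predict_violation((0.2, 0.7, 0.6, 0.85))` returns `True`; the paper's actual systems additionally learn iteratively from new occurrences.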


Engineering Applications of Artificial Intelligence | 2009

A reinforced iterative formalism to learn from human errors and uncertainty

Frédéric Vanderhaegen; Philippe Polet; Stéphane Zieba

This paper proposes a reinforced iterative formalism to learn from intentional human errors, called barrier removal, and from uncertainty on human-error parameters. Barrier removal consists in misusing a safety barrier that human operators are supposed to respect. The iterative learning formalism is based on a human action formalism that interprets barrier removal in terms of consequences, i.e. benefits, costs and potential dangers or deficits. Two functions are required: a similarity function to search for a known case close to the input case for which the human action has to be predicted, and a reinforcement function to strengthen links between similar known cases. This reinforced iterative formalism is applied to a railway simulation in which the prediction of barrier removal is based on subjective data.
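The two functions the abstract names can be sketched minimally: a similarity function to retrieve the known case closest to the input, and a reinforcement function that strengthens links between similar cases on each iteration. The distance metric and the update rule below are assumptions for illustration, not taken from the paper.

```python
# Hedged sketch of the two functions of the iterative formalism.
# Similarity metric and reinforcement update are assumed.
from math import exp, dist

def similarity(case_a, case_b):
    """Similarity in (0, 1]; 1 means identical feature vectors."""
    return exp(-dist(case_a, case_b))

class CaseBase:
    def __init__(self):
        self.cases = []    # stored feature vectors of known cases
        self.weights = {}  # link strength between case indices

    def add(self, case):
        self.cases.append(case)

    def retrieve(self, query):
        """Return the stored case most similar to the query."""
        return max(self.cases, key=lambda c: similarity(c, query))

    def reinforce(self, i, j, rate=0.1):
        """Strengthen the link between cases i and j toward 1."""
        w = self.weights.get((i, j), 0.0)
        self.weights[(i, j)] = w + rate * (1.0 - w)
```

Each prediction cycle would retrieve the closest case, predict its associated action, and reinforce the links involved, which is the "reinforced iterative" loop in miniature.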


Reliability Engineering & System Safety | 2001

A non-probabilistic prospective and retrospective human reliability analysis method — application to railway system

Frédéric Vanderhaegen

The paper describes a method to analyze human reliability. It defines human reliability as a degradation function related to deviations of both the human behavioral state and the system state resulting from this behavior. The method is called ACIH, a French acronym for Analysis of Consequences of Human Unreliability. It is a non-probabilistic approach that aims at identifying both tolerable and intolerable sets of human behavioral degradations that may affect system safety. The corresponding degradation scenarios are characterized by a behavioral model of unreliability including three main factors: acquisition-related factors, problem-solving-related factors, and action-related factors. Both prospective and retrospective analyses are taken into account to specify error prevention tools. They are applied to the railway system.


International Journal of Human-computer Interaction | 1994

Human‐machine cooperation: Toward an activity regulation assistance for different air traffic control levels

Frédéric Vanderhaegen; Igor Crévits; Serge Debernard; Patrick Millot

Our research addresses activity regulation assistance for air traffic control. It aims at integrating the two levels of the air traffic control organization: a tactical level managed by a so-called radar controller and a strategic level managed by a so-called organic controller. Concerning the tactical level, our research is directed toward a "horizontal cooperation" that consists in a dynamic allocation of control tasks between a human air traffic controller and an assistance tool. Regarding the strategic level, it is oriented toward a scheduling module intended to improve the initial allocation policy.


Engineering Applications of Artificial Intelligence | 2013

How to learn from the resilience of Human-Machine Systems?

Kiswendsida Abel Ouedraogo; Simon Enjalbert; Frédéric Vanderhaegen

This paper proposes a functional architecture to learn from resilience. First, it defines the concept of resilience applied to a Human-Machine System (HMS) in terms of safety management under perturbations and proposes indicators to assess this resilience. Local and global indicators for evaluating human-machine resilience are defined for several criteria. A multi-criteria resilience approach is then developed in order to monitor the evolution of local and global resilience. The resilience indicators are the possible inputs of a learning system capable of producing several outputs, such as predictions of the possible evolutions of the system's resilience and possible alternatives for human operators to control resilience. Our system has a feedback-feedforward architecture and is capable of learning from the resilience indicators. A practical example is explained in detail to illustrate the feasibility of such prediction.
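The relation between local and global indicators can be sketched as a multi-criteria aggregation: each criterion yields a local resilience indicator, and a global indicator summarises them. The weighted mean below is an assumption for illustration; the paper's actual indicators and aggregation are not specified in this abstract.

```python
# Illustrative aggregation of per-criterion local resilience indicators
# (each in [0, 1]) into one global indicator. The weighted-mean form
# is assumed, not taken from the paper.

def global_resilience(local_indicators, weights=None):
    """Aggregate local indicators into a global resilience score."""
    if weights is None:
        weights = [1.0] * len(local_indicators)
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, local_indicators)) / total
```

Tracking this global score over time, alongside each local indicator, is one way to monitor the evolution of resilience that the multi-criteria approach calls for.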


Cognition, Technology & Work | 2012

Cooperation and learning to increase the autonomy of ADAS

Frédéric Vanderhaegen

This paper discusses cooperation and learning processes for increasing the autonomy of a human–machine system or of an artificial or human agent. Autonomy is defined as the capacity of a system or an agent to fend for itself; it is described in terms of competences and the limits of these competences. Cooperation and learning then aim at increasing the competences or managing the system's limits. The management of autonomy is detailed through different structures of cooperation, which concern the sharing of control between systems or between agents in order to recover from their limits. Different classes of learning processes are proposed: mimicry-based approaches, dysfunction-based approaches, and wait-and-see-based approaches. Advanced Driver Assistance Systems (ADAS) are usually designed with cooperation characteristics. Two case studies on the use of cooperative ADAS are then proposed. They are hypothetical scenarios, discussed to introduce possible future ADAS perspectives implementing competences such as learning or cooperative learning.


Cognition, Technology & Work | 2006

Principles of cooperation and competition: application to car driver behavior analysis

Frédéric Vanderhaegen; Sébastien Chalmé; Françoise Anceaux; Patrick Millot

This paper presents and discusses definitions, typologies and models of cooperation and competition between human operators, and applies them to analyze the cooperative and competitive activities of car drivers. It pays special attention to the so-called Benefit-Cost-Deficit model for analyzing cooperation or competition between human operators in terms of both positive and negative consequences. The application of this model to assess car drivers' activities focuses on three interacting organizational levels: the coordination between drivers governed by the Highway Code, the road infrastructure on which these drivers travel, and the traffic flow.

Collaboration


Frédéric Vanderhaegen's main collaborators.

Top Co-Authors

Philippe Polet, Centre national de la recherche scientifique

Patrick Millot, Centre national de la recherche scientifique

Christophe Kolski, Centre national de la recherche scientifique

Denis Berdjag, Centre national de la recherche scientifique