Publication


Featured research published by Sidney Dekker.


Australian Psychologist | 1995

The effects of job insecurity on psychological health and withdrawal: A longitudinal study

Sidney Dekker; Wilmar B. Schaufeli

Abstract This paper reports on a repeated measures study of job insecurity conducted during drastic organisational change in one of Australia's large public transport organisations. In a redundant group (n = 32) and a control group (n = 63), effects of job insecurity and the availability of coping resources on psychological health and withdrawal were examined longitudinally by means of self-report questionnaires. Results indicate that job insecurity is associated with a deterioration of psychological health (i.e. leading to psychological distress and burnout), as well as job and organisational withdrawal. Contrary to expectations, however, neither support from colleagues, management, nor unions seemed to protect job incumbents from the negative effects of job insecurity. Apparently, these three sources of potential support do not have a stress-buffering effect. It was concluded that in order to combat the adverse effects of job insecurity on psychological health and morale, the job stressor itself has t...


Journal of Safety Research | 2002

RECONSTRUCTING HUMAN CONTRIBUTIONS TO ACCIDENTS: THE NEW VIEW ON ERROR AND PERFORMANCE

Sidney Dekker

PROBLEM How can human contributions to accidents be reconstructed? Investigators can easily take the position of a retrospective outsider, looking back on a sequence of events that seems to lead to an inevitable outcome, and pointing out where people went wrong. This does not explain much, however, and may not help prevent recurrence. METHOD AND RESULTS This paper examines how investigators can reconstruct the role that people contribute to accidents in light of what has recently become known as the new view of human error. The commitment of the new view is to move controversial human assessments and actions back into the flow of events of which they were part and which helped bring them forth, to see why assessments and actions made sense to people at the time. The second half of the paper addresses one way in which investigators can begin to reconstruct people's unfolding mindsets. IMPACT ON INDUSTRY In an era where a large portion of accidents are attributed to human error, it is critical to understand why people did what they did, rather than judging them for not doing what we now know they should have done. This paper helps investigators avoid the traps of hindsight by presenting a method with which investigators can begin to see how people's actions and assessments actually made sense at the time.


Cognition, Technology & Work | 2002

MABA-MABA or Abracadabra? Progress on human-automation co-ordination

Sidney Dekker; David D. Woods

Abstract: In this paper we argue that substitution-based function allocation methods (such as MABA-MABA, or Men-Are-Better-At/Machines-Are-Better-At lists) cannot provide progress on human–automation co-ordination. Quantitative ‘who does what’ allocation does not work because the real effects of automation are qualitative: it transforms human practice and forces people to adapt their skills and routines. Rather than re-inventing or refining substitution-based methods, we propose that the more pressing question on human–automation co-ordination is ‘How do we make them get along together?’


Cognition, Technology & Work | 2004

Human factors and folk models

Sidney Dekker; Erik Hollnagel

This paper presents a discussion of the susceptibility of human factors to the use of folk models. The case of automation-induced complacency is used as a guiding example to illustrate how folk models (1) substitute one label for another rather than decomposing a large construct into more measurable specifics; (2) are immune to falsification and so resist the most important scientific quality check; and (3) easily get overgeneralised to situations they were never meant to speak about. We then discuss the link between models and measurements, where the model constrains what can be measured by describing what is essential performance, and where the model’s parameters become the basis for specifying the measurements. We propose that one way forward for human factors is to de-emphasize the focus on inferred and uncertain states of the mind, and shift to characteristics of human performance instead.


Journal of Navigation | 2002

ON YOUR WATCH: AUTOMATION ON THE BRIDGE.

Margareta Lützhöft; Sidney Dekker

Several recent maritime accidents suggest that modern technology sometimes can make it difficult for mariners to navigate safely. A review of the literature also indicates that the technological remedies designed to prevent maritime accidents at times can be ineffective or counterproductive. To understand why, problem-oriented ethnography was used to collect and analyse data on how mariners understand their work and their tools. Over 4 years, 15 ships were visited; the ship types studied were small and large archipelago passenger ships and cargo ships. Mariners and others who work in the maritime industry were interviewed. What we found onboard were numerous examples of what we call integration work. Integration is about co-ordination, co-operation and compromise. When humans and technology have to work together, the human (mostly) has to co-ordinate resources, co-operate with devices and compromise between means and ends. What mariners have to integrate to get work done includes representations of data and information; rules, regulations and practice; human and machine work; and learning and practice. Mariners largely have to perform integration work themselves because machines cannot communicate in ways mariners see as useful. What developers and manufacturers choose to integrate into screens or systems is not always what the mariners would choose. There are other kinds of 'mistakes' mariners have to adapt to. Basically, they arise from conflicts between global rationality (rules, regulations and legislation) and local rationality (what gets defined as good seamanship at a particular time and place). When technology is used to replace human work, this is not necessarily a straightforward or successful process. What it often means is that mariners have to work, sometimes very hard, to 'construct' a co-operative human–machine system. Even when technology works 'as intended', work of this kind is still required.
Even in most ostensibly integrated systems, human operators still must perform integration work. In short, technology alone cannot solve the problems that technology created. Further, trying to fix 'human error' by incremental 'improvements' in technology or procedure tends to be largely ineffective due to the adaptive compensation by users. A systems view is necessary to make changes to a workplace. Finally, this research illustrates the value problem-oriented ethnography can have when it comes to collecting information on what users 'mean' and 'really do' and what designers 'need' to make technology easier and safer to use.


Cognition, Technology & Work | 1999

To Intervene or not to Intervene: The Dilemma of Management by Exception

Sidney Dekker; David D. Woods

Abstract: Future air traffic management architectures propose to give aircraft more flight path autonomy and turn the air traffic controller into a manager of exceptions. This article reports on one experiment in a series of studies that empirically explored the cognitive work underlying management by exception in air traffic control. Active practitioners (controllers, pilots, dispatchers) were prepared on the rules of the envisioned system and presented with a series of future incidents, each of which they were required to jointly resolve. Management by exception turns out to trap human controllers in a double bind, where intervening early seems appealing but is difficult to justify (airspace throughput) and carry out (controller workload problems). Late interventions are just as difficult, since controllers will have to take over in the middle of a potentially challenging or deteriorating situation. Computerised decision support that flags exceptions migrates the decision criterion into a device, creating a threshold crossing that is typically set either too early or too late. This article lays out the intertwined trade-offs and dilemmas for the exception manager, and makes recommendations for cooperative human–machine architectures in future air traffic management.


Applied Ergonomics | 2009

Predicting pilot error: testing a new methodology and a multi-methods and analysts approach.

Neville A. Stanton; Paul M. Salmon; Deneen Harris; Andre Marshall; Jason Demagalski; Mark S. Young; Thomas Waldmann; Sidney Dekker

The Human Error Template (HET) is a recently developed methodology for predicting design-induced pilot error. This article describes a validation study comparing the performance of HET against three contemporary Human Error Identification (HEI) approaches when used to predict pilot errors for an approach and landing task. It also compares individual analyst predictions against an approach intended to enhance error prediction sensitivity: the multiple analysts and methods approach, whereby predictions from multiple analysts using a range of HEI techniques are pooled. The findings indicate that, of the four methodologies used in isolation, analysts using the HET methodology offered the most accurate error predictions, and that the multiple analysts and methods approach achieved greater error prediction sensitivity overall than the three other methods, but not greater than the HET approach. The results suggest that when predicting design-induced error, it is appropriate to use a toolkit of different HEI approaches and multiple analysts in order to heighten error prediction sensitivity.


The International Journal of Aviation Psychology | 2003

Illusions of Explanation: A Critical Essay on Error Classification

Sidney Dekker

Error classification methods are used throughout aviation to help understand and mitigate the causes of human error. However, many assumptions underlying error classification remain untested. For example, error is taken to mean different things, even within individual methods, and a close mapping is uncritically presumed between the quantity measured (errors) and the quality managed (safety). Further, error classifications can deepen investigative biases by merely relabeling error rather than explaining it. This article reviews such assumptions and proposes alternative solutions.


Theoretical Issues in Ergonomics Science | 2009

Fidelity and validity of simulator training

Nicklas Dahlström; Sidney Dekker; R. van Winsen; James M. Nyce

Through a case study, this article explores a number of theoretical issues related to the often taken-for-granted relationship between simulator fidelity and the quality and transferability of training in complex, dynamic, safety-critical settings. A counterexample based on mid-fidelity simulation is presented and the assumed coincidence of fidelity and validity is tested; that is, the study tests the equation of constructed photorealism (built to mimic reality) with effective development of the competence that operators require to manage situations that involve underspecified problems, time pressure constraints and complex group interaction. The article concludes that such competence development cannot rely only on highly context-specific (photorealistic) environments. Further, it argues that lower-fidelity simulation, when appropriately designed, can provide competence development with pedagogical and economic advantages.


Cognition, Technology & Work | 2009

Just culture: who gets to draw the line?

Sidney Dekker

A just culture is meant to balance learning from incidents with accountability for their consequences. All the current proposals for just cultures argue for a clear line between acceptable and unacceptable behavior. This alone, however, cannot promote just culture as it falsely assumes that culpability inheres in the act, bearing immutable features independent of context, language or interpretation. The critical question is not where to draw the line, but who gets to draw it. Culpability is socially constructed: the result of deploying one language to describe an incident, and of enacting particular post-conditions. Different accounts of the same incident are always possible (e.g. educational, organizational, political). They generate different repertoires of countermeasures and can be more constructive for safety. The issue is not to exonerate individual practitioners but rather what kind of accountability promotes justice and safety: backward-looking and retributive, or forward-looking and change-oriented.
