Network


Latest external collaborations at the country level.

Hotspot


Research topics where John A. Forester is active.

Publication


Featured research published by John A. Forester.


Reliability Engineering & System Safety | 2004

Expert elicitation approach for performing ATHEANA quantification

John A. Forester; Dennis C. Bley; Susan E. Cooper; Erasmia Lois; Nathan Siu; Alan M. Kolaczkowski; John Wreathall

An expert elicitation approach has been developed to estimate probabilities for unsafe human actions (UAs) based on error-forcing contexts (EFCs) identified through the ATHEANA (A Technique for Human Event Analysis) search process. The expert elicitation approach integrates the knowledge of informed analysts to quantify UAs and treat uncertainty (‘quantification-including-uncertainty’). The analysis focuses on (a) the probabilistic risk assessment (PRA) sequence EFCs for which the UAs are being assessed, (b) the knowledge and experience of analysts (who should include trainers, operations staff, and PRA/human reliability analysis experts), and (c) facilitated translation of information into probabilities useful for PRA purposes. Rather than simply asking the analysts their opinion about failure probabilities, the approach emphasizes asking the analysts what experience and information they have that is relevant to the probability of failure. The facilitator then leads the group in combining the different kinds of information into a consensus probability distribution. This paper describes the expert elicitation process, presents its technical basis, and discusses the controls that are exercised to use it appropriately. The paper also points out the strengths and weaknesses of the approach and how it can be improved. Specifically, it describes how generalized contextually anchored probabilities (GCAPs) can be developed to serve as reference points for estimates of the likelihood of UAs and their distributions.
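
The consensus step described above lends itself to a small numerical illustration. The paper does not prescribe a fixed aggregation formula (consensus is reached through facilitated discussion), so the following Python sketch should be read only as one plausible encoding: each hypothetical analyst's judgment about a UA probability is expressed as a lognormal distribution via a median and an error factor, a common PRA convention, and the judgments are pooled in log space.

```python
import numpy as np

# Hypothetical illustration only: each analyst encodes a judgment about an
# unsafe-action (UA) probability as a lognormal distribution, specified by
# a median and an error factor (EF = 95th percentile / median), a common
# PRA convention. ATHEANA reaches consensus through facilitated discussion,
# not through a fixed formula like the one below.

Z95 = 1.645  # standard-normal 95th percentile

# (median, error factor) elicited from three hypothetical analysts
judgments = [(3e-3, 10.0), (1e-2, 5.0), (5e-3, 3.0)]

mus = [np.log(med) for med, ef in judgments]          # lognormal mu
sigmas = [np.log(ef) / Z95 for med, ef in judgments]  # lognormal sigma

# Pool in log space: average the log-medians, and combine the mean of the
# individual variances with the between-analyst variance, so disagreement
# among analysts widens the consensus distribution (a crude consensus model).
mu_c = np.mean(mus)
sigma_c = np.sqrt(np.mean(np.square(sigmas)) + np.var(mus))

rng = np.random.default_rng(0)
samples = rng.lognormal(mu_c, sigma_c, size=100_000)
print(f"consensus median UA probability: {np.exp(mu_c):.2e}")
print(f"5th-95th percentile: {np.quantile(samples, 0.05):.2e} "
      f"to {np.quantile(samples, 0.95):.2e}")
```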


Reliability Engineering & System Safety | 2010

Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review

Ronald Laurids Boring; Stacey Langfitt Hendrickson; John A. Forester; Tuan Q. Tran; Erasmia Lois

There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRAs). Due to the significant differences among the methods, including their scope, approach, and underlying models, there is a need for an empirical comparison investigating their validity and reliability. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted of past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.


2007 IEEE 8th Human Factors and Power Plants and HPRCT 13th Annual Meeting | 2007

An empirical study of HRA methods - overall design and issues

Vinh N. Dang; Andreas Bye; Erasmia Lois; John A. Forester; Alan M. Kolaczkowski; Per Øivind Braarud

A diversity of Human Reliability Analysis (HRA) methods are currently available to treat human performance in Probabilistic Risk Assessments (PRAs). This range of methods reflects traditional concerns with human-machine interfaces and with the basic feasibility of actions in PRA scenarios as well as the more recent attention paid to Errors of Commission and decision-making performance. Given the differences in the scope of the methods and their underlying models, there is a substantial interest in assessing HRA methods and ultimately in validating the approaches and models underlying these methods. A significant step in this direction is an international evaluation study of HRA methods, based on comparing the observed performance in simulator experiments with the outcomes predicted in HRA analyses. Its aim is to develop an empirically-based understanding of the performance, strengths, and weaknesses of the methods. This paper presents the overall methodology for this initial assessment study.


2007 IEEE 8th Human Factors and Power Plants and HPRCT 13th Annual Meeting | 2007

Human reliability analysis (HRA) in the context of HRA testing with empirical data

John A. Forester; Alan M. Kolaczkowski; Vinh N. Dang; Erasmia Lois

Given the significant differences in the scope, approach, and underlying models of a relatively wide range of existing HRA methods, there has been growing interest among HRA method developers and users in empirically testing the various methods. To this end, there is an ongoing international effort to begin this process by testing the application of HRA methods to nuclear power plant operating crew performance in the HAMMLAB simulators at the Halden Reactor Project in Norway. Initial efforts in designing and implementing these studies have identified a number of issues associated with structuring the studies in order to allow an adequate and appropriate test of the different methods. This paper focuses on issues associated with applying HRA methods in the context of an empirical study, particularly when a research simulator is used for data collection. Example issues include: determining the scope of the analysis when the methods themselves differ in the scope of the HRA processes they address; accounting for differences between the methods in how they use simulator exercises to support the analysis; addressing the impact of experimental controls on the application of the methods; and, given the low probability of the human failure events typically modelled in nuclear power plant probabilistic risk/safety assessments (PRAs/PSAs), the need for analysts to present their results in a somewhat different format than they usually do. These types of issues related to applying HRA methods in the context of empirical studies are discussed and resolutions are proposed.
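
The last issue above, the low probability of the modelled human failure events, is worth making concrete. As a rough sketch (not part of the paper), the following Python snippet applies a standard Jeffreys-prior Bayesian update to hypothetical crew failure counts; it shows that even a failure-free set of simulator trials leaves a wide uncertainty band around the underlying human error probability (HEP), which is why raw simulator frequencies cannot simply be read off and compared against predicted HEPs in the 1e-3 range.

```python
from scipy.stats import beta

# Hypothetical illustration of the low-probability issue the paper raises:
# with only a handful of simulator crews, observed failure counts say little
# about an HEP in the 1e-3 range. A Jeffreys prior Beta(0.5, 0.5) is updated
# with k failures in n crew trials (a standard Bayesian treatment, not one
# prescribed by the study).

def hep_credible_interval(k, n, level=0.90):
    post = beta(k + 0.5, n - k + 0.5)  # Jeffreys posterior for the HEP
    lo, hi = post.interval(level)      # equal-tailed credible interval
    return post.mean(), lo, hi

for k, n in [(0, 14), (1, 14), (0, 100)]:
    mean, lo, hi = hep_credible_interval(k, n)
    print(f"{k} failures / {n} trials: posterior mean {mean:.3g}, "
          f"90% interval [{lo:.3g}, {hi:.3g}]")
```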


Archive | 2009

An overview of the evolution of human reliability analysis in the context of probabilistic risk assessment

Dennis C. Bley; Erasmia Lois; Alan M. Kolaczkowski; John A. Forester; John Wreathall; Susan E. Cooper

Since the Reactor Safety Study in the early 1970s, human reliability analysis (HRA) has been evolving towards a better ability to account for the factors and conditions that can lead humans to take unsafe actions and thereby provide better estimates of the likelihood of human error for probabilistic risk assessments (PRAs). The purpose of this paper is to provide an overview of recent reviews of operational events and advances in the behavioral sciences that have impacted the evolution of HRA methods and contributed to improvements. The paper discusses the importance of human errors in complex human-technical systems, examines why humans contribute to accidents and unsafe conditions, and discusses how lessons learned over the years have changed the perspective and approach for modeling human behavior in PRAs of complicated domains such as nuclear power plants. It is argued that it has become increasingly more important to understand and model the more cognitive aspects of human performance and to address the broader range of factors that have been shown to influence human performance in complex domains. The paper concludes by addressing the current ability of HRA to adequately predict human failure events and their likelihood.


Archive | 2010

International HRA Empirical Study, Overall Methodology and HAMMLAB Results

Salvatore Massaiu; Andreas Bye; Per Øivind Braarud; Helena Broberg; Michael Hildebrandt; Vinh N. Dang; Erasmia Lois; John A. Forester

The International HRA Empirical Study addresses the need for assessing HRA (Human Reliability Analysis) methods in light of human performance data. The study is based on a comparison of observed performance in HAMMLAB simulator trials with the outcomes predicted in HRA analyses. The project goal is to develop an empirically-based understanding of the performance, strengths, and weaknesses of a number of different HRA methods. This chapter presents the overall methodology for the initial assessment study (the pilot study), provides an overview of the HAMMLAB results and presents insights from the initial assessment.


Archive | 2013

Draft Function Allocation Framework and Preliminary Technical Basis for Advanced SMR Concepts of Operations

Jacques Hugo; John A. Forester; David I. Gertman; Jeffrey C. Joe; Heather Medema; Julius J. Persensky; April M. Whaley

This report presents preliminary research results from the investigation into the development of new models and guidance for Concepts of Operations in advanced small modular reactor (AdvSMR) designs. AdvSMRs are nuclear power plants (NPPs), but unlike conventional large NPPs that are constructed on site, AdvSMR systems and components will be fabricated in a factory and then assembled on site. AdvSMRs will also use advanced digital instrumentation and control systems and make greater use of automation. Some AdvSMR designs are also proposed to operate in a multi-unit configuration with a single central control room as a way to be more cost-competitive with existing NPPs. These differences from conventional NPPs not only pose technical and operational challenges, but will undoubtedly also have regulatory compliance implications, especially with respect to staffing requirements and safety standards.


Archive | 2004

Human Reliability Analysis (HRA) Good Practices

Alan M. Kolaczkowski; John A. Forester; Erasmia Lois; Gareth Parry; Dennis C. Bley

With the expectation that PRA will continue to be used in the commercial nuclear industry in assessing current operating risks, in estimating changes in risk as a result of temporary and permanent plant changes, and as an adjunct to the design process of newer generation plants, it is important that practitioners perform human reliability analysis (HRA) in accordance with good practices and that reviewers recognize the implementation of good practices (or the failure to do so) in these analyses. Documents such as the American Society of Mechanical Engineers (ASME) Standard for Probabilistic Risk Assessment for Nuclear Power Plant Applications provide requirements for performing a good HRA. However, the requirements do not provide implementation guidance, i.e., they are not as “tutorial” as is necessary to support HRA analysts. This paper provides examples of “good practices” for an HRA as they should be implemented within a broader probabilistic risk assessment (PRA). It also discusses the consideration of “errors of commission” (EOCs), which is not explicitly addressed in the ASME Standard. The good-practice guidance focuses on identifying the attributes of a good HRA regardless of the specific methods or tools that are used and does not endorse the use of any particular method.


10th International Probabilistic Safety Assessment & Management Conference (PSAM10), Seattle, WA, June 7-11, 2010 | 2010

Lessons Learned on Benchmarking from the International Human Reliability Analysis Empirical Study

Ronald L. Boring; John A. Forester; Andreas Bye; Vinh N. Dang; Erasmia Lois


Archive | 2012

HRA Method Analysis Criteria

Stacey Langfitt Hendrickson; John A. Forester; Vinh N. Dang; Ali Mosleh; Erasmia Lois; Jing Xing

Collaboration


Top co-authors of John A. Forester and their affiliations:

Erasmia Lois (Nuclear Regulatory Commission)
Vinh N. Dang (Paul Scherrer Institute)
Alan M. Kolaczkowski (Science Applications International Corporation)
Susan E. Cooper (Science Applications International Corporation)
Andreas Bye (Organisation for Economic Co-operation and Development)
Gareth Parry (Nuclear Regulatory Commission)
Ali Mosleh (University of California)
April M. Whaley (Idaho National Laboratory)