George Apostolakis
University of California, Los Angeles
Publications
Featured research published by George Apostolakis.
Reliability Engineering & System Safety | 1988
Ali Mosleh; Vicki M. Bier; George Apostolakis
Abstract This paper critically reviews and evaluates the elicitation and use of expert opinion in probabilistic risk assessment (PRA) of nuclear power plants in light of the available empirical and theoretical results on expert opinion use. PRA practice is represented by four case studies selected to represent a variety of aspects of the problem: (1) assessments of component failure rates and maintenance data; (2) an assessment of seismic hazard rates; (3) assessments of containment phenomenology; and (4) accident precursor studies. The review has yielded mixed results. On the negative side, there appears to be little reliance on normative expertise in structuring the process of expert opinion elicitation and use; most applications instead rely primarily on the common sense of the experts involved in the analysis, which is not always an adequate guide. On the positive side, however, there is evidence that expert opinions can, in fact, be used well in practical settings.
Reliability Engineering & System Safety | 1994
Keyvan Davoudian; Jya-Syin Wu; George Apostolakis
Abstract It is proposed that the impact of organizational factors on nuclear-power-plant safety can be determined by accounting for the dependence that these factors introduce among probabilistic safety assessment parameters. The work process analysis model (WPAM) is presented as an analytical tool, in which these dependencies are investigated via work processes. In this paper, WPAM is applied to pre-accident conditions within the framework of the maintenance work process.
Reliability Engineering & System Safety | 1994
Keyvan Davoudian; Jya-Syin Wu; George Apostolakis
Abstract The work process analysis model-I (WPAM-I) along with its products developed in a previous paper (Davoudian, K., Wu, J.-S. & Apostolakis, G., Reliability Engineering and System Safety, 45 (1994) 85–105) are used as inputs to WPAM-II. The goal is to provide the link between organizational factors (or dimensions), work processes, and probabilistic safety assessment parameters in order to facilitate the quantification of the impact of organizational factors on plant safety. This is achieved by calculating new (organizationally dependent) probabilities for minimal cut sets so that each new probability contains in it, either explicitly or implicitly, the effect of the pertinent organizational factors. A sample case is presented demonstrating the application of WPAM to a specific minimal cut set. Finally, sensitivity analyses are performed in order to explore the effectiveness of organizational improvements as a risk management strategy.
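As an illustration of the kind of dependence WPAM quantifies, the sketch below (with an invented organizational factor and invented probabilities, not WPAM data) shows how conditioning the basic events of a minimal cut set on a shared organizational state raises the cut-set probability above the product of the unconditional means.

```python
# Illustrative sketch only: how a shared organizational factor can make a
# minimal cut set more likely than the product of unconditional probabilities.
# The factor states, probabilities, and numbers are hypothetical, not WPAM data.

# Hypothetical quality of a maintenance work process: good / degraded.
org_states = {"good": 0.8, "degraded": 0.2}

# Basic-event probabilities conditioned on the organizational state.
# Both events in the cut set worsen together when the process is degraded.
p_event_a = {"good": 1.0e-3, "degraded": 1.0e-2}
p_event_b = {"good": 2.0e-3, "degraded": 2.0e-2}

# Organizationally dependent cut-set probability: average the product of the
# conditional probabilities over the organizational state.
p_cutset_dependent = sum(
    w * p_event_a[s] * p_event_b[s] for s, w in org_states.items()
)

# Naive value obtained by multiplying the unconditional (marginal) means.
p_a = sum(w * p_event_a[s] for s, w in org_states.items())
p_b = sum(w * p_event_b[s] for s, w in org_states.items())
p_cutset_independent = p_a * p_b

print(f"dependent  : {p_cutset_dependent:.3e}")
print(f"independent: {p_cutset_independent:.3e}")
# The dependent value exceeds the independent one because the degraded state
# raises both basic-event probabilities at the same time.
```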
Reliability Engineering | 1987
George Apostolakis; Parviz Moieni
Two kinds of dependence are distinguished: stochastic and state-of-knowledge dependence. Models of stochastic dependence include common cause failures and deal with component failures. They are conditioned on a set of parameters whose ranges of values and their correlations are assessed in the state-of-knowledge models. It is argued that the parametric stochastic models represent the class of failure causes that are not explicitly modeled. Three such stochastic models are examined: the Basic Parameter (BP) model, the Multiple Greek Letter (MGL) model, and the Multinomial Failure Rate (MFR) model. Two problems of the MGL model are discussed. The first has to do with the definition of the parameters. It is shown that β, γ, etc., of the MGL model are defined with reference to a specific component and are used improperly in the statistical calculations. The second problem stems from the fact that the MGL parameters are defined in terms of component failures rather than the events that cause their failures. This results in an artificial increase of the strength of the statistical evidence. The multivariate Dirichlet distribution is used as the state-of-knowledge distribution in the MFR model, since it can model the correlations between the parameters and is a conjugate distribution with respect to the multinomial distribution, thus facilitating Bayesian updating. The Dirichlet distribution can also be used with the BP model to represent the analyst's state of knowledge concerning the numerical values of the parameters of this model.
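As a concrete illustration of the conjugacy the abstract relies on, the following sketch (with invented prior parameters and event counts, not values from the paper) updates a Dirichlet state-of-knowledge distribution over failure-multiplicity probabilities with multinomial common-cause data, as in the MFR model.

```python
# Conjugate Dirichlet-multinomial updating, as used for the MFR model's
# state-of-knowledge distribution. Prior parameters and event counts below
# are illustrative, not values from the paper.

# Failure multiplicities for a group of 3 redundant components:
# exactly 1, exactly 2, or all 3 components fail in an event.
prior_alpha = {"1-of-3": 8.0, "2-of-3": 1.5, "3-of-3": 0.5}   # Dirichlet prior
observed    = {"1-of-3": 12,  "2-of-3": 2,   "3-of-3": 1}     # multinomial data

# Conjugacy: the posterior is Dirichlet with parameters alpha_i + n_i.
posterior_alpha = {k: prior_alpha[k] + observed[k] for k in prior_alpha}

total = sum(posterior_alpha.values())
posterior_mean = {k: a / total for k, a in posterior_alpha.items()}

for k, m in posterior_mean.items():
    print(f"P({k}) posterior mean = {m:.3f}")
# Because the Dirichlet is a joint distribution, the multiplicity
# probabilities remain correlated (they must sum to one), which is the
# state-of-knowledge dependence the abstract discusses.
```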
IEEE Transactions on Systems, Man, and Cybernetics | 1995
Christopher James Garrett; Sergio B. Guarro; George Apostolakis
The dynamic flowgraph methodology (DFM) is an integrated methodological approach to modeling and analyzing the behavior of software-driven embedded systems for the purpose of reliability/safety assessment and verification. The methodology has two fundamental goals: (1) to identify how certain postulated events may occur in a system; and (2) to identify an appropriate testing strategy based on an analysis of system functional behavior. To achieve these goals, the methodology employs a modeling framework in which system models are developed in terms of causal relationships between physical variables and temporal characteristics of the execution of software modules. These models are then analyzed to determine how a certain state (desirable or undesirable) can be reached. This is done by developing timed fault trees which take the form of logical combinations of static trees relating system parameters at different points in time. The prime implicants (multi-state analogue of minimal cut sets) of the fault trees can be used to identify and eliminate system faults resulting from unanticipated combinations of software logic errors, hardware failures and adverse environmental conditions, and to direct testing activity to more efficiently eliminate implementation errors by focusing on the neighborhood of potential failure modes arising from these combinations of system conditions.
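The following toy sketch shows what a prime implicant is in the multi-valued setting: a partial assignment of multi-state variables whose every completion produces the top event, and which is minimal in that sense. The variables, states, and top-event function are invented for illustration; DFM itself derives such functions from timed fault-tree models.

```python
# Toy illustration of prime implicants for a multi-valued top event.
# Variables, states and the top-event function are invented; DFM derives
# such functions from timed fault-tree models of software/hardware behavior.
from itertools import combinations, product

domains = {
    "valve":  ["open", "stuck", "closed"],
    "sensor": ["ok", "biased_high", "failed"],
}

def top_event(state):
    # Hypothetical undesired system state: flow is lost.
    return state["valve"] == "stuck" or (
        state["valve"] == "closed" and state["sensor"] != "ok"
    )

def completions(partial):
    # All full system states consistent with a partial assignment.
    free = [v for v in domains if v not in partial]
    for values in product(*(domains[v] for v in free)):
        full = dict(partial)
        full.update(zip(free, values))
        yield full

def is_implicant(partial):
    # Every completion of the partial assignment must cause the top event.
    return all(top_event(s) for s in completions(partial))

# Enumerate all partial assignments and keep the implicants.
implicants = []
for r in range(1, len(domains) + 1):
    for names in combinations(domains, r):
        for values in product(*(domains[v] for v in names)):
            partial = dict(zip(names, values))
            if is_implicant(partial):
                implicants.append(partial)

# Prime implicants: implicants with no proper sub-assignment that is itself
# an implicant (the multi-state analogue of minimal cut sets).
def has_smaller_implicant(p):
    return any(
        is_implicant({v: p[v] for v in sub})
        for r in range(1, len(p))
        for sub in combinations(p, r)
    )

prime = [p for p in implicants if not has_smaller_implicant(p)]
print(prime)  # e.g. {'valve': 'stuck'} plus the valve-closed/sensor-failed pairs
```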
Nuclear Engineering and Design | 1982
George Apostolakis
Abstract Several issues pertaining to data analysis in risk assessments are investigated. The derivation and interpretation of generic distributions are detailed. It is suggested that the generic distributions in the literature may be narrower than can be justified. Methods for analyzing plant-specific data are summarized. Finally, the uncertainties in human error rates are discussed.
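To illustrate one standard way plant-specific data are combined with generic information, the sketch below performs a conjugate gamma-Poisson update of a failure rate. The numbers are invented; the paper discusses such methods only in general terms.

```python
# Illustrative conjugate update of a generic failure-rate distribution with
# plant-specific evidence; numbers are invented, not from the paper.

# Generic prior for a failure rate lambda, expressed as a gamma distribution
# with shape a and rate b, so the prior mean is a / b per hour.
a_prior, b_prior = 0.5, 1.0e5          # mean 5e-6 per hour, deliberately broad

# Plant-specific evidence: r failures observed in T hours of operation.
r, T = 2, 3.0e5

# With Poisson evidence the gamma prior is conjugate: just add counts and time.
a_post, b_post = a_prior + r, b_prior + T

print(f"prior mean     = {a_prior / b_prior:.2e} per hour")
print(f"posterior mean = {a_post / b_post:.2e} per hour")
# A generic distribution that is too narrow (large a_prior and b_prior) would
# dominate the plant-specific evidence, which is the concern the abstract
# raises about overly narrow generic distributions.
```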
Nuclear Engineering and Design | 1976
George Apostolakis
Abstract This is a theoretical investigation of the importance of common mode failures (cmfs) on the reliability of redundant systems. These failures are assumed to be the result of fatal shocks, e.g. from earthquakes and explosions, which occur at a constant rate. This formulation makes it possible to predict analytically results obtained in the past, which showed that the probability of a cmf of the redundant channels of the protection system of a typical nuclear power plant was orders of magnitude larger than the probability of failure from chance failures alone. Furthermore, since most reliability analyses of redundant systems do not include potential cmfs in the probabilistic calculations, criteria are established which can be used to decide either that the cmf effects are indeed insignificant, or that such calculations are meaningless and more sophisticated methods of analysis are required because cmfs cannot be ignored.
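A short numerical sketch of the effect described, with invented rates and test interval: the fatal-shock contribution to the failure probability of a redundant system can exceed the independent-failure contribution by orders of magnitude even when the shock rate is much smaller than the channel failure rate.

```python
# Illustrative comparison of common-mode (fatal shock) versus independent
# failure contributions for a redundant system; all numbers are invented.
import math

lam = 5.0e-5        # per hour, independent failure rate of one channel
lam_shock = 1.0e-6  # per hour, rate of fatal shocks that fail all channels
n = 4               # number of redundant channels (all must fail)
t = 720.0           # hours between tests (no repair considered)

# Probability that all n channels fail independently within t.
p_independent = (1.0 - math.exp(-lam * t)) ** n

# Probability that at least one fatal shock occurs within t.
p_shock = 1.0 - math.exp(-lam_shock * t)

print(f"independent: {p_independent:.2e}")
print(f"common mode: {p_shock:.2e}")
# With these values the shock term is orders of magnitude larger than the
# independent term, which is why ignoring common mode failures can make the
# analysis of a highly redundant system meaningless.
```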
Reliability Engineering & System Safety | 1990
Jya-Syin Wu; George Apostolakis; D. Okrent
Abstract The theory of evidence and the theory of possibility have been suggested as possible alternatives to probability theory in safety analyses of engineering systems. This paper discusses three issues: (1) how formal probability theory has been relaxed to develop these nonprobabilistic models; (2) how degrees of belief are expressed in probabilistic and nonprobabilistic theories; and (3) the degree to which these nonprobabilistic models can be applied to system analysis in terms of their capability to combine knowledge.
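For contrast with probabilistic updating, the sketch below applies Dempster's rule of combination from the theory of evidence to two invented bodies of evidence on a two-element frame; the frame and mass assignments are illustrative only.

```python
# Dempster's rule of combination on a small frame of discernment; the frame
# and the two mass functions are invented for illustration.
from itertools import product

# Frame: a component is either "working" or "failed".
W, F = frozenset({"working"}), frozenset({"failed"})
WF = W | F  # the whole frame: mass here expresses ignorance

# Two bodies of evidence expressed as basic probability assignments.
m1 = {F: 0.6, WF: 0.4}              # e.g. an alarm of limited reliability
m2 = {F: 0.3, W: 0.2, WF: 0.5}      # e.g. an expert's judgment

combined, conflict = {}, 0.0
for (a, ma), (b, mb) in product(m1.items(), m2.items()):
    c = a & b
    if c:
        combined[c] = combined.get(c, 0.0) + ma * mb
    else:
        conflict += ma * mb          # mass on contradictory intersections

# Normalize by the non-conflicting mass (Dempster's rule).
combined = {s: v / (1.0 - conflict) for s, v in combined.items()}

for s, v in sorted(combined.items(), key=lambda kv: -kv[1]):
    print(set(s), round(v, 3))
# Unlike a probability distribution, mass can remain on the whole frame (WF),
# expressing ignorance without splitting belief between the alternatives.
```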
Reliability Engineering & System Safety | 1992
Sumeet Chhibber; George Apostolakis; D. Okrent
Abstract Expert judgments are frequently used in probabilistic safety assessments (PSA). However, the methods employed in practice are very crude and a large gap exists between the theoretical methods available and actual practice. A taxonomy of issues related to the use of expert judgments in PSA was considered necessary to identify the needs of practitioners and the applicability of existing models. In this paper, this taxonomy is systematically reviewed with examples from case studies. Issues surrounding the expert judgment process can be classified into two categories: (a) elicitation, and (b) the use of expert judgments. Various elements of these categories, such as model and parameter uncertainty, decomposition, the use of multiple experts, the selection of experts, expert training, elicitation, the effect of information provided to experts, expert calibration, availability of evidence, opinion aggregation and dependence, are then discussed. The issues of expert bias, calibration and dependence are of special concern. Sources of expert bias and dependence are discussed with some thoughts on overcoming them, using examples from selected case studies.
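The sketch below illustrates two standard opinion-aggregation rules that the expert-judgment literature discusses (linear and geometric pooling of point estimates); the estimates and weights are invented, and the paper reviews such issues rather than prescribing a particular rule.

```python
# Two common ways of aggregating expert point estimates of a failure rate;
# the estimates and weights are invented, not taken from the case studies.
import math

# Each expert's estimate of a failure rate (per hour) and a weight that a
# normative analysis might assign (e.g. based on calibration evidence).
estimates = [1.0e-5, 4.0e-5, 2.0e-4]
weights   = [0.5, 0.3, 0.2]

# Linear opinion pool: weighted arithmetic mean of the estimates.
linear_pool = sum(w * x for w, x in zip(weights, estimates))

# Geometric (logarithmic) pool: weighted geometric mean, often preferred for
# rates that span orders of magnitude.
geometric_pool = math.exp(sum(w * math.log(x) for w, x in zip(weights, estimates)))

print(f"linear pool   : {linear_pool:.2e}")
print(f"geometric pool: {geometric_pool:.2e}")
# Dependence among experts (shared information, similar training) would make
# the pooled evidence weaker than these formulas suggest, which is one of the
# issues the taxonomy highlights.
```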
Reliability Engineering & System Safety | 1988
George Apostolakis; Vicki M. Bier; Ali Mosleh
Abstract This paper critically reviews two groups of models for assessing human error rates under accident conditions. The first group, which includes the US Nuclear Regulatory Commission (NRC) handbook model and the human cognitive reliability (HCR) model, considers as fundamental the time that is available to the operators to act. The second group, which is represented by the success likelihood index methodology-multiattribute utility decomposition (SLIM-MAUD) model, relies on ratings of the human actions with respect to certain qualitative factors and the subsequent derivation of error rates. These models are evaluated with respect to two criteria: the treatment of uncertainties and the internal coherence of the models. In other words, this evaluation focuses primarily on normative aspects of these models. The principal findings are as follows: (1) Both of the time-related models provide human error rates as a function of the available time for action and the prevailing conditions. However, the HCR model ignores the important issue of state-of-knowledge uncertainties, dealing exclusively with stochastic uncertainty, whereas the model presented in the NRC handbook handles both types of uncertainty. (2) SLIM-MAUD provides a highly structured approach for the derivation of human error rates under given conditions. However, the treatment of the weights and ratings in this model is internally inconsistent.
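A hedged sketch of the SLIM-style arithmetic the abstract refers to: a success likelihood index formed as a weighted sum of ratings is converted to a human error probability through a log-linear calibration anchored by two tasks of assumed known error probability. All weights, ratings, and anchor values below are invented, not taken from the models reviewed.

```python
# Illustrative SLIM-style calculation; weights, ratings, and anchor tasks
# are invented, not values from the models reviewed in the paper.
import math

# Performance shaping factor weights (summing to 1) and ratings
# (0 = worst, 1 = best) for the task being assessed.
weights = {"time_pressure": 0.40, "training": 0.35, "interface": 0.25}
ratings = {"time_pressure": 0.30, "training": 0.70, "interface": 0.50}

# Success likelihood index: weighted sum of the ratings.
sli = sum(weights[f] * ratings[f] for f in weights)

# Log-linear calibration, log10(HEP) = a * SLI + b, anchored by two tasks
# with assumed known error probabilities and SLI values.
sli_anchor = (0.1, 0.9)
hep_anchor = (1.0e-1, 1.0e-4)
a = (math.log10(hep_anchor[1]) - math.log10(hep_anchor[0])) / (sli_anchor[1] - sli_anchor[0])
b = math.log10(hep_anchor[0]) - a * sli_anchor[0]

hep = 10.0 ** (a * sli + b)
print(f"SLI = {sli:.2f}, HEP = {hep:.1e}")
# The abstract's criticism concerns how such weights and ratings are treated;
# this sketch only shows the arithmetic, not a defensible elicitation.
```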