George E. Apostolakis
Massachusetts Institute of Technology
Publications
Featured research published by George E. Apostolakis.
1997
Robert J. Budnitz; George E. Apostolakis; David M. Boore
Probabilistic Seismic Hazard Analysis (PSHA) is a methodology that estimates the likelihood that various levels of earthquake-caused ground motion will be exceeded at a given location in a given future time period. Because of large uncertainties in all the geosciences data and in their modeling, multiple model interpretations are often possible. This leads to disagreement among experts, which in the past has resulted in disputes over the selection of ground motion for design at a given site. In order to review the present state of the art and improve the overall stability of the PSHA process, the U.S. Nuclear Regulatory Commission (NRC), the U.S. Department of Energy (DOE), and the Electric Power Research Institute (EPRI) co-sponsored a project to provide methodological guidance on how to perform a PSHA. The project was carried out by the seven-member Senior Seismic Hazard Analysis Committee (SSHAC), supported by a large number of other experts. The SSHAC reviewed past studies, including the landmark Lawrence Livermore National Laboratory and EPRI PSHA studies of the 1980s, and examined ways to improve on the present state of the art. The Committee's most important conclusion is that differences in PSHA results are due to procedural rather than technical differences. Thus, in addition to documenting the state-of-the-art elements of a PSHA in detail, this report provides a series of procedural recommendations. The role of experts is analyzed in detail. Two entities are formally defined, the Technical Integrator (TI) and the Technical Facilitator Integrator (TFI), to account for the various levels of complexity in the technical issues and the different levels of effort needed in a given study.
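As a rough illustration of the epistemic side of PSHA that the TI/TFI roles are meant to manage, the sketch below combines hazard curves from several experts into a single weighted curve. The curve shapes, parameter values, and weights are invented for illustration; they are not from the SSHAC report.

```python
import numpy as np

# Hypothetical peak ground acceleration (PGA) grid, in units of g.
pga = np.logspace(-2, 0, 50)

def hazard_curve(rate, slope):
    """Simple power-law hazard model: annual exceedance freq = rate * a**(-slope).
    The functional form is a placeholder for a full PSHA integration."""
    return rate * pga ** (-slope)

# Each "expert" supplies a (rate, slope) interpretation; all values invented.
experts = {
    "expert_A": (1e-3, 1.8),
    "expert_B": (2e-3, 2.0),
    "expert_C": (5e-4, 1.5),
}
weights = {"expert_A": 0.4, "expert_B": 0.4, "expert_C": 0.2}

# The weighted combination is meant to represent the informed community's
# epistemic uncertainty rather than a single "winning" interpretation.
mean_curve = sum(weights[e] * hazard_curve(*p) for e, p in experts.items())

# Annual frequency of exceeding 0.3 g under the combined model.
idx = np.argmin(np.abs(pga - 0.3))
print(f"Combined annual exceedance frequency at 0.3 g: {mean_curve[idx]:.2e}")
```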
Risk Analysis | 2005
George E. Apostolakis; Douglas M. Lemon
The extreme importance of critical infrastructures to modern society is widely recognized. These infrastructures are complex and interdependent. Protecting the critical infrastructures from terrorism presents an enormous challenge. Recognizing that society cannot afford the costs associated with absolute protection, it is necessary to identify and prioritize the vulnerabilities in these infrastructures. This article presents a methodology for the identification and prioritization of vulnerabilities in infrastructures. We model the infrastructures as interconnected digraphs and employ graph theory to identify the candidate vulnerable scenarios. These scenarios are screened for the susceptibility of their elements to a terrorist attack, and a prioritized list of vulnerabilities is produced. The prioritization methodology is based on multiattribute utility theory. The impact of losing infrastructure services is evaluated using a value tree that reflects the perceptions and values of the decisionmaker and the relevant stakeholders. These results, which are conditional on a specified threat, are provided to the decisionmaker for use in risk management. The methodology is illustrated through the presentation of a portion of the analysis conducted on the campus of the Massachusetts Institute of Technology.
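A minimal sketch of the digraph screening step is shown below, assuming the networkx library and an invented five-node infrastructure. The cut-set search and susceptibility scores stand in for the paper's scenario identification and multiattribute value tree; all names and numbers are hypothetical.

```python
import itertools
import networkx as nx  # graph library, used here only for illustration

# Toy interconnected-infrastructure digraph: edges point from suppliers
# to the users that depend on them. All names are hypothetical.
G = nx.DiGraph()
G.add_edges_from([
    ("substation", "pump_A"), ("substation", "pump_B"),
    ("pump_A", "treatment"), ("pump_B", "treatment"),
    ("treatment", "campus"),
])
source, target = "substation", "campus"

# Candidate vulnerable scenarios: minimal sets of interior nodes whose
# loss disconnects the target from its supply (a simple cut-set screen).
interior = [n for n in G if n not in (source, target)]
scenarios = []
for r in range(1, len(interior) + 1):
    for combo in itertools.combinations(interior, r):
        if any(set(s) <= set(combo) for s in scenarios):
            continue  # a smaller scenario already covers this set
        H = G.copy()
        H.remove_nodes_from(combo)
        if not nx.has_path(H, source, target):
            scenarios.append(combo)

# Screen by an invented susceptibility score and rank (a placeholder for
# the threat screening and value-tree prioritization in the abstract).
susceptibility = {"pump_A": 0.7, "pump_B": 0.6, "treatment": 0.9}
ranked = sorted(scenarios,
                key=lambda c: min(susceptibility[n] for n in c),
                reverse=True)
print(ranked)  # [('treatment',), ('pump_A', 'pump_B')]
```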
Reliability Engineering & System Safety | 2001
Emanuele Borgonovo; George E. Apostolakis
In this paper, we introduce a new importance measure, the differential importance measure (DIM), for probabilistic safety assessment (PSA). DIM addresses the analyst's and decision maker's need for information about the importance of proposed changes that affect component properties and multiple basic events. DIM is directly applicable to both the basic events and the parameters of the PSA model. Unlike the Fussell–Vesely (FV), risk achievement worth (RAW), Birnbaum, and criticality importance measures, DIM is additive, i.e., the DIM of a group of basic events or parameters is the sum of the individual DIMs. We discuss the difference between DIM and other local sensitivity measures that are based on normalized partial derivatives. An example is used to demonstrate the evaluation of DIM at both the basic event and the parameter level. To compare the results obtained with DIM at the parameter level, an extension of the definitions of FV and RAW is necessary. We discuss possible extensions and compare the results of the three measures for a more realistic example.
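A minimal sketch of DIM under the uniform-changes hypothesis (all parameters shifted by the same small amount, so DIM reduces to each partial derivative divided by the sum of partial derivatives) is below. The two-train risk model and parameter values are invented for illustration only.

```python
# Invented risk model: R = x1*x2 + x3 (two redundant trains plus a
# common-cause term). Values are placeholders, not from the paper.
x = {"x1": 1e-2, "x2": 5e-3, "x3": 1e-4}

# Partial derivatives of R, written out analytically.
grad = {
    "x1": x["x2"],  # dR/dx1
    "x2": x["x1"],  # dR/dx2
    "x3": 1.0,      # dR/dx3
}

# DIM under uniform changes: each parameter's share of the total
# first-order change in risk.
total = sum(grad.values())
dim = {k: g / total for k, g in grad.items()}
print(dim)

# Additivity: the DIM of a group is the sum of individual DIMs,
# unlike FV, RAW, Birnbaum, or criticality importance.
print("DIM({x1, x2}) =", dim["x1"] + dim["x2"])
```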
Reliability Engineering & System Safety | 1996
Enrico Zio; George E. Apostolakis
The assessment of the performance of high-level radioactive waste repositories is based on the use of models for predicting system behaviour. The complexity of the system, together with the large spatial and temporal scales imposed by the regulations, introduces large uncertainties into the analysis. The difficulty of validating the relevant models creates the need to assess their validity by means of expert judgment. This paper addresses the problem of model uncertainty from both a theoretical and a practical point of view and presents two mathematical approaches to the treatment of model uncertainty that can assist the experts in the formulation of their judgments. The formal elicitation of expert judgments is investigated within the Technical Facilitator/Integrator (TFI) framework that has been proposed by the Senior Seismic Hazard Analysis Committee. Within this framework, the mathematical formulations for the treatment of model uncertainty are regarded as tools for sensitivity analyses that give insights into the model characteristics and are helpful in structuring the expert opinion elicitation process itself. The first approach, referred to as the alternate-hypotheses formulation, amounts to constructing a suitable set of plausible hypotheses and evaluating their validity. The second approach, referred to as the adjustment-factor formulation, requires that a reference model be identified and that its predictions be directly modified through an adjustment factor that accounts for the uncertainty in the models. Furthermore, both approaches require a clear understanding of the distinction between aleatory and epistemic uncertainties. The implications that these two formulations have on, and the issues that they raise in, the elicitation of expert opinions are explored. A case study of model uncertainty regarding alternative models for the description of groundwater flow and contaminant transport in unsaturated, fractured tuff is presented.
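The two formulations can be contrasted in a few lines of simulation. In the sketch below, the two "models", their elicited probabilities, and the adjustment-factor distribution are all invented placeholders; the point is only the structural difference between mixing alternate hypotheses and multiplying a reference model by an uncertain factor.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Two invented transport "models" predicting some peak quantity.
def model_matrix_flow():
    return rng.lognormal(mean=0.0, sigma=0.3)

def model_fracture_flow():
    return rng.lognormal(mean=0.5, sigma=0.5)

# Alternate-hypotheses formulation: elicit the probability that each
# model is the correct description, then mix their predictions.
p_matrix = 0.6  # invented elicited probability
alt = np.array([model_matrix_flow() if rng.random() < p_matrix
                else model_fracture_flow() for _ in range(N)])

# Adjustment-factor formulation: pick one reference model and modify
# its prediction by an elicited uncertain multiplicative factor.
adj = np.array([model_matrix_flow() * rng.lognormal(0.2, 0.4)
                for _ in range(N)])

print(f"alternate-hypotheses mean: {alt.mean():.2f}")
print(f"adjustment-factor mean:    {adj.mean():.2f}")
```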
Reliability Engineering & System Safety | 2007
S. A. Patterson; George E. Apostolakis
This paper presents a possible approach to ranking geographic regions that can influence multiple infrastructures. Once ranked, decision makers can determine whether these regions are critical locations based on their susceptibility to terrorist acts. We identify these locations by calculating a value for a geographic region that represents the combined values to the decision makers of all the infrastructures crossing through that region. These values, as well as the size of the geographic region, are conditional on an assumed destructive threat of a given size. In our case study, the threat is assumed to be minor, e.g., a bomb that can affect objects within 7 m of it.
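A minimal sketch of the region-valuation idea follows, assuming the 7 m minor-threat radius from the abstract. The infrastructure segments, coordinates, and loss values are invented; the point is that co-located infrastructures make a region more valuable, and hence potentially more critical.

```python
import math

# Hypothetical infrastructure segments, each tagged with the value (to
# the decision makers) of losing its service; coordinates in meters.
segments = [
    {"name": "power_duct",  "xy": (10.0, 12.0), "value": 0.50},
    {"name": "water_main",  "xy": (11.0, 14.0), "value": 0.30},
    {"name": "fiber_trunk", "xy": (40.0, 5.0),  "value": 0.20},
]

THREAT_RADIUS = 7.0  # meters; the minor-threat assumption above

def region_value(center):
    """Combined value of all segments within the damage radius."""
    cx, cy = center
    return sum(s["value"] for s in segments
               if math.hypot(s["xy"][0] - cx, s["xy"][1] - cy) <= THREAT_RADIUS)

# Evaluate candidate region centers (here, the segment locations) and
# rank them: the duct and the main are co-located, so either center
# captures both values.
candidates = {s["name"]: region_value(s["xy"]) for s in segments}
for name, v in sorted(candidates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: combined value {v:.2f}")
```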
Nuclear Technology | 2005
Lorenzo P. Pagani; George E. Apostolakis; Pavel Hejzlar
Passive safety systems are commonly considered to be more reliable than active systems. The lack of mechanical moving parts or other active components drastically reduces the probabilities of hardware failure. For passive systems, it is necessary to introduce the concept of functional failure, i.e., the possibility that the loads will exceed the capacity, in a reliability physics framework. In this paper we analyze the passive cooling of a gas-cooled fast reactor, and we use an importance-sampling Monte Carlo technique to propagate the epistemic uncertainties and to calculate the probabilities of functional failures. The results show that functional failures are an important contributor to the overall failure probability of the system and, therefore, should be included in probabilistic risk assessments. A comparison with an alternative active design is also considered. The results show that the active system can have, for this particular application, better reliability than the passive one.
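The sketch below shows the load-versus-capacity framing with an importance-sampling estimator. The load, capacity, and proposal distributions are invented placeholders for the epistemic uncertainties, not values from the paper; the technique is to sample loads from a proposal shifted toward the failure region and reweight by the likelihood ratio.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N = 100_000

# Functional failure: load L exceeds capacity C. Invented distributions,
# e.g., a hot-spot temperature versus a temperature limit.
load = stats.norm(loc=600.0, scale=40.0)
capacity = stats.norm(loc=800.0, scale=30.0)

# Proposal shifted toward the failure region, so the rare event is
# sampled often; weights correct for the shift.
proposal = stats.norm(loc=750.0, scale=50.0)
l = proposal.rvs(N, random_state=rng)
c = capacity.rvs(N, random_state=rng)
w = load.pdf(l) / proposal.pdf(l)

p_fail = np.mean((l > c) * w)
print(f"Estimated functional-failure probability: {p_fail:.2e}")
```

With these invented parameters the exact answer is available analytically (L - C is normal with mean -200 and standard deviation 50), which makes the estimator easy to check; plain Monte Carlo would need far more samples to resolve a probability of this order.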
Reliability Engineering & System Safety | 2003
Emanuele Borgonovo; George E. Apostolakis; Stefano Tarantola; Andrea Saltelli
This paper discusses the application and results of global sensitivity analysis techniques for probabilistic safety assessment (PSA) models, and their comparison to importance measures. This comparison allows one to understand whether PSA elements that are important to the risk, as revealed by importance measures, are also important contributors to the model uncertainty, as revealed by global sensitivity analysis. We show that, due to epistemic dependence, uncertainty and global sensitivity analysis of PSA models must be performed at the parameter level. A difficulty arises, since standard codes produce the calculations at the basic event level. We discuss both the indirect comparison through importance measures computed for basic events, and the direct comparison performed using the differential importance measure and the Fussell–Vesely importance at the parameter level. Results are discussed for the large loss-of-coolant accident (LLOCA) sequence of the Advanced Test Reactor PSA.
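The epistemic-dependence point can be made concrete with a toy model in which one failure-rate parameter feeds two basic events, so the analysis must be done at the parameter level. The sketch below estimates first-order (Sobol-type) sensitivity indices with a pick-freeze estimator; the model and distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy PSA-style risk model at the PARAMETER level: one failure rate
# lambda drives two basic events (epistemic dependence), plus an
# independent demand-failure probability q. All numbers invented.
def risk(lam, q):
    p1 = 1 - np.exp(-lam * 24.0)  # basic event 1, 24 h mission
    p2 = 1 - np.exp(-lam * 8.0)   # basic event 2, shares lambda
    return p1 * p2 + q

def sample(n):
    return rng.lognormal(-7.0, 0.5, n), rng.lognormal(-9.0, 0.3, n)

# First-order indices by pick-freeze: correlate the output with a
# re-run where one input is frozen and the others are resampled.
N = 200_000
lam, q = sample(N)
lam2, q2 = sample(N)
y = risk(lam, q)
var = y.var()

S_lam = np.cov(y, risk(lam, q2))[0, 1] / var  # freeze lambda
S_q = np.cov(y, risk(lam2, q))[0, 1] / var    # freeze q
print(f"S_lambda = {S_lam:.2f}, S_q = {S_q:.2f}")
```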
Risk Analysis | 1998
George E. Apostolakis; Susan E. Pickett
The National Research Council has recommended the use of an analytic/deliberative decision-making process in environmental restoration decisions that involve multiple stakeholders. This work investigates the use of the results of risk assessment and multiattribute utility analysis (the "analysis") in guiding the deliberation. These results include the ranking of proposed remedial action alternatives according to each stakeholder's preferences, as well as the identification of the major reasons for these rankings. The stakeholder preferences are over a number of performance measures that include the traditional risk assessment metrics, e.g., individual worker risk, as well as programmatic, cultural, and cost-related impacts. Based on these results, a number of proposals are prepared for consideration by the stakeholders during the deliberation. These proposals are the starting point for the formulation of actual recommendations by the group. In our case study, these recommendations included new remedial action alternatives that were created by the stakeholders after an extensive discussion of the detailed analytical results.
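A minimal sketch of the per-stakeholder ranking step is below, using an additive multiattribute utility over three performance measures. The alternatives, single-attribute utilities, and stakeholder weights are all invented placeholders for the elicited values described in the abstract.

```python
# Invented remedial alternatives, scored on [0, 1] single-attribute
# utilities for three performance measures (higher is better).
alternatives = {
    "cap_in_place":  {"worker_risk": 0.9, "cost": 0.8, "cultural": 0.4},
    "full_removal":  {"worker_risk": 0.3, "cost": 0.2, "cultural": 0.9},
    "partial_excav": {"worker_risk": 0.6, "cost": 0.5, "cultural": 0.7},
}

# Invented stakeholder weight sets (each sums to 1).
stakeholders = {
    "regulator": {"worker_risk": 0.6, "cost": 0.1, "cultural": 0.3},
    "tribe":     {"worker_risk": 0.2, "cost": 0.1, "cultural": 0.7},
}

def utility(scores, weights):
    """Additive multiattribute utility."""
    return sum(weights[m] * scores[m] for m in weights)

for name, w in stakeholders.items():
    ranking = sorted(alternatives,
                     key=lambda a: utility(alternatives[a], w), reverse=True)
    print(name, "->", ranking)
```

Different weight sets typically yield different rankings, which is exactly the kind of disagreement the deliberation stage is meant to surface and work through.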
Reliability Engineering & System Safety | 2002
Christopher James Garrett; George E. Apostolakis
Digital instrumentation and control (I&C) systems can provide important benefits in many safety-critical applications, but they can also introduce potential new failure modes that can affect safety. Unlike electro-mechanical systems, whose failure modes are fairly well understood and which can often be built to fail in a particular way, software errors are very unpredictable. There is virtually no nontrivial software that will function as expected under all conditions. Consequently, there is a great deal of concern about whether there is a sufficient basis on which to resolve questions about safety. In this paper, an approach for validating the safety requirements of digital I&C systems is developed which uses the Dynamic Flowgraph Methodology to conduct automated hazard analyses. The prime implicants of these analyses can be used to identify unknown system hazards, prioritize the disposition of known system hazards, and guide lower-level design decisions to either eliminate or mitigate known hazards. In a case study involving a space-based reactor control system, the method succeeded in identifying an unknown failure mechanism.
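To make the notion of a prime implicant concrete, the sketch below brute-forces them for a tiny invented hazard model: a prime implicant is a minimal partial assignment of variable states that guarantees the top event. The variables and logic are hypothetical and far simpler than a real DFM model, which also handles multi-valued states and timing.

```python
from itertools import combinations, product

# Invented top event: an unsafe action occurs when the sensor reading
# is stale AND either the watchdog fails or the fault handler runs.
VARS = ["stale_sensor", "watchdog_fail", "fault_handler"]

def top_event(s):
    return s["stale_sensor"] and (s["watchdog_fail"] or s["fault_handler"])

def is_implicant(partial):
    """True if EVERY completion of the unassigned variables triggers
    the top event, i.e., the partial assignment is sufficient."""
    free = [v for v in VARS if v not in partial]
    return all(top_event({**partial, **dict(zip(free, bits))})
               for bits in product([False, True], repeat=len(free)))

prime_implicants = []
for r in range(1, len(VARS) + 1):  # smallest assignments first
    for names in combinations(VARS, r):
        for vals in product([False, True], repeat=r):
            cand = dict(zip(names, vals))
            if is_implicant(cand):
                # prime = no smaller implicant is contained in it
                if not any(p.items() <= cand.items()
                           for p in prime_implicants):
                    prime_implicants.append(cand)

print(prime_implicants)
# [{'stale_sensor': True, 'watchdog_fail': True},
#  {'stale_sensor': True, 'fault_handler': True}]
```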
Risk Analysis | 2009
Haiyan Li; George E. Apostolakis; Joseph Gifun; William Vanschalkwyk; Susan Leite; David Barber
Natural hazards, human-induced accidents, and malicious acts have caused great losses and disruptions to society. After September 11, 2001, critical infrastructure protection has become a national focus in the United States and is likely to remain one for the foreseeable future. Damage to the infrastructures and assets could be mitigated through predisaster planning and actions. A systematic methodology was developed to assess and rank the risks from these multiple hazards in a community of 20,000 people. It is an interdisciplinary study that includes probabilistic risk assessment (PRA), decision analysis, and expert judgment. Scenarios are constructed to show how the initiating events evolve into undesirable consequences. A value tree, based on multi-attribute utility theory (MAUT), is used to capture the decisionmaker's preferences about the impacts on the infrastructures and other assets. The risks from random failures are ranked according to their expected performance index (PI), which is the product of the frequency, probabilities, and consequences of a scenario. Risks from malicious acts are ranked according to their conditional PI, since the frequency of attack is not available. A deliberative process is used to capture the factors that could not be addressed in the analysis and to scrutinize the results. This methodology provides a framework for the development of a risk-informed decision strategy. Although this study uses the Massachusetts Institute of Technology campus as a case study of a real project, the methodology is general and could be used by other similar communities and municipalities.
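The expected-PI ranking reduces to a simple product-and-sort computation, sketched below with invented scenarios and numbers. The consequence term stands in for the valued impact obtained from the MAUT value tree.

```python
# Invented scenarios: (name, annual initiating-event frequency,
# conditional probability the event evolves to damage, valued consequence).
scenarios = [
    ("steam_line_break", 1e-2, 0.3, 40.0),
    ("lab_fire",         5e-3, 0.6, 70.0),
    ("water_main_break", 2e-2, 0.2, 25.0),
]

# Expected PI = frequency x probability x consequence; rank descending.
for name, freq, prob, cons in sorted(
        scenarios, key=lambda s: s[1] * s[2] * s[3], reverse=True):
    print(f"{name}: expected PI = {freq * prob * cons:.3f}")

# For malicious acts the attack frequency is unknown, so the ranking
# would instead use the conditional PI (probability x consequence).
```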