Tunc Aldemir
Ohio State University
Publications
Featured research published by Tunc Aldemir.
IEEE Transactions on Reliability | 1987
Tunc Aldemir
Process control systems (PCS) are systems with control loops and continuous state dynamic variables such as pressure, temperature, and liquid level. Existing computer-assisted failure modeling schemes for PCS are based on a static description of system operation (e.g., by digraphs or signal-flow-based graphs). This paper presents a dynamic approach to the failure modeling of PCS. The givens for the methodology are: 1) a set of first order differential equations with feedback describing the interaction between system variables, 2) failure and repair rates for the control units constituting the PCS. The methodology is based on the discrete state space-discrete time representation of PCS dynamics. Probabilistic system behavior is simulated by a Markov chain. An algorithm is developed for the mechanized construction of the transition matrix. Input preparation for the algorithm is illustrated by examples. Useful features of the methodology are: 1) failure model accuracy can be verified or improved by a change in the input data for mechanized model construction, 2) the effect of changes in system parameters on PCS failure characteristics can be quantified. These features are demonstrated on a simple level-control system. The limitations of the methodology are discussed.
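The discrete state space-discrete time idea can be sketched for a single-loop level controller: the liquid level is discretized into cells, the joint state is a (level cell, pump status) pair, and a one-step transition matrix is built mechanically from the cell dynamics and the failure/repair rates. All names, cell counts, and rates below are illustrative assumptions, not the paper's inputs.

```python
import numpy as np

N_CELLS = 10     # discretized liquid-level intervals (cells)
DT = 1.0         # time step (h)
LAMBDA_F = 1e-3  # pump failure rate (1/h), assumed
MU_R = 1e-1      # pump repair rate (1/h), assumed

def level_step(cell, pump_ok):
    """Deterministic level dynamics mapped onto cells: the controller
    drives the level toward the setpoint cell while the pump works;
    otherwise the level drifts down one cell per step (assumed)."""
    setpoint = N_CELLS // 2
    if pump_ok:
        return int(cell + np.sign(setpoint - cell))
    return max(cell - 1, 0)

# Mechanized construction of the one-step Markov transition matrix
# over the joint (level cell, pump status) state.
n_states = N_CELLS * 2
P = np.zeros((n_states, n_states))
for cell in range(N_CELLS):
    for ok in (0, 1):
        i = cell * 2 + ok
        p_change = (LAMBDA_F if ok else MU_R) * DT  # status flip probability
        for new_ok, p in ((ok, 1.0 - p_change), (1 - ok, p_change)):
            j = level_step(cell, bool(new_ok)) * 2 + new_ok
            P[i, j] += p

# Propagate an initial distribution (level at setpoint, pump working).
pi = np.zeros(n_states)
pi[(N_CELLS // 2) * 2 + 1] = 1.0
for _ in range(100):
    pi = pi @ P
p_dry = pi[0] + pi[1]  # probability of the lowest (empty) level cell
```

Because `P` is rebuilt mechanically from the input data, changing the rates, cell boundaries, or `level_step` and regenerating the matrix is exactly the kind of model refinement the abstract highlights.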
Nuclear Technology | 1996
Mohammad Harunuzzaman; Tunc Aldemir
A methodology and a computational scheme are developed based on dynamic programming (DP) to find the minimum cost maintenance schedule for nuclear power plant standby safety systems. Surveillance and testing are assumed to return the component to as-good-as-new condition whether accompanied by restorative maintenance only or full repair or replacement. The methodology defines component state as the number of unsurveilled and untested maintenance intervals or stages, and the optimization process is decomposed into (a) feasibility screening and (b) DP search. This approach achieves a significant reduction in the state space over which the DP search is to be performed. The application of the scheme is demonstrated on the ten-component high-pressure injection system of a pressurized water reactor. This demonstration indicates that the scheme is viable and efficient and particularly suited to exploit any economies of scale and scope that may be present.
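The stage-based state definition and the two-step optimization can be sketched as a small dynamic program: the component state is the number of unsurveilled/untested intervals since the last test, a cap on that count plays the role of the feasibility screening, and testing restores the component to as-good-as-new. The horizon, cap, and costs below are assumed placeholders, not the paper's data.

```python
from functools import lru_cache

HORIZON = 12     # planning stages, assumed
MAX_STAGE = 3    # feasibility limit on unsurveilled intervals, assumed
C_MAINT = 5.0    # cost of one surveillance/test, assumed

def c_risk(s):
    """Unavailability penalty for carrying state s through one stage
    (assumed to grow with the number of untested intervals)."""
    return 1.0 * s ** 2

@lru_cache(maxsize=None)
def best(t, s):
    """Minimum cost-to-go from stage t with component state s;
    returns (cost, action)."""
    if t == HORIZON:
        return 0.0, None
    # option 1: test now (as-good-as-new, one interval then elapses)
    options = [(C_MAINT + c_risk(0) + best(t + 1, 1)[0], 'maintain')]
    # option 2: defer, allowed only if the feasibility screen permits it
    if s < MAX_STAGE:
        options.append((c_risk(s) + best(t + 1, s + 1)[0], 'defer'))
    return min(options)

# Recover the minimum-cost schedule by following the stored actions.
cost = best(0, 0)[0]
schedule, s = [], 0
for t in range(HORIZON):
    action = best(t, s)[1]
    schedule.append(action)
    s = 1 if action == 'maintain' else s + 1
```

The feasibility screen prunes states before the DP search touches them, which is the state-space reduction the abstract describes.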
Reliability Engineering & System Safety | 2008
Paolo Bucci; Jason Kirschenbaum; L. Anthony Mangan; Tunc Aldemir; Curtis Smith; Ted Wood
While the event-tree (ET)/fault-tree (FT) methodology is the most popular approach to probabilistic risk assessment (PRA), concerns have been raised in the literature regarding its potential limitations in the reliability modeling of dynamic systems. Markov reliability models have the ability to capture the statistical dependencies between failure events that can arise in complex dynamic systems. A methodology is presented that combines Markov modeling with the cell-to-cell mapping technique (CCMT) to construct dynamic ETs/FTs and addresses the concerns with the traditional ET/FT methodology. The approach is demonstrated using a simple water level control system. It is also shown how the generated ETs/FTs can be incorporated into an existing PRA so that only the (sub)systems requiring dynamic methods need to be analyzed using this approach while still leveraging the static model of the rest of the system.
Reliability Engineering & System Safety | 2010
Tunc Aldemir; Sergio Guarro; Diego Mandelli; Jason Kirschenbaum; L. A. Mangan; Paolo Bucci; Michael Yau; Eylem Ekici; Don W. Miller; Xiaodong Sun; S.A. Arndt
The Markov/cell-to-cell mapping technique (CCMT) and the dynamic flowgraph methodology (DFM) are two system logic modeling methodologies that have been proposed to address the dynamic characteristics of digital instrumentation and control (I&C) systems and provide risk-analytical capabilities that supplement those provided by traditional probabilistic risk assessment (PRA) techniques for nuclear power plants. Both methodologies utilize a discrete state, multi-valued logic representation of the digital I&C system. For probabilistic quantification purposes, both techniques require the estimation of the probabilities of basic system failure modes, including digital I&C software failure modes, that appear in the prime implicants identified as contributors to a given system event of interest. As in any other system modeling process, the accuracy and predictive value of the models produced by the two techniques depend not only on the intrinsic features of the modeling paradigm but also, to a considerable extent, on the information and knowledge available to the analyst concerning the system behavior and operation rules under normal and off-nominal conditions, and the associated controlled/monitored process dynamics. The application of the two methodologies is illustrated using a digital feedwater control system (DFWCS) similar to that of an operating pressurized water reactor. This application was carried out to demonstrate how the use of either technique, or both, can facilitate the updating of an existing nuclear power plant PRA model following an upgrade of the instrumentation and control system from analog to digital. Because of scope limitations, the focus of the demonstration of the methodologies was intentionally limited to aspects of digital I&C system behavior for which probabilistic data were on hand or could be generated within the existing project bounds of time and resources.
The data used in the probabilistic quantification portion of the process were gathered partially from fault injection experiments with the DFWCS, separately conducted under conservative assumptions, partially from operating experience, and partially from generic databases. The purpose of the quantification portion of the process was purely to demonstrate the PRA-updating use and application of the methodologies, without making any particular claim regarding the specific validity and predictive value of the data utilized to illustrate the quantitative risk calculations produced from the qualitative information analytically generated by the models. A comparison of the results obtained from the Markov/CCMT and DFM regarding the event sequences leading to DFWCS failure modes shows qualitative and quantitative consistency for the risk scenarios and sequences under consideration. The study also shows that: (a) the risk significance of the timing of system component failures may depend on factors that include the actual variability of initiating conditions of a dynamic transient, even within the nominal control range, and (b) the range of dynamic outcomes may also depend on the choice of the assumed basic system-component failure modes included in the models, regardless of whether some of these would or would not be considered to have direct safety implications according to the traditional safety/non-safety equipment classifications.
Reliability Engineering & System Safety | 2010
Benjamin Rutt; Kyle Metzroth; Aram Hakobyan; Tunc Aldemir; Richard Denning; Sean Dunagan; David Kunsman
Analysis of dynamic accident progression trees (ADAPT) is a mechanized procedure for the generation of accident progression event trees. Use of ADAPT substantially reduces the manual and computational effort for Level 2 probabilistic risk assessment (PRA) of nuclear power plants; reduces the likelihood of input errors; determines the order of events dynamically; and treats accidents in a phenomenologically consistent manner. ADAPT is based on the concept of dynamic event trees, which use explicit modeling of the deterministic dynamic processes that take place within the plant (through system simulation codes such as MELCOR or RELAP) for the modeling of stochastic system evolution. The computational infrastructure of ADAPT is presented, along with a prototype implementation of ADAPT using MELCOR for the PRA modeling of a station blackout in a pressurized water reactor. The computational infrastructure allows for flexibility in linking with different simulation codes, parallel processing of the scenarios under consideration, on-line scenario management (initiation as well as termination), and user-friendly graphical capabilities.
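The dynamic-event-tree branching loop that ADAPT mechanizes around a plant simulator can be caricatured with a queue of scenarios. Here a one-line stand-in replaces the plant code (MELCOR/RELAP in the paper), and the demand times, branch probability, and probability cutoff are all assumed values.

```python
from collections import deque

P_DEMAND_FAIL = 0.1          # safety-system failure-on-demand probability, assumed
PROB_CUTOFF = 1e-3           # terminate scenarios below this probability, assumed
BRANCH_TIMES = [10, 20, 30]  # times at which the safety system is demanded

def simulate(temp, t_from, t_to, system_ok):
    """Stand-in for the plant simulation code: temperature falls when
    the safety system cools, rises otherwise (assumed dynamics)."""
    rate = -5.0 if system_ok else +8.0
    return temp + rate * (t_to - t_from)

scenarios = deque([(0, 300.0, 1.0, [])])  # (branch idx, temp, prob, path)
results = []
while scenarios:                          # scenario management loop
    idx, temp, prob, path = scenarios.popleft()
    if idx == len(BRANCH_TIMES):
        results.append((temp, prob, path))
        continue
    t0 = BRANCH_TIMES[idx - 1] if idx else 0
    t1 = BRANCH_TIMES[idx]
    # branch on the stochastic outcome of the safety-system demand
    for ok, p in ((True, 1.0 - P_DEMAND_FAIL), (False, P_DEMAND_FAIL)):
        new_prob = prob * p
        if new_prob < PROB_CUTOFF:        # on-line scenario termination
            continue
        new_temp = simulate(temp, t0, t1, ok)
        scenarios.append((idx + 1, new_temp, new_prob, path + [ok]))

total_prob = sum(p for _, p, _ in results)
```

In ADAPT the entries of the queue would be dispatched to parallel simulator runs; the ordering of events emerges from the simulation rather than being fixed in advance.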
Reliability Engineering & System Safety | 1990
Mahbubul Hassan; Tunc Aldemir
Dynamic methodologies are defined as those which explicitly account for the time element in system operation for failure modeling. A dynamic methodology is presented for the accurate failure analysis of closed loop control systems (CLCS) in process plants, using: (a) a data base to describe the CLCS process physics, and (b) a Markov model to describe the probabilistic evolution of the controlled variables in discrete time and discretized controlled variable state space. Both the preparation of the data base and construction of the Markov model are mechanized. The methodology is advantageous if: (a) there is an existing computer code for the fast simulation of CLCS behavior under both normal and failed component operation, (b) the failure characteristics of the CLCS for given initial conditions and over a specified time interval are of interest, and (c) several analyses need to be performed in view of uncertainties in the component failure data. The methodology is illustrated on a CLCS for maintaining the reactor vessel pressure and water levels in Boiling Water Reactors within allowed ranges following a small-break loss of coolant accident (LOCA). The results for the probability of occurrence of the Top Events (e.g., high pressure, low level) during a 60 min interval following the LOCA are compared to those predicted by the fault-tree approach using digraphs.
IEEE Transactions on Automatic Control | 1999
Laurian Dinca; Tunc Aldemir; Giorgio Rizzoni
A model-based parameter and state estimation technique is presented for fault diagnosis in dynamic systems. The methodology is based on the representation of the system dynamics in terms of transition probabilities between user-specified sets of magnitude intervals of system parameters and state variables during user-specified time intervals. These intervals may reflect noise in the monitored data, random changes in the parameters, or modeling uncertainties in general. The transition probabilities are obtained from a given system model that yields the current values of the state variables in discrete time from their values at the previous time step and the values of the system parameters at the previous time step. Implementation of the methodology on a simplified model of the air, inertial, fuel, and exhaust dynamics of the powertrain of a vehicle shows that the methodology is capable of estimating the system parameters and tracking the unmonitored dynamic variables within the user-specified magnitude intervals.
Reliability Engineering & System Safety | 2013
Diego Mandelli; Alper Yilmaz; Tunc Aldemir; Kyle Metzroth; Richard Denning
A challenging aspect of dynamic methodologies for probabilistic risk assessment (PRA), such as the Dynamic Event Tree (DET) methodology, is the large number of scenarios generated for a single initiating event. Such large amounts of information can be difficult to organize for extracting useful information. Furthermore, it is often not sufficient to merely calculate a quantitative value for the risk and its associated uncertainties. The development of risk insights that can increase system safety and improve system performance requires the interpretation of scenario evolutions and the principal characteristics of the events that contribute to the risk. For a given scenario dataset, it can be useful to identify the scenarios that have similar behaviors (i.e., identify the most evident classes) and to decide, for each event sequence, to which class it belongs (i.e., classification). It is shown how it is possible to accomplish these two objectives using the Mean-Shift Methodology (MSM). The MSM is a kernel-based, non-parametric density estimation technique that is used to find the modes of an unknown data distribution. The algorithm developed finds the modes of the data distribution in the state space corresponding to regions with highest data density, as well as grouping the scenarios generated into clusters based on scenario temporal similarities. The MSM is illustrated using the data generated by a DET algorithm for the analysis of a simple level/temperature controller and reactor vessel auxiliary cooling system.
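The mode-seeking and classification steps can be illustrated with a minimal mean-shift on a one-dimensional scenario feature (e.g., a peak value extracted from each DET scenario). A flat (uniform) kernel is used for simplicity; the data, bandwidth, and two-cluster setup below are assumed for illustration and do not reproduce the paper's temporal-similarity metric.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic scenario features forming two evident classes (assumed)
data = np.concatenate([rng.normal(0.0, 0.3, 50),
                       rng.normal(5.0, 0.3, 50)])
BANDWIDTH = 1.0

def mean_shift(x, data, h, iters=50):
    """Repeatedly shift point x to the local mean of its neighbors
    within bandwidth h; the fixed point approximates a mode of the
    underlying data density."""
    for _ in range(iters):
        nbrs = data[np.abs(data - x) < h]
        new_x = nbrs.mean()
        if abs(new_x - x) < 1e-6:
            break
        x = new_x
    return x

# classify each scenario by the mode its mean-shift trajectory reaches
modes = np.array([mean_shift(x, data, BANDWIDTH) for x in data])
labels = (modes > 2.5).astype(int)  # clusters sit near 0 and 5
```

Scenarios whose trajectories converge to the same mode land in the same class, which is exactly the two-objective use (mode finding, then classification) described above.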
Nuclear Technology | 1995
Rob D. Radulovich; W.E. Vesely; Tunc Aldemir
In the nuclear industry, aging effects have traditionally been incorporated into probabilistic risk assessment studies by using a constant (static) unavailability (q_s) averaged over time. However, recent work shows that, because of aging, substantial deviations may occur in time-dependent nuclear plant component unavailability from that predicted by static models well within the plant lifetime. A methodology based on the standard extension of the classic renewal equation when repair is explicitly considered is used to investigate (a) the trends in the effects of aging on time-dependent component unavailability as a function of changing first failure density (FFD) and test parameters and (b) the circumstances for which static approximations may be inadequate to describe these effects. The investigation uses several point and time-averaged unavailability measures based on time-dependent unavailability, such as before-test unavailability (BTU), average-interval unavailability (AIU), and year-average unavailability (YAU), and is restricted to periodically tested components whose FFDs satisfy the Weibull distribution with aging threshold. The results show that while point measures (e.g., BTU) can substantially differ from static unavailability, and while all measures are sensitive to changes in the Weibull shape parameter b, aging threshold time τ, and time between tests T, the differences between the time-averaged measures used (e.g., AIU, YAU) and the static unavailability were found to be relatively significant for only one case among the more than 100 combinations of b, τ, and T that were investigated. The time-averaged measures may, however, describe the late effects of aging on component unavailability irrespective of b and T (i.e., beyond 25 yr of component age for the data under consideration).
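The contrast between point measures such as BTU and a static unavailability can be sketched numerically for a periodically tested component with a constant baseline failure rate plus Weibull-type aging beyond a threshold age. For illustration, the test is assumed to reveal and repair failures while aging accrues with absolute component age; all parameter values are assumed, not the paper's Weibull data.

```python
import numpy as np

LAM0 = 1e-6          # baseline failure rate (1/h), assumed
B = 2.0              # Weibull shape parameter b, assumed
ALPHA = 1e-11        # aging magnitude, assumed
TAU = 5 * 8760.0     # aging threshold tau (5 yr, in hours), assumed
T_TEST = 18 * 730.0  # test interval T (~18 months, in hours), assumed

def hazard(t):
    """Failure intensity at absolute age t: constant before the aging
    threshold, Weibull-type growth after it."""
    aging = ALPHA * B * (t - TAU) ** (B - 1) if t > TAU else 0.0
    return LAM0 + aging

def unavailability(t, dt=10.0):
    """Point unavailability q(t) = 1 - exp(-integral of the hazard
    since the last test), integrated numerically."""
    t_last = (t // T_TEST) * T_TEST
    grid = np.arange(t_last, t, dt)
    integral = sum(hazard(g) for g in grid) * dt
    return 1.0 - np.exp(-integral)

# before-test unavailability (BTU) just before each of 30 tests
ages = np.arange(1, 31) * T_TEST
btu = [unavailability(a - 1.0) for a in ages]

# conventional static approximation q_s ~ lambda*T/2 (aging-free)
q_static = 1.0 - np.exp(-LAM0 * T_TEST / 2)
```

With these assumed numbers the BTU stays near the static value before the aging threshold and then grows well past it, illustrating why a constant q_s can understate late-life unavailability.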
Nuclear Science and Engineering | 2004
Peng Wang; Tunc Aldemir
The cell-to-cell mapping technique (CCMT) models system evolution in terms of probability of transitions within a user-specified time interval (e.g., data-sampling interval) between sets of user-defined parameter/state variable magnitude intervals (cells). The cell-to-cell transition probabilities are obtained from the given linear or nonlinear plant model. In conjunction with monitored data and the plant model, the Dynamic System Doctor (DSD) software package uses the CCMT to determine the probability of finding the unmonitored parameter/state variables in a given cell at a given time recursively from a Markov chain. The most important feature of the methodology with regard to model-based fault diagnosis is that it can automatically account for uncertainties in the monitored system state, inputs, and modeling uncertainties through the appropriate choice of the cells, as well as providing a probabilistic measure to rank the likelihood of faults in view of these uncertainties. Such a ranking is particularly important for risk-informed regulation and risk monitoring of nuclear power plants. The DSD estimation algorithm is based on the assumptions that (a) the measurement noise is uniformly distributed and (b) the measured variables are part of the state variable vector. A new theoretical basis is presented for CCMT-based state/parameter estimation that waives these assumptions using a Bayesian interpretation of the approach and expands the applicability range of DSD, as well as providing a link to the conventional state/parameter estimation schemes. The resulting improvements are illustrated using a point reactor xenon evolution model in the presence of thermal feedback and compared to the previous DSD algorithm.
The results of the study show that the new theoretical basis (a) increases the applicability of the methodology to arbitrary observers and arbitrary noise distributions in the monitored data, as well as to arbitrary uncertainties in the model parameters; (b) leads to improvements in the estimation speed and accuracy; and (c) allows the estimator to be used for noise reduction in the monitored data. The connection between DSD and conventional state/parameter estimation schemes is shown and illustrated for the least-squares estimator, maximum likelihood estimator, and Kalman filter using a recently proposed scheme for directly measuring local power density in nuclear reactor cores.
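The Bayesian interpretation described above amounts to a predict/update recursion over cells: the prior cell probabilities are propagated through the cell-to-cell transition matrix and then conditioned on a measurement likelihood that need not be uniform. The toy transition matrix, likelihood shape, and measurement sequence below are assumed for illustration, not taken from the reactor model.

```python
import numpy as np

N = 5
# toy cell-to-cell transition matrix (rows sum to 1), assumed:
# mostly stay in the same cell, small spread to all cells
G = np.full((N, N), 0.02) + np.eye(N) * (1 - 0.02 * N)

def likelihood(measured_cell):
    """Gaussian-like measurement likelihood over cells, replacing the
    original uniform-noise assumption (shape assumed)."""
    cells = np.arange(N)
    w = np.exp(-0.5 * (cells - measured_cell) ** 2)
    return w / w.sum()

p = np.full(N, 1.0 / N)      # uninformative prior over cells
for z in [2, 2, 3, 2]:       # sequence of measured cells, assumed
    p = p @ G                # predict: propagate through CCMT matrix
    p = p * likelihood(z)    # update: Bayes conditioning on measurement
    p /= p.sum()             # renormalize to a probability vector

most_likely_cell = int(np.argmax(p))
```

The posterior `p` plays the role of the probabilistic fault-ranking measure: cells (and hence parameter/state hypotheses) are ranked by probability in view of both the model and the noisy measurements.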