P.F. Frutuoso e Melo
Federal University of Rio de Janeiro
Publications
Featured research published by P.F. Frutuoso e Melo.
Nuclear Engineering and Design | 2001
Pedro L.C. Saldanha; Elaine A. de Simone; P.F. Frutuoso e Melo
The purpose of this paper is to present an application of the non-homogeneous Poisson point process to the study of rates of occurrence of failures (ROCOF) when they are time dependent and the times between failures are neither independent nor identically distributed. The application concerns the reliability analysis of service water pumps of a typical PWR nuclear power plant. These pumps are, in general, repairable components. Standard statistical techniques, such as maximum likelihood parameter estimation and linear regression analysis, are applied. The conclusion is that the non-homogeneous Poisson process is an adequate tool for analyzing time-dependent ROCOFs from repairable component failure data. It can be used for surveying aging mechanisms during the operating life of repairable systems and also for assessing the effectiveness of maintenance policies.
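A common parametric choice for a time-dependent ROCOF is the power-law NHPP, w(t) = λβt^(β−1) (the Crow-AMSAA model), whose parameters have closed-form maximum likelihood estimates for a time-truncated observation window. The sketch below uses hypothetical pump failure times, not data from the paper:

```python
import math

def crow_amsaa_mle(failure_times, T):
    """MLE for the power-law NHPP with ROCOF w(t) = lam * beta * t**(beta - 1),
    observation truncated at time T (all failure times must lie in (0, T])."""
    n = len(failure_times)
    beta = n / sum(math.log(T / t) for t in failure_times)
    lam = n / T ** beta
    return lam, beta

def rocof(lam, beta, t):
    """Instantaneous rate of occurrence of failures at time t."""
    return lam * beta * t ** (beta - 1)

# Hypothetical cumulative failure times of a repairable pump, in operating hours.
# Failures cluster toward the end of the window, suggesting aging.
times = [1500.0, 2800.0, 3900.0, 4600.0, 5100.0, 5500.0, 5800.0, 5950.0]
lam, beta = crow_amsaa_mle(times, T=6000.0)
```

A fitted beta greater than 1 indicates an increasing ROCOF (aging); beta less than 1 indicates reliability growth.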
International Journal of Intelligent Systems | 2002
Celso Marcelo Franklin Lapa; Cláudio M.N.A. Pereira; P.F. Frutuoso e Melo
Nuclear power plant systems comprise both on-line and standby components. Standby components differ from on-line ones in that they may be unavailable due to unrevealed failures. The usual procedure employed to reveal failures before real demands is to submit the component to surveillance tests. A surveillance test policy must balance two conflicting requirements: the test frequency must be high enough to reveal failures before demands but, on the other hand, low enough to limit the unavailability the tests themselves introduce.
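For a single component this trade-off can be written, to first order, as q(T) ≈ λT/2 + τ/T, where λ is the standby failure rate, τ the test outage time, and T the test interval; the minimum lies at T* = √(2τ/λ). A minimal sketch with hypothetical numbers:

```python
import math

def avg_unavailability(lam, tau, T):
    """First-order average unavailability for a periodically tested
    standby component: lam*T/2 from unrevealed failures, tau/T from
    the outage caused by the test itself."""
    return lam * T / 2 + tau / T

def optimal_interval(lam, tau):
    # d/dT (lam*T/2 + tau/T) = 0  ->  T* = sqrt(2 * tau / lam)
    return math.sqrt(2 * tau / lam)

# Hypothetical values: failure rate 1e-4 per hour, 2-hour test outage.
T_star = optimal_interval(lam=1e-4, tau=2.0)   # 200 hours
```

Testing either more or less often than T* raises the average unavailability, which is the conflict the abstract describes.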
Reliability Engineering & System Safety | 2005
A. Gromann de Araujo Góes; M.A.B. Alvarenga; P.F. Frutuoso e Melo
We have developed and implemented a computerized reliability monitoring system for nuclear power plant applications, based on a neural network. The computer program is a new kind of operator decision support tool for determining test and maintenance policies in case of component failures, during normal operation or while following an incident sequence in a nuclear power plant. The NAROAS (Neural Network Advanced Reliability Advisory System) computer system was developed as a modularized, integrated system in a C++ Builder environment. It uses a Hopfield neural network instead of fault trees to follow and control the different system configurations, allowing interventions at the plant as quickly as possible. The observed results are comparable to those of other computer systems. As shown, the application of this neural network contributes to the state of the art of risk monitoring systems by making it easier to perform online reliability calculations in the context of probabilistic safety assessments of nuclear power plants.
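The abstract does not give NAROAS internals; as background, a Hopfield network stores reference patterns in a symmetric weight matrix and relaxes a noisy input to the nearest stored pattern. A minimal sketch, with hypothetical system-configuration patterns standing in for plant states:

```python
# Hypothetical +/-1 configuration patterns (e.g. which trains are available).
PATTERNS = [
    [1, 1, -1, -1, 1, -1],
    [-1, 1, 1, 1, -1, -1],
]

def train(patterns):
    """Hebbian learning: W[i][j] = sum over patterns of p[i]*p[j]/P, zero diagonal."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, state, sweeps=5):
    """Asynchronous updates: each unit takes the sign of its local field."""
    state = state[:]
    for _ in range(sweeps):
        for i in range(len(state)):
            h = sum(W[i][j] * state[j] for j in range(len(state)))
            state[i] = 1 if h >= 0 else -1
    return state

W = train(PATTERNS)
noisy = [1, 1, -1, -1, -1, -1]      # pattern 0 with one unit flipped
restored = recall(W, noisy)
```

The relaxation to a stored configuration is what lets such a network classify the current plant state without re-evaluating a fault tree.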
Archive | 2002
Celso Marcelo Franklin Lapa; Cláudio Márcio do Nascimento Abreu Pereira; P.F. Frutuoso e Melo
Nuclear power plant systems comprise both online and cold-standby components. Cold-standby components differ from online ones in that they may be unavailable due to unrevealed failures. The usual procedure employed to reveal failures is to submit the components to surveillance tests. A surveillance test policy must balance two conflicting requirements: the test frequency must be high enough to reveal failures before demands but, on the other hand, low enough due to its influence on component unavailability. Obtaining an optimum surveillance test policy at the system level, which involves many interdependent components, is a very difficult optimization problem. Hence, in this work we propose the use of genetic algorithms to search for the optimum surveillance test policy, due to their robustness and efficiency in complex problem solving. The probabilistic model considers (a) wear-out effects on cold-standby components when they undergo surveillance tests; (b) that when a failure is revealed during a surveillance test, corrective maintenance is performed; (c) that components are distinct (that is, each has distinct test parameters, such as outage time and wear-out factors); and (d) that tests are not necessarily periodic.
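The paper's model (wear-out, corrective maintenance, non-periodic tests) is richer than anything reproducible here; the sketch below only illustrates the genetic-algorithm ingredients (selection, uniform crossover, and mutation) on a simplified fitness: the sum of first-order average unavailabilities λT/2 + τ/T over independent components with hypothetical parameters.

```python
import random

# Hypothetical standby components: (failure rate per hour, test outage hours).
COMPONENTS = [(1e-4, 2.0), (5e-5, 4.0), (2e-4, 1.5)]

def unavailability(intervals):
    """Fitness to minimize: first-order average unavailability summed
    over components, lam*T/2 (unrevealed failures) + tau/T (test outage)."""
    return sum(lam * T / 2 + tau / T for (lam, tau), T in zip(COMPONENTS, intervals))

def genetic_search(pop_size=40, generations=200, seed=1):
    rng = random.Random(seed)
    # A chromosome is one test interval (hours) per component.
    pop = [[rng.uniform(10, 5000) for _ in COMPONENTS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=unavailability)
        survivors = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]   # uniform crossover
            if rng.random() < 0.3:                             # mutation
                i = rng.randrange(len(child))
                child[i] = max(10.0, child[i] * rng.uniform(0.7, 1.3))
            children.append(child)
        pop = survivors + children
    return min(pop, key=unavailability)

best = genetic_search()
```

Because this toy fitness is separable, uniform crossover lets the best per-component intervals recombine quickly; the real system-level problem is hard precisely because component test schedules interact.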
Annals of Nuclear Energy | 1998
P.F. Frutuoso e Melo; Antonio Carlos Marques Alvim; Fernando Carvalho da Silva
Abstract In this paper we discuss the application of the GPT methodology to a reliability engineering problem of great practical interest: the analysis of the influence of the demand rate on the accident rate of a process plant equipped with a single protective channel. This problem has been solved in the literature by traditional methods; that is, for each demand rate value, the system of differential equations that governs the system behavior (derived from a Markovian reliability model) is solved, and the resulting points are used to generate the desired curve. In this paper, the sensitivity analysis is performed by means of a GPT approach in order to show how it can simplify the calculations. Sensitivity studies were performed on the repair efficiency and the demand rate, then on the repair rate and repair efficiency, and finally on the demand rate and repair rate. Third-order GPT approximations agreed well with direct calculations. The relevance of the GPT approach is discussed in the context of redundant protective channels.
Nuclear Engineering and Design | 1985
L.F. Oliveira; P.F. Frutuoso e Melo; Jaime E.P. Lima; Israel L. Stal
Abstract We discuss in this paper a computational application of the explicit method for analyzing event trees in the context of probabilistic risk assessments. A detailed analysis of the explicit method is presented, including the train level analysis (TLA) of safety systems and the impact vector method. It is shown that the penalty for not adopting TLA is that non-conservative results may be reached in some cases. The impact vector method can significantly reduce the number of sequences to be considered, and its use has inspired the definition of a dependency matrix, which enables the proper running of a computer code especially developed for analyzing event trees. This code constructs and quantifies the event trees in the fashion just discussed, receiving as input the construction and quantification dependencies defined in the dependency matrix. The code has been extensively used in the Angra 1 PRA currently underway. In its present version, it outputs the dominant sequences for each given initiator, properly classifying them into core-degradation classes specified by the user. This calculation is made in a pointwise fashion. Extensions of this code are being developed to perform uncertainty analyses on the dominant sequences and to compute risk importance measures of the safety systems involved.
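At its core, explicit event tree quantification enumerates the branch combinations of the top events following an initiator, multiplies branch probabilities along each path, and classifies each sequence into an end state. A minimal sketch with hypothetical top events and probabilities (no TLA, dependency matrix, or impact vectors):

```python
from itertools import product

# Hypothetical initiator frequency and safety-system top events.
INITIATOR_FREQ = 1e-2                                  # per year
TOP_EVENTS = ["RPS", "HPIS", "LPIS"]
FAIL_PROB = {"RPS": 1e-5, "HPIS": 1e-3, "LPIS": 1e-2}

def quantify_sequences():
    """Enumerate all success/failure branch combinations and their frequencies."""
    sequences = []
    for outcome in product((True, False), repeat=len(TOP_EVENTS)):  # True = success
        freq = INITIATOR_FREQ
        for sys, ok in zip(TOP_EVENTS, outcome):
            freq *= (1 - FAIL_PROB[sys]) if ok else FAIL_PROB[sys]
        # Crude end-state classification: any failed system -> core damage.
        state = "OK" if all(outcome) else "CD"
        sequences.append((outcome, state, freq))
    return sequences

seqs = quantify_sequences()
total = sum(freq for _, _, freq in seqs)   # must recover the initiator frequency
```

Sorting the "CD" sequences by frequency gives the dominant sequences per initiator, which is the output the abstract describes; the dependency matrix in the paper additionally prunes and conditions these combinations.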
Nuclear Technology | 2016
É.C. Gomes; Juliana P. Duarte; P.F. Frutuoso e Melo
Abstract The purpose of this paper is to highlight and model the most important steps in cases of human failure in radiotherapy (teletherapy and brachytherapy) procedures by identifying possible modes of human failure. An approach via Bayesian networks (BNs) was used to model and highlight the most relevant steps of teletherapy and brachytherapy. Finally, an expert opinion elicitation procedure was used to quantify the BNs, since no database is available. In the case of teletherapy, observing only the stages of prescription, planning, and execution, the step that most increases the success probability, after consideration of preventive measures, is execution. This is in agreement with cases of errors and accidents reported in the literature, considering that more than 50% of these cases are related to the execution phase. For brachytherapy, the most relevant factor was the use of equipment, whose increase in success probability after consideration of preventive measures was 17.2%, demonstrating the importance of continuous specific training. It is important to mention that the purpose of this study was not to calculate the risk associated with radiotherapy treatments, but rather to check how accident prevention influences procedure success and to observe the relationships among all stages. An uncertainty analysis of the expert data was performed by considering that data scattering followed a normal or a lognormal distribution, depending on the data ranges considered. This analysis revealed that data scattering was better represented by normal distributions, and the results are consistent with the pointwise estimates initially made.
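The paper's BN structure and elicited probabilities are not reproduced in the abstract. The sketch below only illustrates the kind of what-if question asked, namely how much a preventive measure on one stage raises overall success probability, using a degenerate BN in which success is the conjunction of independent stages with hypothetical probabilities:

```python
# Hypothetical per-stage success probabilities for teletherapy
# (stand-ins for expert-elicited values, not the paper's numbers).
STAGE_OK = {"prescription": 0.995, "planning": 0.99, "execution": 0.95}

def success_probability(stage_ok):
    """Overall success when every stage must succeed and stages are
    independent (a degenerate, serial Bayesian network)."""
    p = 1.0
    for v in stage_ok.values():
        p *= v
    return p

base = success_probability(STAGE_OK)

# Hypothetical preventive measure raising execution-stage success.
improved = dict(STAGE_OK, execution=0.99)
gain = success_probability(improved) - base
```

With these numbers the execution stage, being the weakest link, yields the largest gain, mirroring the qualitative finding reported for teletherapy; the paper's actual BNs encode dependencies among stages rather than a plain product.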
Nuclear Technology | 2014
J. M. O. Pinto; P.F. Frutuoso e Melo; Pedro L.C. Saldanha
Abstract A methodology combining the Dynamic Flowgraph Methodology (DFM) and A Technique for Human Event Analysis (ATHEANA) is applied to a digital control system proposed for the pressurizer of current pressurized water reactor plants. The methodology consists of modeling this control system and its interactions with the controlled process and the operator through an integrated DFM/ATHEANA approach. The results were complemented by the opinions of experts in conjunction with fuzzy theory. In terms of human reliability, DFM, along with ATHEANA, can model equipment failure modes, operator errors (omission/commission), and the human factors that, combined with plant conditions, influence human performance. The results show that the methodology provides an efficient fault analysis of digital systems, identifying all possible interactions among components. Through prime implicants, the methodology shows the event combinations that lead to system failure. The quantitative results agree with literature data, with discrepancies of a few percent.
Progress in Nuclear Energy | 1999
P.F. Frutuoso e Melo; Antonio Carlos Marques Alvim; H.C. Noriega; M.E. Nunes; Eduardo A. Oliveira; Elaine A. de Simone; Pedro L.C. Saldanha
This paper presents and discusses three probabilistic models for component aging. The NRC and IAEA views of aging phenomena are reviewed and discussed. In the first model, repair is approached by stochastic point processes, and a statistical dynamical model is employed to allow for Bayesian forecasting. A discussion of repairable systems terminology is presented, because much controversy surrounds this feature. A queueing model is then presented for a component, with its age treated as a supplementary variable; numerical solutions, and exact solutions for some cases, are presented. Aging is modeled by Weibull and lognormal distributions for failure times, and repair by exponential distributions. The device of stages is applied to the same problem, and results are obtained for a Weibull failure time distribution. Typical means and variances of the times to failure are considered, and combinations of stages are checked. Alternative solutions by failure rate discretization are generated to check the validity of the developed models. An important issue concerning these models is discussed: that of appropriate failure data for each of them.
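The device of stages replaces a non-exponential time to failure by a chain of exponential stages so that the model stays Markovian. A common two-moment version (an assumption here, not necessarily the authors' fitting procedure) matches an Erlang chain of k identical stages to the Weibull mean and variance:

```python
import math

def weibull_mean_var(beta, eta):
    """Mean and variance of a Weibull(shape beta, scale eta) time to failure."""
    g1 = math.gamma(1 + 1 / beta)
    g2 = math.gamma(1 + 2 / beta)
    mean = eta * g1
    var = eta ** 2 * (g2 - g1 ** 2)
    return mean, var

def device_of_stages(mean, var):
    """Match an Erlang chain (k exponential stages, rate mu each) to the
    first two moments: k/mu = mean and k/mu**2 = var, so k ~ mean**2/var."""
    k = max(1, round(mean ** 2 / var))
    mu = k / mean
    return k, mu

# Hypothetical aging component: shape 2 (increasing hazard), scale 1000 h.
mean, var = weibull_mean_var(beta=2.0, eta=1000.0)
k, mu = device_of_stages(mean, var)
```

The resulting k exponential stages can be inserted directly into a Markov model, which is why the device is attractive when exact supplementary-variable solutions become unwieldy.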
Nuclear Technology | 2013
Laís Alencar de Aguiar; P.F. Frutuoso e Melo; Antonio Carlos Marques Alvim
This paper aims to determine, for the period of institutional control (300 yr), the probability of occurrence of the net release scenario of radioactive waste from a near-surface repository. The radioactive waste focused on in this work is that of low and medium activity generated by a pressurized water reactor plant. The repository is divided into eight modules, each of which consists of six barriers (top cover, upper layer, packages, base, walls, and geosphere). The repository is a system where the modules work in series and the module barriers work in active parallel. The module failure probability for radioactive elements is obtained from a Markov model because of shared loads assumed for the different barriers. Lack of field failure data led to the necessity of performing sensitivity analyses to assess the failure rate impact on module and barrier failure probabilities. Module failure probabilities have been found to be lower for those radioactive elements with higher retardation coefficients. The geosphere mean time to failure is the most important parameter for calculating module failure probabilities for each radioactive element. The repository module has presented higher failure probabilities for iodine, technetium, and strontium. For iodine, the estimated probability is 16% for 300 yr and 96% for 1000 yr. The basis for performance evaluation of the deposition system is the understanding of its gradual evolution. There are many uncertainty sources in this modeling, and efforts in this direction are strongly recommended.
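The abstract does not give the shared-load Markov model itself. As an illustration only, the sketch below treats one module as a Markov chain on the number of failed barriers, where the rate of the next barrier failure grows with the number already failed (load sharing), and then combines eight modules in series; all rates and the load-sharing factor are hypothetical.

```python
N_BARRIERS = 6    # top cover, upper layer, packages, base, walls, geosphere
N_MODULES = 8

def module_failure_prob(lam, alpha, t_years, dt=0.01):
    """Markov chain on the number of failed barriers (0..N_BARRIERS).
    Transition rate from state j to j+1 is (N-j)*lam*alpha**j: surviving
    barriers fail faster as the shared load concentrates (alpha > 1).
    Integrated by forward Euler with step dt (years)."""
    p = [1.0] + [0.0] * N_BARRIERS      # start with all barriers intact
    for _ in range(int(t_years / dt)):
        new = p[:]
        for j in range(N_BARRIERS):
            rate = (N_BARRIERS - j) * lam * alpha ** j
            flow = rate * p[j] * dt
            new[j] -= flow
            new[j + 1] += flow
        p = new
    return p[N_BARRIERS]                # probability all barriers have failed

# Hypothetical base failure rate and load-sharing factor, over 300 years.
q = module_failure_prob(lam=1e-3, alpha=2.0, t_years=300.0)
release_prob = 1 - (1 - q) ** N_MODULES   # eight modules in series
```

In this structure the series combination means any single module failure constitutes a release path, so the repository failure probability always exceeds that of one module; element-specific retardation would enter through lam, as the abstract's sensitivity analyses suggest.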