Marko Čepin
University of Ljubljana
Publications
Featured research published by Marko Čepin.
Reliability Engineering & System Safety | 2009
Andrija Volkanovski; Marko Čepin; Borut Mavko
A new method for power system reliability analysis using the fault tree analysis approach is developed. The method is based on fault trees generated for each load point of the power system. The fault trees are related to disruption of energy delivery from generators to the specific load points. Quantitative evaluation of the fault trees, which provides the basis for assessing the reliability of power delivery, enables identification of the most important elements in the power system. The algorithm is implemented in a computer code, which facilitates the application of the method and has been applied to the IEEE test system. The power system reliability was assessed and the main contributors to power system reliability were identified, both qualitatively and quantitatively.
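To illustrate the kind of quantitative evaluation described above, the following sketch quantifies a single load-point fault tree from its minimal cut sets using the rare-event approximation and ranks the basic events by Fussell-Vesely importance. The component names, failure probabilities and cut sets are illustrative assumptions, not data from the paper.

```python
from itertools import chain

# Each minimal cut set is a set of basic events whose joint failure
# disconnects the load point from all generators (illustrative only).
cut_sets = [
    {"line_1", "line_2"},       # loss of both supply lines
    {"transformer_A"},          # single transformer failure
    {"line_1", "breaker_B"},
]

# Assumed failure probabilities of the basic events.
prob = {"line_1": 1e-2, "line_2": 1e-2, "transformer_A": 1e-4, "breaker_B": 5e-3}

def cut_set_prob(cs):
    """Probability of a cut set, assuming independent basic events."""
    p = 1.0
    for event in cs:
        p *= prob[event]
    return p

# Rare-event approximation of the load-point unreliability (top event).
top = sum(cut_set_prob(cs) for cs in cut_sets)
print(f"load-point unreliability ~ {top:.2e}")

# Fussell-Vesely importance: share of the top-event probability carried by
# the cut sets that contain a given basic event.
for event in sorted(set(chain.from_iterable(cut_sets))):
    fv = sum(cut_set_prob(cs) for cs in cut_sets if event in cs) / top
    print(f"{event:14s} FV = {fv:.3f}")
```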
Reliability Engineering & System Safety | 2002
Marko Čepin; Borut Mavko
The fault tree analysis is a widely used method for evaluation of system reliability and nuclear power plant safety. This paper presents a new method, which extends the classic fault tree with time requirements. The dynamic fault tree offers a range of risk-informed applications. The results show that application of the dynamic fault tree may reduce system unavailability, e.g. through proper arrangement of safety equipment outages. The findings suggest that the dynamic fault tree is a useful tool for expanding and upgrading the existing models and knowledge obtained from probabilistic safety assessment with additional, time-dependent information in order to further reduce plant risk.
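A minimal sketch of the time-extension idea, not the paper's model: two redundant standby trains whose test outages are represented as time windows during which the train is unavailable. Scheduling the outages simultaneously versus staggered changes the mean system unavailability. All durations and probabilities are assumed for illustration.

```python
import numpy as np

HOURS = 24 * 7        # one week, hourly resolution (assumed)
q_standby = 1e-2      # baseline unavailability of each train (assumed)
outage_len = 4        # hours a train is unavailable during its test

def train_unavailability(outage_start):
    """Hourly unavailability of one train; 1.0 inside its test outage window."""
    q = np.full(HOURS, q_standby)
    q[outage_start:outage_start + outage_len] = 1.0
    return q

def mean_system_unavailability(start_a, start_b):
    """Two redundant trains in parallel: the system fails only if both are down."""
    qa = train_unavailability(start_a)
    qb = train_unavailability(start_b)
    return float(np.mean(qa * qb))

print("simultaneous outages:", mean_system_unavailability(10, 10))
print("staggered outages:   ", mean_system_unavailability(10, 80))
```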
Reliability Engineering & System Safety | 2002
Marko Čepin
Testing and maintenance activities of safety equipment in nuclear power plants offer an important potential for risk and cost reduction. An optimization method based on the simulated annealing algorithm is presented. The method determines the optimal schedule of safety equipment outages due to testing and maintenance by minimizing a selected risk measure. The mean value of the selected time-dependent risk measure represents the objective function of the optimization. The time-dependent function of the selected risk measure is obtained from probabilistic safety assessment, i.e. fault tree analysis at the system level and fault tree/event tree analysis at the plant level, both extended with the inclusion of time requirements. The results of several examples show that risk can be reduced by application of the proposed method. Because of the large uncertainties in probabilistic safety assessment, the most important result of the method may not be the selection of the most suitable outage schedule among those that result in similarly low risk, but rather the prevention of outage schedules that result in high risk. This finding increases the importance of evaluation speed relative to the requirement of always finding the global optimum, even when it is only slightly better than a certain local one.
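The sketch below shows the general shape of such a simulated annealing search: candidate outage schedules are perturbed, worse schedules are accepted with a temperature-dependent probability, and the schedule with the lowest mean risk is kept. The risk function here is a toy stand-in; in the paper the objective comes from the time-extended probabilistic safety assessment model.

```python
import math
import random

WEEKS = 52
N_COMPONENTS = 4       # number of safety components whose outages are scheduled

def mean_risk(schedule):
    """Toy risk measure: overlapping outages in the same week are penalised."""
    baseline = 1e-5
    overlaps = sum(1 for i in range(N_COMPONENTS)
                   for j in range(i + 1, N_COMPONENTS)
                   if schedule[i] == schedule[j])
    return baseline + 1e-4 * overlaps

def neighbour(schedule):
    """Perturb one randomly chosen outage week."""
    s = list(schedule)
    s[random.randrange(N_COMPONENTS)] = random.randrange(WEEKS)
    return s

random.seed(0)
current = [random.randrange(WEEKS) for _ in range(N_COMPONENTS)]
best, best_risk = current, mean_risk(current)
temperature = 1e-3
for _ in range(2000):
    candidate = neighbour(current)
    delta = mean_risk(candidate) - mean_risk(current)
    # Accept improvements always; accept worse schedules with a probability
    # that shrinks as the temperature cools.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        current = candidate
    if mean_risk(current) < best_risk:
        best, best_risk = current, mean_risk(current)
    temperature *= 0.995

print("best outage weeks:", best, "mean risk:", best_risk)
```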
Reliability Engineering & System Safety | 2008
Andrija Volkanovski; Borut Mavko; Tome Boševski; Anton Čauševski; Marko Čepin
A new method for optimisation of the maintenance scheduling of generating units in a power system is developed. Maintenance is scheduled to minimise risk through minimisation of the yearly loss of load expectation (LOLE), taken as a measure of power system reliability. The proposed method uses a genetic algorithm to obtain the best solution, i.e. the minimal value of the annual LOLE for the power system in the analysed period. The operational constraints for generating units are included in the method. The proposed algorithm was tested on the Macedonian power system and the obtained results were compared with the results of the approximate methodology. The results show improved reliability of the power system with the maintenance schedule obtained by the new method compared to the schedule from the approximate methodology.
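A sketch of the fitness evaluation only, under assumed unit data and loads: yearly LOLE for a candidate maintenance schedule, computed by enumerating unit outage states against weekly peak loads. The genetic algorithm in the paper searches over such schedules; it is not reproduced here.

```python
from itertools import product

units = [  # (capacity in MW, forced outage rate) -- assumed data
    (200, 0.05), (200, 0.05), (150, 0.04), (100, 0.08),
]
weekly_peak_load = [420] * 20 + [380] * 20 + [430] * 12   # 52 weekly peaks, MW

def loss_of_load_prob(in_service_units, load):
    """P(available capacity < load), enumerating unit up/down states."""
    p_loss = 0.0
    for states in product((0, 1), repeat=len(in_service_units)):
        p, capacity = 1.0, 0.0
        for (cap, forced_outage_rate), up in zip(in_service_units, states):
            p *= (1.0 - forced_outage_rate) if up else forced_outage_rate
            capacity += cap if up else 0.0
        if capacity < load:
            p_loss += p
    return p_loss

def yearly_lole(maintenance_week):
    """LOLE in days/year for a schedule listing each unit's maintenance week."""
    lole = 0.0
    for week, load in enumerate(weekly_peak_load):
        in_service = [u for u, w in zip(units, maintenance_week) if w != week]
        lole += loss_of_load_prob(in_service, load) * 7   # 7 days per week
    return lole

print(yearly_lole([25, 30, 35, 40]))   # maintenance spread over low-load weeks
print(yearly_lole([45, 46, 47, 48]))   # maintenance bunched into high-load weeks
```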
Reliability Engineering & System Safety | 2005
Marko Čepin
A truncation limit defines the boundary between what is considered in the probabilistic safety assessment and what is neglected. The truncation limit in focus here is the limit on the size of the minimal cut set contribution below which cut sets are cut off. A new method was developed, which defines the truncation limit in probabilistic safety assessment. The method specifies truncation limits more stringently than the existing documents dealing with truncation criteria in probabilistic safety assessment presently do. The results of this paper indicate that the truncation limits for more complex probabilistic safety assessments, which consist of a larger number of basic events, should be more severe than presently recommended in existing documents if more accuracy is desired. The truncation limits defined by the new method reduce the relative errors of importance measures and produce more accurate results for probabilistic safety assessment applications. The reduced relative errors of importance measures can prevent situations where the acceptability of a change to the equipment under investigation, judged according to RG 1.174, would shift from the region where changes can be accepted to the region where changes cannot be accepted if the results were calculated with a smaller truncation limit.
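The effect of the truncation limit can be illustrated with synthetic cut set probabilities: the coarser the limit, the larger the relative error of the top-event estimate (and, typically, of the importance measures derived from it). The numbers below are generated, not taken from the paper.

```python
import random

random.seed(1)
# Synthetic minimal cut set probabilities spanning several decades.
cut_set_probs = [10 ** random.uniform(-12, -5) for _ in range(20000)]

def top_event(probs, truncation_limit=0.0):
    """Rare-event approximation, keeping only cut sets above the limit."""
    return sum(p for p in probs if p >= truncation_limit)

exact = top_event(cut_set_probs)
for limit in (1e-7, 1e-9, 1e-11):
    approx = top_event(cut_set_probs, limit)
    print(f"limit {limit:.0e}: relative error {(exact - approx) / exact:.2%}")
```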
Reliability Engineering & System Safety | 2008
Marko Čepin
Consideration of dependencies between human actions is an important issue within human reliability analysis. A method was developed, which integrates the features of existing methods and the experience from a full-scope plant simulator. The method is used in a real, plant-specific human reliability analysis as a part of the probabilistic safety assessment of a nuclear power plant. The method distinguishes dependency for pre-initiator events from dependency for initiator and post-initiator events. It identifies dependencies based on scenarios where consecutive human actions are modeled, and based on a list of minimal cut sets obtained by running the minimal cut set analysis with high values of human error probabilities in the evaluation. A large example study, which consisted of a large number of human failure events, demonstrated the applicability of the method. Comparative analyses show that both the selection of the dependency method and the selection of dependency levels within the method largely impact the results of the probabilistic safety assessment. Even if the core damage frequency is not impacted much, the listings of important basic events in terms of risk increase and risk decrease factors may change considerably. More effort is needed on the subject to prepare the background for more detailed guidelines that remove subjectivity from the evaluations as far as possible.
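A sketch of the screening step described above, with invented event names: human error probabilities are set to a conservatively high value so that operator actions survive truncation, and any minimal cut set containing more than one human failure event is flagged as a candidate for dependency assessment.

```python
# Invented minimal cut sets, assumed to be generated with every human error
# probability set to a conservatively high screening value (e.g. 0.5) so that
# operator actions are not truncated away.
cut_sets = [
    {"OP-FAILS-TO-START-AFW", "OP-FAILS-TO-DEPRESSURIZE"},
    {"PUMP-A-FAILS", "OP-FAILS-TO-START-AFW"},
    {"DG-A-FAILS", "DG-B-FAILS"},
]

human_events = {e for cs in cut_sets for e in cs if e.startswith("OP-")}

# Cut sets containing more than one human failure event are candidates for an
# explicit dependency assessment.
dependency_candidates = [cs for cs in cut_sets if len(cs & human_events) > 1]
print("assess dependency within:", dependency_candidates)
```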
Reliability Engineering & System Safety | 2006
Sebastián Martorell; Sofía Carlos; José F. Villanueva; Ana Sánchez; Blas Galván; Daniel Salazar; Marko Čepin
This paper presents the development and application of a double-loop Multiple Objective Evolutionary Algorithm that uses a Multiple Objective Genetic Algorithm to perform the simultaneous optimization of periodic Test Intervals (TI) and Test Planning (TP). It takes into account the time-dependent effect of TP performed on stand-by safety-related equipment. TI and TP are part of the Surveillance Requirements within Technical Specifications at Nuclear Power Plants. The paper addresses the problem of multi-objective optimization in the space of the decision variables, i.e. TI and TP, using a novel, flexible structure of the optimization algorithm. Lessons learnt from applying the methodology to optimize TI and TP for the High-Pressure Injection System are given. The results show that the double-loop Multiple Objective Evolutionary Algorithm is able to find the Pareto set of solutions, i.e. a surface of non-dominated solutions that satisfy all the constraints imposed on the objective functions and decision variables. Decision makers can then adopt the best solution found depending on their particular preference, e.g. minimum cost or minimum unavailability.
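The Pareto filtering step at the core of such a search can be sketched as follows, with invented (unavailability, cost) values; the double-loop evolutionary algorithm itself is not reproduced.

```python
# Candidate solutions produced by the search: invented values of the two
# objectives (unavailability and cost) for different test intervals.
candidates = [
    {"TI_hours": 360,  "unavailability": 1.8e-3, "cost": 210.0},
    {"TI_hours": 720,  "unavailability": 2.1e-3, "cost": 120.0},
    {"TI_hours": 1440, "unavailability": 3.4e-3, "cost": 80.0},
    {"TI_hours": 2160, "unavailability": 3.5e-3, "cost": 95.0},   # dominated
]

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and better in at least one."""
    no_worse = (a["unavailability"] <= b["unavailability"]) and (a["cost"] <= b["cost"])
    better = (a["unavailability"] < b["unavailability"]) or (a["cost"] < b["cost"])
    return no_worse and better

# Keep only the non-dominated solutions -- the Pareto set.
pareto = [c for c in candidates
          if not any(dominates(other, c) for other in candidates if other is not c)]
for solution in pareto:
    print(solution)
```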
Archive | 2011
Marko Čepin
A binary decision diagram is a directed acyclic graph that consists of nodes and edges and represents a Boolean function. It consists of a set of decision nodes, starting at the root node at the top of the diagram. Each decision node has two outgoing branches, a high branch and a low branch, which may be drawn as a solid and a dotted line, respectively. The high and low branches connect decision nodes with each other and thus create decision paths. The high and low branches of the final decision nodes are connected to either the high or the low terminal node, which represents the output of the function. The development of examples of binary decision diagrams is presented in text and in figures. Shannon decomposition is explained. The conversion of a fault tree to a binary decision diagram is shown.
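A minimal sketch of Shannon decomposition for a small illustrative fault tree (TOP = A AND (B OR C)): the top-event function is split on one variable at a time, f = x·f|x=1 + x'·f|x=0, and equal branches are merged, which yields a binary decision diagram structure.

```python
VARS = ("A", "B", "C")   # fixed ordering of basic events along every path

def top(assignment):
    """Illustrative fault tree logic: TOP = A AND (B OR C)."""
    return assignment["A"] and (assignment["B"] or assignment["C"])

def build_bdd(index=0, partial=()):
    """Recursive Shannon decomposition; 0/1 are the low/high terminal nodes."""
    if index == len(VARS):
        return int(top(dict(zip(VARS, partial))))
    low = build_bdd(index + 1, partial + (False,))    # variable = 0 branch
    high = build_bdd(index + 1, partial + (True,))    # variable = 1 branch
    if low == high:            # both branches equal: the test node is redundant
        return low
    return (VARS[index], low, high)

# Nodes are printed as (variable, low branch, high branch).
print(build_bdd())   # ('A', 0, ('B', ('C', 0, 1), 1))
```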
Reliability Engineering & System Safety | 2002
Marko Čepin; Sebastián Martorell
Allowed outage time (AOT) is the maximum time for which certain safety equipment may be out of operation before the plant must be put into a safer operating state. A method for risk-informed evaluation of AOTs is developed, which enables consideration of a set of plant configurations in the evaluation. The method is based on risk measures obtained from probabilistic safety assessment, e.g. the conditional change of core damage frequency given the selected plant configurations. The results of selected examples show that better methods and more data included in the models may reduce the conservatism in the evaluations and may contribute to increased flexibility in decisions on AOT.
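One standard risk measure of this kind is the incremental conditional core damage probability (ICCDP) accumulated while a component is out of service for the AOT. The sketch below uses placeholder numbers, including the acceptance limit, purely for illustration.

```python
# Placeholder values: baseline core damage frequency, the conditional value
# with the component out of service, and the AOT being evaluated.
baseline_cdf = 2.0e-5        # per year
conditional_cdf = 8.0e-5     # per year, with the component out of service
aot_hours = 72

HOURS_PER_YEAR = 8760.0
iccdp = (conditional_cdf - baseline_cdf) * aot_hours / HOURS_PER_YEAR
print(f"ICCDP over a {aot_hours} h AOT: {iccdp:.2e}")

acceptance_limit = 5.0e-7    # placeholder acceptance value, not a quoted criterion
print("acceptable" if iccdp < acceptance_limit else "needs compensatory measures")
```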
Reliability Engineering & System Safety | 2012
Duško Kančev; Marko Čepin
Redundancy and diversity are the main principles of the safety systems in the nuclear industry. Implementation of safety component redundancy has been acknowledged as an effective approach for assuring high levels of system reliability. The existence of redundant components, identical in most cases, implies a probability of their simultaneous failure due to a shared cause, i.e. a common cause failure. This paper presents a new method for explicit modelling of a single component failure event within multiple common cause failure groups simultaneously. The method is based on a modification of the frequently utilised Beta Factor parametric model. The motivation for the development of this method lies in the fact that one of the most widely used software tools for fault tree and event tree modelling within probabilistic safety assessment does not offer the option of simultaneously assigning a single failure event to multiple common cause failure groups. In that sense, the proposed method can be seen as an enhancement of the explicit modelling of common cause failures. A standard standby safety system is selected as a case study for the application and study of the proposed methodology. The results and insights indicate improved, more transparent and more comprehensive models within probabilistic safety assessment.
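The standard beta factor split that the paper builds on can be sketched as follows; the extension to several common cause failure groups shown at the end is only a naive illustration and not the paper's modified model.

```python
Q_total = 1.0e-3   # assumed total failure probability of a redundant pump
beta = 0.05        # assumed fraction attributed to common cause failure

# Standard beta factor split into an independent and a common cause part.
q_independent = (1.0 - beta) * Q_total
q_common_cause = beta * Q_total
print(f"independent: {q_independent:.2e}, common cause: {q_common_cause:.2e}")

# Naive illustration for a component belonging to two CCF groups (e.g. grouped
# by design and by shared support system): divide the common cause part
# between the groups so that the total failure probability is preserved.
groups = {"same-design pumps": 0.6, "shared support system": 0.4}
for name, share in groups.items():
    print(f"{name}: {share * q_common_cause:.2e}")
```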