Nicola Pedroni
Université Paris-Saclay
Publications
Featured research published by Nicola Pedroni.
IEEE Transactions on Power Systems | 2013
Yan-Fu Li; Nicola Pedroni; Enrico Zio
A multi-objective power unit commitment problem is framed to simultaneously consider the objectives of minimizing the operation cost and minimizing the emissions from the generation units. To find the optimal schedule of the generation units, a memetic evolutionary algorithm is proposed, which combines the non-dominated sorting genetic algorithm-II (NSGA-II) with a local search algorithm. The power dispatch sub-problem is solved by the weighted-sum lambda-iteration approach. The proposed method has been tested on systems composed of 10 and 100 generation units over a 24-hour demand horizon. The Pareto-optimal front obtained contains solutions with different trade-offs between the two objectives of cost and emission, and these solutions are superior to those in the Pareto front obtained by the pure NSGA-II. The minimum-cost solutions are shown to compare well with recently published results obtained by single-objective cost optimization algorithms.
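As a rough illustration of the dispatch step mentioned above, the following is a minimal sketch of a weighted-sum lambda-iteration: assuming quadratic unit cost curves (whose coefficients would, in the multi-objective setting, be weighted combinations of the cost and emission curves), the common incremental cost lambda is found by bisection so that the total output of the committed units matches the demand. All coefficients and limits below are hypothetical.

```python
import numpy as np

def lambda_iteration_dispatch(b, c, p_min, p_max, demand, tol=1e-6):
    """Dispatch units at equal incremental cost lambda, found by bisection."""
    lo, hi = b.min(), (b + 2.0 * c * p_max).max()   # bracket for lambda
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        # optimality condition dC_i/dP_i = lambda, clipped to the unit limits
        p = np.clip((lam - b) / (2.0 * c), p_min, p_max)
        if p.sum() < demand:
            lo = lam   # total output too low: raise the incremental cost
        else:
            hi = lam
    return p

# Illustrative 3-unit example; coefficients are made up for demonstration.
b = np.array([7.0, 7.5, 8.0])        # linear cost coefficients
c = np.array([0.005, 0.006, 0.008])  # quadratic cost coefficients
p = lambda_iteration_dispatch(b, c, np.full(3, 50.0), np.full(3, 300.0), demand=450.0)
print(p, p.sum())
```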
Reliability Engineering & System Safety | 2009
Enrico Zio; Nicola Pedroni
Thermal-hydraulic (T-H) passive systems play a crucial role in the development of future solutions for nuclear power plant technologies. A fundamental issue still to be resolved is the quantification of the reliability of such systems. The difficulty comes from the uncertainties in the evaluation of their performance, because of the lack of experimental and operational data and of validated models of the phenomena involved. The uncertainties concern deviations from the expected T-H behaviour, due to the onset of physical phenomena impairing the system performance or to changes in the initial/boundary conditions of system operation. In this work, some insights resulting from a survey on the technical issues associated with estimating the reliability of T-H passive systems in the context of nuclear safety are first provided. It is concluded that the most realistic assessment of the passive system response to the uncertain accident conditions can be achieved by Monte Carlo (MC) sampling of the uncertain system parameters, followed by simulation of the accident evolution with a detailed mechanistic T-H code. This procedure, however, requires considerable and often prohibitive computational effort to achieve acceptable accuracy, so that a limitation on the MC sample size, i.e., on the number of code runs, is necessarily forced onto the analysis. As a consequence, it becomes mandatory to provide quantitative measures of the uncertainty of the computed estimates. To this aim, two classes of statistical methods are proposed in the paper to quantify, in terms of confidence intervals, the uncertainties associated with the reliability estimates. The first method is based on the probability laws of the binomial distribution governing the stochastic process of system success or failure. The second method is founded on the concept of bootstrapping, which is suitable for assessing the accuracy of estimators when no prior information on their distributions is available. To the authors' knowledge, this is the first time that these methods have been applied to quantitatively bracket the confidence in the estimates of the reliability of passive systems by MC simulation. The two methods are demonstrated by an application to a real passive system from the literature.
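The following is a minimal sketch of the two uncertainty-quantification ideas described above, assuming the outcome of each mechanistic-code run has been reduced to a 0/1 failure indicator (the synthetic `outcomes` vector below is a placeholder, and the binomial interval shown is the Clopper-Pearson form; the authors' exact estimators may differ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder for the 0/1 failure indicators of N mechanistic-code runs.
outcomes = rng.binomial(1, 0.03, size=200)
k, n = int(outcomes.sum()), outcomes.size
p_hat = k / n

# (i) Exact binomial (Clopper-Pearson) 95% confidence interval.
lower = stats.beta.ppf(0.025, k, n - k + 1) if k > 0 else 0.0
upper = stats.beta.ppf(0.975, k + 1, n - k) if k < n else 1.0

# (ii) Nonparametric bootstrap: resample the code runs with replacement and
# take percentiles of the re-estimated failure probability.
boot = rng.choice(outcomes, size=(5000, n), replace=True).mean(axis=1)
b_lower, b_upper = np.percentile(boot, [2.5, 97.5])

print(p_hat, (lower, upper), (b_lower, b_upper))
```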
Reliability Engineering & System Safety | 2009
Enrico Zio; Piero Baraldi; Nicola Pedroni
Power system generation scheduling is an important issue from both the economic and the environmental safety viewpoints. The scheduling involves decisions with regard to the units' start-up and shut-down times and to the assignment of the load demands to the committed generating units, for minimizing the system operation costs and the emission of atmospheric pollutants. Like many other real-world engineering problems, power system generation scheduling involves multiple, conflicting optimization criteria for which there exists no single best solution with respect to all the criteria considered. Multi-objective optimization algorithms, based on the principle of Pareto optimality, can then be designed to search for the set of nondominated scheduling solutions, from which the decision-maker (DM) must a posteriori choose the preferred alternative. On the other hand, information is often available a priori regarding the DM's preference values with respect to the objectives. When possible, it is important to exploit this information during the search, so as to focus it on the preferred region of the Pareto-optimal set. In this paper, ways are explored to use this preference information for driving a multi-objective genetic algorithm towards the preferred region of the Pareto-optimal front. Two methods are considered: the first extends the concept of Pareto dominance by biasing the chromosome replacement step of the algorithm by means of numerical weights that express the DM's preferences; the second drives the search algorithm by changing the shape of the dominance region according to linear trade-off functions specified by the DM. The effectiveness of the proposed approaches is first compared on a case study from the literature. Then, a nonlinear, constrained, two-objective power generation scheduling problem is effectively tackled.
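A minimal sketch of the second idea (dominance checked on linearly combined objectives, which reshapes the dominance region) is given below; the weights `a12` and `a21`, the test points, and the exact functional form are illustrative assumptions, not the authors' formulation:

```python
def guided_dominates(fa, fb, a12=0.5, a21=0.5):
    """True if objectives fa dominate fb (minimization) under linear trade-offs.

    a12, a21 encode the DM's acceptable trade-offs between the two objectives;
    a12 = a21 = 0 recovers plain Pareto dominance.
    """
    ga = (fa[0] + a12 * fa[1], fa[1] + a21 * fa[0])
    gb = (fb[0] + a12 * fb[1], fb[1] + a21 * fb[0])
    return all(x <= y for x, y in zip(ga, gb)) and ga != gb

# These two points are Pareto-incomparable, but with the trade-off weights
# the first one dominates, shrinking the front toward the preferred region.
print(guided_dominates((1.0, 5.2), (1.2, 5.1)))                      # True
print(guided_dominates((1.0, 5.2), (1.2, 5.1), a12=0.0, a21=0.0))    # False
```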
Reliability Engineering & System Safety | 2012
Enrico Zio; Nicola Pedroni
Thermal-Hydraulic (T-H) passive safety systems are potentially more reliable than active systems, and for this reason are expected to improve the safety of nuclear power plants. However, uncertainties are present in the operation and modeling of a T-H passive system and the system may find itself unable to accomplish its function. For the analysis of the system functional failures, a mechanistic code is used and the probability of failure is estimated based on a Monte Carlo (MC) sample of code runs which propagate the uncertainties in the model and numerical values of its parameters/variables. Within this framework, sensitivity analysis aims at determining the contribution of the individual uncertain parameters (i.e., the inputs to the mechanistic code) to (i) the uncertainty in the outputs of the T-H model code and (ii) the probability of functional failure of the passive system. The analysis requires multiple (e.g., many hundreds or thousands) evaluations of the code for different combinations of system inputs: this makes the associated computational effort prohibitive in those practical cases in which the computer code requires several hours to run a single simulation. To tackle the computational issue, in this work the use of the Subset Simulation (SS) and Line Sampling (LS) methods is investigated. The methods are tested on two case studies: the first one is based on the well-known Ishigami function [1]; the second one involves the natural convection cooling in a Gas-cooled Fast Reactor (GFR) after a Loss of Coolant Accident (LOCA) [2].
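For concreteness, below is a compact, simplified Subset Simulation sketch for a generic performance function g with failure defined as g(X) > t and standard-normal inputs; the conditional sampling uses a basic random-walk Metropolis step, and the Ishigami test function is mapped from normal to uniform inputs. This is an illustrative toy under these assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.special import ndtr

rng = np.random.default_rng(1)

def subset_simulation(g, dim, t_fail, n=1000, p0=0.1, max_levels=20):
    """Estimate P[g(X) > t_fail] for X ~ N(0, I) by Subset Simulation."""
    x = rng.standard_normal((n, dim))
    y = np.array([g(xi) for xi in x])
    p_f, n_seed = 1.0, int(p0 * n)
    for _ in range(max_levels):
        order = np.argsort(y)[::-1]        # sort samples by performance, descending
        t = y[order[n_seed - 1]]           # intermediate threshold for this level
        if t >= t_fail:
            return p_f * np.mean(y > t_fail)
        p_f *= p0
        # Repopulate {g > t} with Markov chains started from the seed samples
        # (simplified random-walk Metropolis for the standard-normal target).
        xs, ys = [], []
        for xi, yi in zip(x[order[:n_seed]], y[order[:n_seed]]):
            for _ in range(n // n_seed):
                cand = xi + rng.uniform(-1.0, 1.0, dim)
                if rng.random() < np.exp(0.5 * (xi @ xi - cand @ cand)):
                    y_cand = g(cand)
                    if y_cand > t:         # reject moves that leave the current subset
                        xi, yi = cand, y_cand
                xs.append(xi.copy()); ys.append(yi)
        x, y = np.array(xs), np.array(ys)
    return p_f * np.mean(y > t_fail)

def ishigami_g(x, a=7.0, b=0.1):
    u = np.pi * (2.0 * ndtr(x) - 1.0)      # map N(0,1) inputs to U(-pi, pi)
    return np.sin(u[0]) + a * np.sin(u[1]) ** 2 + b * u[2] ** 4 * np.sin(u[0])

print(subset_simulation(ishigami_g, dim=3, t_fail=14.0))
```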
Reliability Engineering & System Safety | 2012
Francesco Cadini; D. Avram; Nicola Pedroni; Enrico Zio
In this paper, we show an original application of the Subset Simulation (SS) technique to a model for the performance assessment of a near-surface radioactive waste repository. The logic of the protective barriers of the repository is represented by a reliability model. The SS approach is founded on the idea that a small failure probability can be expressed as a product of larger conditional probabilities of some intermediate events; with a proper choice of the conditional events, the conditional probabilities can be sufficiently large to allow accurate estimation with a small number of samples. In the application, the method improves the efficiency of the random sampling for estimating the repository containment failure probability. Moreover, the peculiar set-partitioning scheme of the SS method is exploited to analyze the sensitivity of the failure probability estimate to the uncertain model parameters.
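In formulas, denoting the failure event by F and choosing nested intermediate events F_1 ⊃ F_2 ⊃ … ⊃ F_m = F, the factorization underlying SS reads:

```latex
P(F) \;=\; P(F_1)\,\prod_{i=2}^{m} P\!\left(F_i \mid F_{i-1}\right),
\qquad F = F_m \subset F_{m-1} \subset \dots \subset F_1 .
```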
IEEE Transactions on Reliability | 2016
Yi-Ping Fang; Nicola Pedroni; Enrico Zio
In this paper, we propose two metrics, i.e., the optimal repair time and the resilience reduction worth, to measure the criticality of the components of a network system from the perspective of their contribution to system resilience. Specifically, the two metrics quantify: 1) the priority with which a failed component should be repaired and re-installed into the network and 2) the potential loss in the optimal system resilience due to a time delay in the recovery of a failed component, respectively. Given the stochastic nature of disruptive events on infrastructure networks, a Monte Carlo-based method is proposed to generate probability distributions of the two metrics for all the components of the network; then, a stochastic ranking approach based on Copeland's pairwise aggregation is used to rank the components' importance. Numerical results are obtained for the IEEE 30-bus test network and a comparison is made with three classical centrality measures.
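A hedged sketch of the ranking step is shown below: given Monte Carlo samples of a criticality metric for each component, Copeland-style pairwise aggregation scores each component by its pairwise win-loss record. The data and the exact aggregation details are illustrative assumptions:

```python
import numpy as np

def copeland_rank(samples):
    """samples: (n_components, n_mc) criticality samples (higher = more critical)."""
    m = samples.shape[0]
    score = np.zeros(m)
    for i in range(m):
        for j in range(i + 1, m):
            win_i = np.mean(samples[i] > samples[j])   # pairwise win frequency
            if win_i > 0.5:
                score[i] += 1.0; score[j] -= 1.0
            elif win_i < 0.5:
                score[j] += 1.0; score[i] -= 1.0
    return np.argsort(-score)   # component indices, most critical first

rng = np.random.default_rng(2)
mc = rng.normal(loc=[[3.0], [1.0], [2.0]], scale=1.0, size=(3, 500))
print(copeland_rank(mc))   # expected ranking: component 0, then 2, then 1
```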
Archive | 2010
Enrico Zio; Nicola Pedroni
Monte Carlo simulation (MCS) offers a powerful means for evaluating the reliability of a system, due to the modeling flexibility it offers regardless of the type and dimension of the problem. The method is based on the repeated sampling of realizations of system configurations, which, however, seldom correspond to failure; a large number of realizations must therefore be simulated in order to achieve an acceptable accuracy in the estimated failure probability, at the cost of large computing times. For this reason, techniques for the efficient sampling of system failure realizations are of interest, in order to reduce the computational effort.
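The computational burden referred to above can be made concrete in a few lines: the crude Monte Carlo estimator of a failure probability p has coefficient of variation roughly sqrt((1-p)/(N·p)), so rare failures require very large samples. The failure probability below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
p_true = 1e-3                        # hypothetical failure probability
n = 10_000
failures = rng.random(n) < p_true    # stand-in for sampled system configurations
p_hat = failures.mean()
cov = np.sqrt((1 - p_hat) / (n * p_hat)) if p_hat > 0 else np.inf
print(p_hat, cov)                    # ~30% relative error even with 10,000 runs
```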
Reliability Engineering & System Safety | 2016
Elisa Ferrario; Nicola Pedroni; Enrico Zio
In this paper, we present a methodological work that adopts a system-of-systems (SoS) viewpoint for the evaluation of the robustness of interdependent critical infrastructures (CIs). We propose a Hierarchical Graph representation, where the product flow is dispatched to the demand nodes according to different priorities. We use a multi-state model to describe different degrees of degradation of the individual components, where the transitions between the degradation states occur stochastically. The quantitative evaluation of the CIs' robustness is performed by Monte Carlo simulation. The proposed methodological approach is illustrated by way of two case studies: the first concerns small-sized gas and electricity networks and a supervisory control and data acquisition (SCADA) system; the second considers a moderately large power distribution network, adapted from the IEEE 123-node test feeder. The large size of the second case study requires hierarchical clustering for performing the robustness analysis.
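As an illustration of the multi-state degradation idea, the sketch below simulates a single component moving stochastically through degradation states with assumed exponential transition rates; Monte Carlo replications then give the state distribution at a mission time. The number of states and the rates are invented for demonstration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
rates = [0.01, 0.02, 0.05]   # per-hour rates for state k -> k+1 (hypothetical)

def simulate_state(t_mission=100.0):
    """Return the degradation state (0 = nominal ... 3 = failed) at t_mission."""
    state, t = 0, 0.0
    while state < len(rates):
        t += rng.exponential(1.0 / rates[state])  # sojourn time in current state
        if t > t_mission:
            break
        state += 1
    return state

states = np.array([simulate_state() for _ in range(10_000)])
print(np.bincount(states, minlength=4) / states.size)  # state probabilities at t=100 h
```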
IEEE Systems Journal | 2017
Yi-Ping Fang; Nicola Pedroni; Enrico Zio
In this paper, we tackle the problem of searching for the most favorable pattern of link capacity allocation that makes a power transmission network resilient to cascading failures with limited investment costs. This problem is formulated within a combinatorial multiobjective optimization framework and tackled by evolutionary algorithms. Two different models of increasing complexity are used to simulate cascading failures in a network and quantify its resilience: a complex network model [namely, the Motter–Lai (ML) model] and a more detailed and computationally demanding power flow model [namely, the ORNL–Pserc–Alaska (OPA) model]. Both models are tested and compared in a case study involving the 400-kV French power transmission network. The results show that cascade-resilient networks tend to have a nonlinear capacity–load relation: in particular, heavily loaded components have smaller unoccupied portions of capacity, whereas lightly loaded links present larger unoccupied portions of capacity (in contrast with the linear capacity–load relation hypothesized in previous works in the literature). Most importantly, the optimal solutions obtained using the ML and OPA models exhibit consistent characteristics in terms of phase transitions in the Pareto fronts and link capacity allocation patterns. These results provide an incentive for the use of computationally cheap network-centric models for the optimization of cascade-resilient power network systems, given the advantages of their simplicity and scalability.
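A minimal sketch of the ML cascade model referred to above is given below (using networkx): node load is taken as betweenness centrality, capacity is proportional to the initial load, and overloaded nodes are removed iteratively after an attack. The network, the tolerance parameter alpha, and the attacked node are arbitrary demonstration choices; note that this sketch uses the linear capacity-load relation that the paper's optimized solutions depart from:

```python
import networkx as nx

def ml_cascade(g, attacked, alpha=0.2):
    """Return the number of surviving nodes after a Motter-Lai-style cascade."""
    g = g.copy()
    # Capacity proportional to the initial load (linear capacity-load relation).
    loads0 = nx.betweenness_centrality(g, normalized=False)
    cap = {node: (1.0 + alpha) * l for node, l in loads0.items()}
    g.remove_node(attacked)
    while True:
        load = nx.betweenness_centrality(g, normalized=False)
        overloaded = [node for node, l in load.items() if l > cap[node]]
        if not overloaded:
            break
        g.remove_nodes_from(overloaded)   # overload propagates the failure
    return g.number_of_nodes()

g0 = nx.barabasi_albert_graph(200, 2, seed=5)
hub = max(dict(g0.degree).items(), key=lambda kv: kv[1])[0]
print(ml_cascade(g0, hub))   # survivors after attacking the highest-degree node
```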
Risk Analysis | 2015
Yi-Ping Fang; Nicola Pedroni; Enrico Zio
Large-scale outages on real-world critical infrastructures, although infrequent, are increasingly disastrous to our society. In this article, we are primarily concerned with power transmission networks and we consider the problem of allocating generation to distributors by rewiring links, under the objectives of maximizing network resilience to cascading failure and minimizing investment costs. The combinatorial multiobjective optimization is carried out by a nondominated sorting binary differential evolution (NSBDE) algorithm. For each generators-distributors connection pattern considered in the NSBDE search, a computationally cheap, topological model of failure cascading in a complex network (namely, the Motter-Lai [ML] model) is used to simulate and quantify network resilience to cascading failures initiated by targeted attacks. The results on the 400-kV French power transmission network case study show that the proposed method allows us to identify optimal patterns of generators-distributors connection that improve cascading resilience at an acceptable cost. To verify the realistic character of the results obtained by the NSBDE with the embedded ML topological model, a more realistic but also more computationally expensive model of cascading failures is adopted, based on optimal power flow (namely, the ORNL-Pserc-Alaska model). The consistency of the results between the two models provides impetus for the use of topological, complex network theory models for the analysis and optimization of large infrastructures against cascading failure, with the advantages of simplicity, scalability, and low computational cost.
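For illustration, the sketch below shows one common XOR-based variant of the variation step in a binary differential evolution (the "BDE" in NSBDE): a trial bit vector is built by flipping, in one population member, a random fraction of the bits on which two other members differ, followed by binomial crossover. This is a generic variant under assumed settings, not necessarily the authors' exact operators, and the nondominated-sorting selection step is omitted:

```python
import numpy as np

rng = np.random.default_rng(6)

def bde_trial(pop, i, f=0.8, cr=0.5):
    """Binary-DE trial vector for member i of a 0/1 population (one common variant)."""
    n_pop, n_bits = pop.shape
    a, b, c = rng.choice([k for k in range(n_pop) if k != i], size=3, replace=False)
    diff = pop[b] ^ pop[c]                               # bitwise "difference" vector
    mask = (rng.random(n_bits) < f).astype(pop.dtype)    # flip a fraction f of differing bits
    mutant = pop[a] ^ (diff & mask)
    cross = rng.random(n_bits) < cr                      # binomial crossover with the target
    cross[rng.integers(n_bits)] = True                   # guarantee one gene from the mutant
    return np.where(cross, mutant, pop[i])

pop = rng.integers(0, 2, size=(10, 20))   # 10 candidate link-placement bit vectors
print(bde_trial(pop, 0))
```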