Hany S. Abdel-Khalik
Purdue University
Publications
Featured research published by Hany S. Abdel-Khalik.
Nuclear Science and Engineering | 2008
Hany S. Abdel-Khalik; Paul J. Turinsky; Matthew Anderson Jessee
This paper introduces the concepts and derives the mathematical theory of efficient subspace methods (ESMs) applied to the simulation of large-scale complex models, of which nuclear reactor simulation will serve as a test basis. ESMs are intended to advance the capabilities of predictive simulation to meet the functional requirements of future energy system simulation and to overcome the inadequacies of current design methods. The inadequacies addressed by ESM include the lack of a rigorous approach for comprehensive validation of the multitude of models and input data used in design calculations, and the lack of robust mathematical approaches for enhancing the fidelity of existing and advanced computational codes. To accomplish these tasks, the computational tools must be capable of performing the following three applications with both accuracy and efficiency: (a) sensitivity analysis of key system attributes with respect to various input data; (b) uncertainty quantification for key system attributes; and (c) adaptive simulation, also known as data assimilation, which adapts existing models based on the assimilated body of experimental information to achieve the best possible prediction accuracy. For large-scale computational models, these three applications are currently considered computationally infeasible when both the input data and the key system attributes or experimental information fields are large. This paper develops the mathematical theory of ESM-based algorithms for these three applications. The treatment is based on a linearized approximation of the associated computational models; extension to higher-order approximations is the focus of our ongoing research.
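A minimal numerical sketch of the core subspace idea, assuming a linearized model whose input-output operator has low effective rank; the model, its Jacobian, and all sizes below are hypothetical stand-ins, not the authors' code:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out, rank = 1000, 200, 5     # huge input space, low effective rank
    # Stand-in Jacobian of the linearized model; its low rank is the property
    # ESM-type methods exploit.
    J = rng.standard_normal((n_out, rank)) @ rng.standard_normal((rank, n_in))

    def forward(dx):
        """One hypothetical forward run of the linearized model."""
        return J @ dx

    # Probe the model with a few random input perturbations.
    n_probe = 20
    dX = rng.standard_normal((n_in, n_probe))
    dY = np.column_stack([forward(dx) for dx in dX.T])

    # An SVD of the collected responses reveals the effective rank and a basis
    # for the dominant response subspace.
    U, s, _ = np.linalg.svd(dY, full_matrices=False)
    eff_rank = int(np.sum(s > 1e-8 * s[0]))
    print("estimated effective rank:", eff_rank)   # recovers 5 here

A handful of such probes, each costing one forward run, bounds the effective number of degrees of freedom that sensitivity analysis, uncertainty quantification, and data assimilation then need to resolve.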
Nuclear Technology | 2012
Chris Kennedy; Cristian Rabiti; Hany S. Abdel-Khalik
Generalized perturbation theory (GPT) has been recognized as the most computationally efficient approach for performing sensitivity analysis of models with many input parameters, for which forward sensitivity analysis is computationally overwhelming. In critical systems, GPT involves the solution of the adjoint form of the eigenvalue problem with a response-dependent fixed source. Although conceptually simple to implement, most neutronics codes that can solve the adjoint eigenvalue problem do not have a GPT capability unless it was envisioned during code development. This manuscript introduces a reduced-order modeling approach based on subspace methods that requires only the solution of the fundamental adjoint equations, allows the generation of response sensitivities without setting up GPT equations, and provides an estimate of the error resulting from the reduction. Moreover, the new approach solves the eigenvalue problem independently of the number and type of responses, which allows for efficient computation of sensitivities when many responses are required. This paper introduces the theory and implementation details of the GPT-free approach and describes how the errors can be estimated as part of the analysis. The applicability is demonstrated by estimating the variations in the flux distribution everywhere in the phase space of a fast critical sphere and a high-temperature gas-cooled reactor prismatic lattice. The variations generated by the GPT-free approach are benchmarked against the exact variations generated by direct forward perturbations.
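A toy illustration of the GPT-free construction, assuming a generic eigenvalue problem A(p) phi = k phi whose parameters act through a low-dimensional family of matrices; the matrices, sizes, and thresholds are invented for the sketch, and only the fundamental forward/adjoint modes are ever used:

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, d = 40, 300, 5                 # flux size, parameters, true active dim
    A0 = rng.random((n, n))              # positive matrix: real dominant mode
    # Parameters act through a d-dimensional family of matrices, so keff
    # gradients live in a d-dimensional subspace (what GPT-free exploits).
    W = rng.standard_normal((m, d))
    D = rng.standard_normal((d, n, n)) * 0.01
    B = np.einsum('id,djk->ijk', W, D)   # dA/dp_i for each parameter i

    def fundamental_modes(p):
        """Forward and adjoint fundamental modes of A(p); no GPT solves."""
        A = A0 + np.tensordot(p, B, axes=1)
        w, vr = np.linalg.eig(A)
        i = np.argmax(w.real)
        wl, vl = np.linalg.eig(A.T)
        j = np.argmin(np.abs(wl - w[i]))
        return vr[:, i].real, vl[:, j].real

    def grad_k(p):
        """dk/dp from the standard eigenvalue perturbation formula."""
        phi, adj = fundamental_modes(p)
        return (B @ phi) @ adj / (adj @ phi)

    # Collect keff gradients at a few random points; their SVD exposes the
    # active subspace containing all parameter-induced variations.
    G = np.column_stack([grad_k(0.1 * rng.standard_normal(m)) for _ in range(8)])
    _, s, _ = np.linalg.svd(G, full_matrices=False)
    r = int(np.sum(s > 1e-8 * s[0]))
    print("active-subspace dimension:", r)   # recovers d = 5

Response sensitivities would then follow from a few forward perturbations along the r retained directions, with the discarded singular values indicating the error introduced by the reduction.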
Nuclear Technology | 2012
Zeyun Wu; Qiong Zhang; Hany S. Abdel-Khalik
A new variant of a hybrid Monte Carlo-deterministic approach for simulating particle transport problems is presented and compared to the SCALE FW-CADIS approach. The new approach, denoted the SUBSPACE approach, improves the selection of the importance maps in order to reduce the computational overhead required to achieve global variance reduction, that is, the uniform reduction of variance everywhere in the phase space. The intended applications are reactor analysis problems where detailed responses for all fuel assemblies are required everywhere in the reactor core. Like FW-CADIS, the SUBSPACE approach utilizes importance maps obtained from deterministic adjoint models to derive automatic weight-window biasing. Unlike FW-CADIS, the SUBSPACE approach does not employ flux-based weighting of the adjoint source term. Instead, it utilizes pseudoresponses generated with random weights to identify the correlations between the importance maps, which can be exploited to reduce the computational time required for global variance reduction. Numerical experiments, serving as proof of principle, are presented to compare the SUBSPACE and FW-CADIS approaches in terms of the global reduction in standard deviation and the associated figures of merit for representative nuclear reactor assembly and core models.
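A linear-algebra sketch of why random-weight pseudoresponses work, assuming the bank of adjoint importance maps is nearly low rank (as it is for strongly correlated assembly-wise responses); the maps here are random stand-ins, whereas in the actual method each pseudoresponse costs one deterministic adjoint solve:

    import numpy as np

    rng = np.random.default_rng(2)
    n_cells, n_resp = 5000, 400          # phase-space cells, assembly responses
    # Hypothetical bank of adjoint importance maps, one column per response;
    # strong correlations make it close to rank 8.
    maps = rng.random((n_cells, 8)) @ rng.random((8, n_resp))

    # Pseudoresponses: random linear combinations of the true responses. Only
    # k adjoint solves are needed, since maps @ w is itself an importance map.
    k = 12
    pseudo = maps @ rng.standard_normal((n_resp, k))

    # The SVD of the pseudoresponse maps captures the whole bank's range, so a
    # few representative maps suffice for weight-window biasing everywhere.
    _, s, _ = np.linalg.svd(pseudo, full_matrices=False)
    r = int(np.sum(s > 1e-10 * s[0]))
    print("maps needed for global coverage:", r, "instead of", n_resp)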
Nuclear Science and Engineering | 2011
Matthew Anderson Jessee; Paul J. Turinsky; Hany S. Abdel-Khalik
Computational capability has been developed to adjust multigroup neutron cross sections, including self-shielding correction factors, to improve the fidelity of boiling water reactor (BWR) core modeling and simulation. The method involves propagating multigroup neutron cross-section uncertainties through various BWR computational models to evaluate uncertainties in key core attributes such as core keff, nodal power distributions, thermal margins, and in-core detector readings. Uncertainty-based inverse theory methods are then employed to adjust the multigroup cross sections to minimize the disagreement between BWR core modeling predictions and observed (i.e., measured) plant data. For this paper, observed plant data are virtually simulated in the form of perturbed three-dimensional nodal power distributions, with the perturbations sized to represent actual discrepancies between predictions and real plant data. The major focus of this work is to efficiently propagate multigroup neutron cross-section uncertainty through BWR lattice physics and core simulator calculations. The data adjustment equations are developed using a subspace approach that exploits the ill-conditioning of the multigroup cross-section covariance matrix to minimize the computational and storage burden. Tikhonov regularization is also employed to improve the conditioning of the data adjustment equations. Expressions are also provided for the posterior covariance matrices of both the multigroup cross-section and core attribute uncertainties.
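The adjustment step can be written compactly in generalized least-squares form. A minimal sketch with random stand-ins for the sensitivities, covariances, and misfit, showing only the Tikhonov-stabilized update and posterior covariance (the paper's subspace compression of the ill-conditioned covariance matrix is omitted):

    import numpy as np

    rng = np.random.default_rng(3)
    n_xs, n_obs = 500, 60                           # xs parameters, observations
    S = rng.standard_normal((n_obs, n_xs)) * 0.01   # attribute sensitivities
    C = np.diag(rng.random(n_xs) * 1e-4)            # prior xs covariance
    R = np.eye(n_obs) * 1e-6                        # measurement covariance
    d = rng.standard_normal(n_obs) * 1e-3           # measured-minus-predicted

    lam = 1e-8                                 # Tikhonov parameter (assumed value)
    K = S @ C @ S.T + R + lam * np.eye(n_obs)  # regularized innovation matrix
    dxs = C @ S.T @ np.linalg.solve(K, d)      # cross-section adjustment
    C_post = C - C @ S.T @ np.linalg.solve(K, S @ C)  # posterior xs covariance
    print("largest adjustment:", np.abs(dxs).max())

The posterior covariance of the core attributes then follows by propagating C_post through the sensitivities, i.e., S @ C_post @ S.T.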
Journal of Nuclear Science and Technology | 2017
Ryota Katano; Tomohiro Endo; Akio Yamamoto; Mohammad Abdo; Hany S. Abdel-Khalik
We propose a method for estimating the sensitivity coefficients of core neutronics parameters based on a multi-level reduced-order modeling approach. The idea is to use lower-level models to identify the dominant input parameter variations, constrained to the so-called active subspace, which are then employed to determine the sensitivity coefficients of the core neutronics parameters. In our implementation, the lower-level model is represented by two-dimensional assembly calculations, which are employed in the preparation of the few-group cross sections for core-wide calculations. The active subspace basis is estimated using the singular value decomposition of the sensitivity matrices of the assembly neutronics parameters. In a numerical verification calculation, sensitivity coefficients of core characteristics for a typical three-loop PWR equilibrium cycle are estimated using both the proposed method and the direct method. Comparison of the two results shows that the proposed method reproduces the direct-method results well at lower computational cost. These verification calculations confirm the applicability of the proposed method to practical light water reactor analysis.
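A small sketch of the two-level idea, assuming the core gradient lies in the active subspace identified from assembly-level sensitivities; the sensitivity matrix, the core model, and all sizes are hypothetical stand-ins:

    import numpy as np

    rng = np.random.default_rng(4)
    n_in = 200
    # Lower-level (assembly) sensitivity matrix; in the paper this comes from
    # 2-D assembly calculations, here it is a random low-rank stand-in.
    S_low = rng.standard_normal((30, 5)) @ rng.standard_normal((5, n_in))

    # Active subspace: dominant right singular vectors of assembly sensitivities.
    _, s, Vt = np.linalg.svd(S_low, full_matrices=False)
    r = int(np.sum(s > 1e-10 * s[0]))
    U = Vt[:r].T                                  # (n_in x r) basis

    g_true = S_low.T @ np.ones(30)   # toy core gradient, lies in the subspace
    def core_response(x):
        """Hypothetical expensive core-wide calculation (stand-in)."""
        return float(g_true @ x)

    # r directional finite differences (r core runs) instead of n_in runs,
    # chained back to full-space sensitivity coefficients.
    h, x0 = 1e-6, np.zeros(n_in)
    f0 = core_response(x0)
    dR = np.array([(core_response(x0 + h * U[:, i]) - f0) / h for i in range(r)])
    sens_full = U @ dR
    print("max error vs direct gradient:", np.abs(sens_full - g_true).max())

Only r core-wide runs are needed instead of one per input parameter, which is where the cost reduction comes from.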
25th International Conference on Nuclear Engineering | 2017
Dongli Huang; Hany S. Abdel-Khalik
This work aims to develop an uncertainty analysis methodology for the propagation and quantification of the effects of nuclear cross-section uncertainties on important core-wide attributes, such as the power distribution and the core critical eigenvalue. Given the computationally taxing nature of this endeavor, our goal is to develop a methodology that preserves the accuracy of brute-force sampling techniques for uncertainty quantification while realizing the efficiency of deterministic techniques. To achieve that, a reduced-order modeling (ROM) approach is proposed to deal with the enormous size of the uncertainty space, comprising all the few-group cross-section parameters required in core-wide simulation. The idea is to generate a compressed representation of the uncertainty space, as represented by a covariance matrix, that renders sampling techniques a computationally feasible option for quantifying and prioritizing the various sources of uncertainty. While the proposed developments are general to any reactor physics computational sequence, we customize our approach to the NESTLE [1]-TRITON [2] computational sequence, which serves as a demonstrative tool for the implementation of our approach. NESTLE is a core-wide simulation code that relies on few-group cross sections to calculate core-wide attributes over multiple cycles of depletion. Its input cross sections are generated using a matrix of conditions evaluated with a lattice physics code, which in our implementation is the TRITON module of ORNL's SCALE suite. This manuscript presents one of the early steps toward this goal. Specifically, we focus on the development of the algorithms for determining the reduced dimension of the covariance matrix. A numerical experiment using the TRITON software demonstrates how the reduction is achieved.
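A minimal sketch of the compression step, assuming the few-group covariance matrix is nearly low rank; the matrix itself is a random stand-in, and the 99.99% variance cutoff is an assumed, not prescribed, truncation criterion:

    import numpy as np

    rng = np.random.default_rng(5)
    n = 1000                              # few-group parameters (hypothetical)
    L = rng.standard_normal((n, 20)) * 0.01
    C = L @ L.T                           # nearly low-rank covariance stand-in

    # Reduced dimension: eigenvalues needed to capture 99.99% of total variance.
    w, V = np.linalg.eigh(C)
    w, V = w[::-1], V[:, ::-1]            # sort descending
    r = int(np.searchsorted(np.cumsum(w) / w.sum(), 0.9999) + 1)
    print("reduced covariance dimension:", r, "of", n)

    # Sampling for UQ now draws r standard normals per realization instead of n.
    Z = rng.standard_normal((r, 1000))
    samples = V[:, :r] @ (np.sqrt(w[:r])[:, None] * Z)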
Archive | 2011
Paul J. Turinsky; Hany S. Abdel-Khalik; Tracy E. Stover
An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept's core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found that maximizes the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data that are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, often increasing the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desirable to have an optimized experiment that provides the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies, or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself because, with improved data, the reactor concept can be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. The representativity of the experiment to the design concept is quantitatively determined. A technique is then established to assimilate these data and produce a posteriori uncertainties on key attributes and responses of the design concept. Several experiment perturbations based on engineering judgment are used to demonstrate these methods and also serve as an initial generation for the optimization problem. Finally, an optimization technique is developed that simultaneously arrives at an optimized experiment and an optimized reactor design. Solution of this problem is made possible by the simulated annealing algorithm. The optimization examined in this work maximizes the reactor cost savings associated with the modified design made possible by using the design margin gained through reduced basic nuclear data uncertainties. Cost values for experiment design specifications and reactor design specifications are established and used to compute the total savings by comparing the a posteriori reactor cost to the a priori cost plus the cost of the experiment. The optimized solution arrives at the maximum cost savings.
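A bare-bones sketch of the outer simulated annealing loop, with a toy quadratic standing in for the entire assimilate-and-reoptimize chain; the design variables, cooling schedule, and cost model are all invented for illustration:

    import numpy as np

    rng = np.random.default_rng(6)

    def savings(spec):
        """Hypothetical stand-in for the full chain: model the experiment,
        assimilate its data, re-optimize the reactor, and return
        (a priori cost - a posteriori cost - experiment cost)."""
        return 1.0 - np.sum((spec - 0.3) ** 2)

    # Plain simulated annealing over experiment design specifications (sketch);
    # the work's actual design variables and cost models are more elaborate.
    spec = rng.random(5)
    val = savings(spec)
    best_spec, best_val = spec.copy(), val
    T = 1.0
    for _ in range(5000):
        cand = spec + 0.05 * rng.standard_normal(spec.size)
        cand_val = savings(cand)
        # Accept improvements always; accept worse designs with Boltzmann prob.
        if cand_val > val or rng.random() < np.exp((cand_val - val) / T):
            spec, val = cand, cand_val
            if val > best_val:
                best_spec, best_val = spec.copy(), val
        T *= 0.999   # geometric cooling schedule
    print("best savings found:", round(best_val, 4))   # approaches 1.0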
Transactions of the American Nuclear Society | 2010
Hany S. Abdel-Khalik; Ralph A. Nelson; Brian M. Adams
Nuclear Engineering and Design | 2015
Youngsuk Bang; Hany S. Abdel-Khalik; Matthew Anderson Jessee; Ugur Mertyurek
Archive | 2012
Zeyun Wu; Chris Kennedy; Hany S. Abdel-Khalik