Harish Agarwal
General Electric
Publications
Featured research published by Harish Agarwal.
45th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics & Materials Conference | 2004
Harish Agarwal; John E. Renaud; Jason C. Lee; Layne T. Watson
Reliability-based design optimization is a methodology for finding optimized designs that are characterized by a low probability of failure. Primarily, reliability-based design optimization consists of optimizing a merit function while satisfying reliability constraints. The reliability constraints are constraints on the probability of failure corresponding to each of the failure modes of the system, or a single constraint on the system probability of failure. The probability of failure is usually estimated by performing a reliability analysis. During the last few years, a variety of different formulations have been developed for reliability-based design optimization. Traditionally, these have been formulated as double-loop (nested) optimization problems. The upper-level optimization loop generally involves optimizing a merit function subject to reliability constraints, while the lower-level optimization loop(s) compute the probabilities of failure corresponding to the failure mode(s) that govern the system failure. This formulation is, by nature, computationally intensive. Researchers have proposed sequential strategies to address this issue, in which the deterministic optimization and the reliability analysis are decoupled and the process is performed iteratively until convergence is achieved. These methods, though attractive for obtaining a workable reliable design at considerably reduced computational cost, often lead to premature convergence and therefore yield spurious optimal designs. In this paper, a novel unilevel formulation for reliability-based design optimization is developed. In the proposed formulation, the lower-level optimization (the evaluation of the reliability constraints in the double-loop formulation) is replaced by its corresponding first-order Karush-Kuhn-Tucker (KKT) necessary optimality conditions at the upper-level optimization. Such a replacement is equivalent to solving the original nested optimization problem if the constraint qualification conditions are satisfied. It is shown through test problems that the proposed formulation is numerically robust (stable) and computationally efficient compared with existing approaches for reliability-based design optimization.
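To fix ideas, a minimal sketch of the two formulations being contrasted is given below in generic RBDO notation (design variables d, standard-normal variables u, limit states g_i, target reliability indices beta_i^t); it is illustrative and not necessarily the paper's exact formulation. The double-loop (nested) problem, in its reliability-index form, is

```latex
\begin{aligned}
\min_{d}\;    & f(d) \\
\text{s.t.}\; & \beta_i(d) \ge \beta_i^{t}, \qquad i = 1,\dots,m, \\
\text{where}\;& \beta_i(d) = \min_{u} \|u\| \quad \text{s.t.}\; g_i(d,u) = 0 .
\end{aligned}
```

The unilevel idea replaces each inner minimization by its first-order KKT conditions, with the most probable points u_i promoted to upper-level variables:

```latex
\begin{aligned}
\min_{d,\,u_1,\dots,u_m}\; & f(d) \\
\text{s.t.}\;              & \|u_i\| \ge \beta_i^{t}, \qquad g_i(d,u_i) = 0, \\
                           & u_i \,\|\nabla_u g_i(d,u_i)\| + \|u_i\|\, \nabla_u g_i(d,u_i) = 0,
                             \qquad i = 1,\dots,m,
\end{aligned}
```

where the last constraint is the stationarity condition of the inner problem with the Lagrange multiplier eliminated; it forces u_i to be collinear with the negative gradient of g_i.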
AIAA Journal | 2006
Harish Agarwal; John E. Renaud
Traditionally, reliability-based design optimization (RBDO) has been formulated as a nested optimization problem. The inner loop generally involves solving optimization problems to compute the probabilities of failure of the critical failure modes, and the outer loop performs the optimization by varying the decision variables. Such formulations are by nature computationally intensive, requiring numerous function and constraint evaluations. To alleviate this problem, researchers have developed iterative decoupled RBDO approaches. These methods perform deterministic optimization and reliability assessment in a sequential manner until a consistent reliability-based design is obtained. The sequential methods are attractive because a consistent reliable design can be obtained at considerably lower computational cost. However, the designs obtained using these decoupled approaches are not guaranteed to be the true solution. A new decoupled method for RBDO is developed in this investigation. Postoptimal sensitivities of the most probable point (MPP) of failure with respect to the decision variables are introduced to update the MPPs during the deterministic optimization phase of the proposed approach. A damped Broyden-Fletcher-Goldfarb-Shanno method is used to significantly reduce the cost of obtaining these sensitivities. It is the use of postoptimal sensitivities that differentiates this new decoupled RBDO approach from previous efforts.
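For context, the sketch below shows a generic sequential (decoupled) RBDO loop in which deterministic optimization and reliability analysis alternate until the design and the MPPs are mutually consistent. The linear limit state and unit-variance normal variables are illustrative assumptions, and the paper's distinguishing feature, updating the MPPs through postoptimal sensitivities with a damped BFGS approximation, is not reproduced in this toy loop.

```python
# Minimal sketch of a sequential (decoupled) RBDO loop; problem data are illustrative.
import numpy as np
from scipy.optimize import minimize

beta_t = 3.0                                   # target reliability index

def f(d):                                      # merit function to minimize
    return d[0] + d[1]

def g(d, u):                                   # limit state; X_i ~ N(d_i, 1), so x = d + u
    x = d + u                                  # failure when g <= 0
    return x[0] + x[1] - 5.0

def inverse_mpp(d):
    """Reliability analysis (PMA style): u* = argmin_u g(d, u) subject to ||u|| = beta_t."""
    cons = {"type": "eq", "fun": lambda u: u @ u - beta_t**2}
    return minimize(lambda u: g(d, u), x0=np.array([-1.0, -2.0]),
                    method="SLSQP", constraints=[cons]).x

d = np.array([4.0, 4.0])                       # initial design
u_star = np.zeros(2)                           # initial MPP estimate
for cycle in range(20):
    # Deterministic optimization: enforce feasibility at the current MPP estimate.
    cons = {"type": "ineq", "fun": lambda dd: g(dd, u_star)}
    d = minimize(f, d, method="SLSQP", constraints=[cons]).x
    # Reliability analysis: update the MPP at the new design.
    u_new = inverse_mpp(d)
    if np.linalg.norm(u_new - u_star) < 1e-4:
        break
    u_star = u_new

print("design:", d, " attained reliability index:", (d.sum() - 5.0) / np.sqrt(2.0))
```

Because the limit state here is linear, the loop converges in two cycles; the attained reliability index equals the target.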
44th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference | 2003
Harish Agarwal; John E. Renaud; Evan L. Preston
In the last few decades, the exponential growth in computational performance capability has led to the development of large-scale simulation tools for design. Systems designed using such simulation tools can fail in service if the uncertainty of the simulation tool's performance predictions is not accounted for. This research focuses on how uncertainty can be quantified in multidisciplinary systems analysis subject to epistemic uncertainty associated with the disciplinary design tools and input parameters. Evidence theory is used to quantify uncertainty in terms of the uncertain measures of belief and plausibility. After the uncertainty has been quantified mathematically, the designer seeks the optimum design under uncertainty. The measures of uncertainty provided by evidence theory are discontinuous functions. Such nonsmooth functions cannot be used in traditional gradient-based optimizers because the sensitivities of the uncertain measures are not properly defined. In this research, surrogate models are used to represent the uncertain measures as continuous functions. A formal trust-region-managed sequential approximate optimization approach is used to drive the optimization process. The trust region is managed by a trust-region ratio based on the performance of the Lagrangian, a penalty function of the objective and the constraints. The methodology is illustrated in application to multidisciplinary test problems.
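The two evidence-theory measures mentioned above can be computed directly from a body of evidence. The sketch below does so for a one-dimensional toy example with interval focal elements and basic probability assignments (BPAs); all numbers are illustrative.

```python
# Belief sums the mass of focal elements entirely inside the failure set;
# plausibility sums the mass of those that merely intersect it.

# Focal elements for an uncertain parameter x: (lower, upper, BPA mass).
focal_elements = [(0.0, 2.0, 0.3), (1.5, 3.0, 0.5), (2.5, 4.0, 0.2)]

def limit_state(x):
    return 2.2 - x            # failure set: g(x) <= 0, i.e. x >= 2.2

def belief_plausibility(focal_elements, g):
    bel = pl = 0.0
    for lo, hi, m in focal_elements:
        g_lo, g_hi = g(lo), g(hi)                 # g is monotonic here, so its extremes
        g_min, g_max = min(g_lo, g_hi), max(g_lo, g_hi)  # over the interval are at the ends
        if g_max <= 0.0:                          # element entirely in the failure set
            bel += m
        if g_min <= 0.0:                          # element intersects the failure set
            pl += m
    return bel, pl

bel, pl = belief_plausibility(focal_elements, limit_state)
print(f"Belief(failure) = {bel:.2f}, Plausibility(failure) = {pl:.2f}")
# Belief <= true (unknown) probability of failure <= Plausibility.
```

Because belief and plausibility jump whenever a focal element crosses the failure boundary, they are the discontinuous, nonsmooth functions that motivate the surrogate-based optimization strategy described above.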
Structure and Infrastructure Engineering | 2006
Shawn E. Gano; John E. Renaud; Harish Agarwal; Andres Tovar
Competitive marketplaces have driven the need for simulation-based design optimization to produce efficient and cost-effective designs. However, such design practices typically do not take into account model uncertainties or manufacturing tolerances. Such designs may lie on failure-driven constraints and are characterized by a high probability of failure. Reliability-based design optimization (RBDO) methods have been developed to obtain designs that optimize a merit function while ensuring a target reliability level is satisfied. Unfortunately, these methods are notorious for the high computational expense they require to converge. In this research variable-fidelity methods are used to reduce the cost of RBDO. Variable-fidelity methods use a set of models with varying degrees of fidelity and computational expense to aid in reducing the cost of optimization. The variable-fidelity RBDO methodology developed in this investigation is demonstrated on two test cases: a nonlinear analytic problem and a high-lift airfoil design problem. For each of these problems the proposed method shows considerable savings for performing RBDO as compared with standard approaches.
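A common model-management device in variable-fidelity optimization is to correct a cheap low-fidelity model so that it matches the expensive high-fidelity model (in value and gradient) at the current design, and then optimize the corrected surrogate within a trust region. The sketch below illustrates a first-order additive correction with two analytic stand-in models; the specific correction and trust-region logic used in the paper may differ.

```python
import numpy as np

def f_hi(x):                     # "expensive" high-fidelity response (placeholder)
    return np.sin(x) + 0.05 * x**2

def f_lo(x):                     # "cheap" low-fidelity response (placeholder)
    return x - x**3 / 6.0        # small-angle approximation of sin(x)

def first_order_additive_correction(x0, h=1e-6):
    """Return a corrected low-fidelity model matching f_hi at x0 to first order."""
    delta0 = f_hi(x0) - f_lo(x0)
    ddelta0 = ((f_hi(x0 + h) - f_lo(x0 + h)) - (f_hi(x0 - h) - f_lo(x0 - h))) / (2 * h)
    return lambda x: f_lo(x) + delta0 + ddelta0 * (x - x0)

x0 = 1.0
f_corr = first_order_additive_correction(x0)
print(f_hi(x0), f_corr(x0))      # identical at the trust-region center
print(f_hi(1.2), f_corr(1.2))    # close nearby; trust-region logic limits the step
```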
43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference | 2002
Harish Agarwal; John E. Renaud
This paper investigates reliability-based design optimization (RBDO) using response surface approximations (RSAs) [1, 2] for multidisciplinary design optimization (MDO). In RBDO the constraints are variational (reliability based), since the design variables and the system parameters can vary and can be subject to uncertainties [18]. For these problems the objective is to minimize a cost function while satisfying reliability-based constraints. This class of problems is referred to as reliability-based multidisciplinary design optimization (RBMDO) problems [5]. The reliability constraints, which can be formulated in terms of reliability indices or in terms of probabilities of failure, themselves represent optimization problems and can be very expensive to evaluate for large-scale multidisciplinary problems. Response surface approximations of the constraints are used to estimate the reliability indices or probabilities of failure when solving an approximate optimization problem using the first-order reliability method (FORM). In this research, RSAs are integrated within RBDO to significantly reduce the computational cost of traditional RBDO. The proposed methodology is compared to traditional RBDO in application to multidisciplinary test problems, and the computational savings and benefits are discussed.
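The sketch below illustrates the RSA-plus-FORM idea in its simplest form: sample an expensive limit state, fit a quadratic response surface by least squares, and run the FORM minimum-distance search on the cheap surrogate. The limit state and sample plan are illustrative assumptions, not the paper's test problems.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def g_true(u):                        # "expensive" limit state in standard-normal space (placeholder)
    return 4.0 - u[0]**2 / 2.0 - u[1]

# Sample the true limit state on a small grid around the mean point.
pts = np.array([[a, b] for a in (-2.0, 0.0, 2.0) for b in (-2.0, 0.0, 2.0)])
vals = np.array([g_true(p) for p in pts])

# Fit a full quadratic RSA: g_hat(u) = c0 + c1*u1 + c2*u2 + c3*u1^2 + c4*u2^2 + c5*u1*u2.
basis = lambda u: np.array([1.0, u[0], u[1], u[0]**2, u[1]**2, u[0]*u[1]])
A = np.array([basis(p) for p in pts])
coef, *_ = np.linalg.lstsq(A, vals, rcond=None)
g_hat = lambda u: float(basis(u) @ coef)

# FORM on the surrogate: beta = min ||u|| subject to g_hat(u) = 0.
# Note: FORM searches can have multiple local MPPs, so the start point matters.
cons = {"type": "eq", "fun": g_hat}
res = minimize(lambda u: u @ u, x0=np.array([2.0, 1.0]), method="SLSQP", constraints=[cons])
beta = np.linalg.norm(res.x)
print("reliability index:", beta, " Pf estimate:", norm.cdf(-beta))
```

Every evaluation inside the FORM search hits only the fitted polynomial, which is the source of the computational savings the paper reports.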
46th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference | 2005
Andres Tovar; Neal M. Patel; John E. Renaud; Harish Agarwal
The hybrid cellular automaton (HCA) method has been successfully applied to topology optimization using a uniform strain energy density distribution approach. In this work, a new set of design rules is derived from the first order optimality conditions of a multiobjective problem. In this new formulation, the final topology is proved to minimize both mass and strain energy. In the HCA algorithm, local design rules based on the CA paradigm are used to efficiently drive the design to optimality. In addition to the control-based techniques previously introduced, a new ratio technique is derived in this investigation. This work also compares the performance of the control strategies and the ratio technique.
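The flavor of a control-based local update rule of the kind HCA uses can be sketched as follows: each cell compares a neighborhood-averaged strain energy density against a setpoint and adjusts its material density proportionally. The strain energy field below is synthetic; a real HCA loop would recompute it from a finite element analysis at every iteration, and the paper derives its specific rules (including the ratio technique) from optimality conditions rather than this simple proportional form.

```python
import numpy as np

def hca_update(density, strain_energy_density, setpoint, gain=0.2,
               rho_min=1e-3, rho_max=1.0):
    # Neighborhood average over the von Neumann neighborhood (cell + 4 neighbors).
    padded = np.pad(strain_energy_density, 1, mode="edge")
    sbar = (padded[1:-1, 1:-1] + padded[:-2, 1:-1] + padded[2:, 1:-1]
            + padded[1:-1, :-2] + padded[1:-1, 2:]) / 5.0
    # Proportional local rule: add material where the averaged SED exceeds the setpoint.
    new_density = density + gain * (sbar - setpoint)
    return np.clip(new_density, rho_min, rho_max)

rho = np.full((5, 5), 0.5)                       # initial uniform density field
sed = np.random.default_rng(0).random((5, 5))    # synthetic strain energy density
rho = hca_update(rho, sed, setpoint=0.5)
print(rho.round(2))
```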
ASME 2010 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference | 2010
Srikanth Akkaram; Harish Agarwal; Amit Kale; Liping Wang
The use of model-based simulation in engineering often necessitates estimating model parameters from physical experiments or field data. This class of problems is referred to as inverse problems in the literature, and two significant challenges in applying inverse modeling technology to practical engineering problems are (a) the computational cost of the inverse solution for complex transient simulation models that take a long time to execute, and (b) the ability of the instrumentation to shed light on the model parameters being estimated. This paper develops a methodology for the use of transient meta-modeling techniques in data-matching applications to address the computational cost. The transient meta-models are constructed using an SVD/PCA approach to identify the key transient signature patterns from a dimension-reduction perspective. The accuracy of the inverse modeling method with the direct simulation model and with the meta-model is compared. The paper concludes with a methodology to optimally design an experiment and collect data so as to improve the conditioning of the inverse problem and the confidence with which the model parameters are estimated.
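The sketch below illustrates the SVD/PCA transient meta-modeling idea: transient responses computed at a few parameter settings are stacked as snapshot columns, the dominant temporal modes are extracted with an SVD, and a simple surrogate maps the parameter to the modal coefficients so new transients can be predicted cheaply. The "simulation" here is a synthetic first-order response; the real application would use the transient solver.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 200)

def simulate(tau):                       # placeholder transient simulation
    return 1.0 - np.exp(-t / tau)

taus_train = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
snapshots = np.column_stack([simulate(tau) for tau in taus_train])   # shape (200, 5)

# SVD of the snapshot matrix; keep the k dominant temporal modes.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
k = 3
modes = U[:, :k]                                     # temporal basis, shape (200, k)
coeffs = modes.T @ snapshots                         # modal coefficients, shape (k, 5)

# Surrogate: quadratic polynomial fit of each modal coefficient versus tau.
fits = [np.polyfit(taus_train, coeffs[i], deg=2) for i in range(k)]

def metamodel(tau):                                  # reduced-order transient prediction
    c = np.array([np.polyval(f, tau) for f in fits])
    return modes @ c

tau_new = 2.5
err = np.max(np.abs(metamodel(tau_new) - simulate(tau_new)))
print("max prediction error at tau = 2.5:", err)
```

Inside an inverse (data-matching) loop, the cheap metamodel replaces the direct transient simulation, which is where the computational savings come from.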
49th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference; 16th AIAA/ASME/AHS Adaptive Structures Conference; 10t | 2008
Harish Agarwal; Srikanth Akkaram; Swapnil Shetye; Al McCallum
This paper describes a reduced order model (ROM) developed to predict changes in gas turbine tip clearance, the radial distance between the blade tip and the stator case. The clearance is estimated by modeling the growth of the sub-components during engine operating conditions. Gas turbine clearances vary significantly during different engine startup and shutdown conditions because of the time-constant mismatch between interacting sub-components (e.g., rotor and stator). The ROM is developed from full-fidelity finite element simulation data and can predict the clearance variation as a function of engine thermodynamic conditions. Because of their real-time execution capability, these models can be used for preliminary design, clearance control, and operational variation studies. The methodology is demonstrated on transient high-pressure compressor stator growth and high-pressure turbine transient clearance data.
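The physical effect the ROM has to capture can be illustrated with a toy model: the rotor and the stator case respond to a transient with different thermal time constants, so their growths, and hence the tip clearance, do not track each other during a step change in operating condition. The numbers below are purely illustrative; the ROM described above is trained on full-fidelity finite element data, not on this first-order model.

```python
import numpy as np

t = np.linspace(0.0, 600.0, 601)            # time after a step change in condition, s
cold_clearance = 1.0                        # assumed cold-build clearance, mm

def first_order_growth(steady_growth, tau):
    return steady_growth * (1.0 - np.exp(-t / tau))

rotor_growth = first_order_growth(steady_growth=0.6, tau=30.0)    # fast response
stator_growth = first_order_growth(steady_growth=0.7, tau=150.0)  # slower thermal response

clearance = cold_clearance + stator_growth - rotor_growth
print("minimum (pinch) clearance: %.3f mm at t = %.0f s"
      % (clearance.min(), t[clearance.argmin()]))
print("steady-state clearance:    %.3f mm" % clearance[-1])
```

The transient "pinch" in the printed output is the clearance excursion that motivates real-time clearance models for control and operational studies.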
Volume 1: Aircraft Engine; Ceramics; Coal, Biomass and Alternative Fuels; Manufacturing, Materials and Metallurgy; Microturbines and Small Turbomachinery | 2008
Harish Agarwal; Amit Kale; Srikanth Akkaram; Mahadevan Balasubramaniam; Susan Ebacher; Paul Gilleberto
A framework demonstrating the application of inverse modeling technology for engine performance data matching is presented. Transient aero-thermodynamic cycle models are used to simulate engine performance and control characteristics over the entire flight envelope. These models are used not only for engine design and certification but also to provide performance guarantees to the customer and for engine diagnostics. Therefore, it is extremely important that these models are able to accurately predict the performance metrics of interest. Accuracy of these models can be improved by fine-tuning model parameters so that the model output best matches the flight test data. The performance of an aircraft engine is fine-tuned from several sensor observations, e.g., exhaust gas temperature, fuel flow, and fan speed. These observations vary with parameters such as power level and core speed and with operating conditions such as altitude, inlet conditions (temperature and pressure), and Mach number, and they are used in conjunction with a transient performance simulation model to assess engine performance. This is normally achieved through an iterative manual approach that requires considerable expert judgment. Simulating transient performance characteristics often requires an engineer to estimate model parameters by matching the model response to engine sensor data. Such an estimation problem can be posed using inverse modeling technology. One of the main challenges in the application of inverse modeling for parameter estimation is that the problem can be ill-posed, which leads to instability and non-uniqueness of the solution. The inverse method employed here for parameter estimation provides a solution for both well-posed and ill-posed problems. Sensitivity analysis can be used to better pose the data-matching problem. A singular value decomposition (SVD) technique is used to address the ill-posed nature of the inverse problem, which is solved as a finite-dimensional nonlinear optimization problem. Typically, the transient response is highly nonlinear, and it may not be possible to match the whole transient simultaneously. This paper extends the framework on transient inverse modeling developed in [1] for engine transient performance applications. A variable weighting mechanism allows different weights to be assigned to different sensors. This gives better control over the data matching, helps identify drift in parameter values over time, and can point toward incorrect modeling assumptions. The application of the inverse methodology is demonstrated on a single-spool non-afterburning engine and a commercial aviation engine model.
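A minimal sketch of weighted, SVD-regularized data matching of the kind described above is given below: parameters are estimated by matching simulated sensor responses to observations, with a diagonal weight per sensor and a truncated SVD of the weighted Jacobian to handle ill-posedness. The two-output "engine model" is a synthetic, deliberately near-singular stand-in; the real application uses the transient performance model.

```python
import numpy as np

def model(theta):                             # placeholder simulation with 2 sensor outputs
    a, b = theta
    return np.array([a + 0.5 * b, 2.0 * a + 1.001 * b])   # nearly collinear -> ill-posed

theta_true = np.array([1.0, 2.0])
y_obs = model(theta_true)
W = np.diag([1.0, 0.2])                       # sensor weights (trust sensor 1 more)

def jacobian(theta, h=1e-6):
    J = np.zeros((2, 2))
    for j in range(2):
        dp = np.zeros(2)
        dp[j] = h
        J[:, j] = (model(theta + dp) - model(theta - dp)) / (2 * h)
    return J

theta = np.array([0.0, 0.0])                  # initial parameter guess
for it in range(20):
    r = W @ (y_obs - model(theta))            # weighted residual
    J = W @ jacobian(theta)
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > 1e-3 * s[0]                    # truncate near-zero singular values
    step = Vt[keep].T @ ((U[:, keep].T @ r) / s[keep])
    theta = theta + step
    if np.linalg.norm(step) < 1e-10:
        break

# With near-collinear sensors the parameters are not uniquely identifiable; the
# truncated-SVD solution is the stable minimum-norm fit, so the data are matched
# well even though theta need not equal theta_true.
print("estimated parameters:", theta)
print("residual norm:", np.linalg.norm(y_obs - model(theta)))
```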
49th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference; 16th AIAA/ASME/AHS Adaptive Structures Conference; 10t | 2008
Amit Kale; Harish Agarwal; Srikanth Akkaram; Mahadevan Balasubramaniam; Susan Ebacher; Paul Gilleberto
The paper develops a framework for the application of inverse modeling techniques to develop accurate simulation models for aircraft engine performance characteristics. Typically, the performance of an aircraft engine is fine-tuned from several sensor observations, e.g., exhaust gas temperature, fuel flow, and fan speed. These observations vary with parameters such as power level and core speed and with operating conditions such as ambient temperature, pressure, and Mach number, and they are used in conjunction with a transient performance simulation model to assess engine performance. Transient aero-thermodynamic cycle models have been developed to simulate engine performance and control characteristics over the entire flight envelope. Accuracy of these models can be improved by fine-tuning model parameters so that the model output best matches the flight test data. The application of inverse modeling for parameter estimation in transient data matching is challenging for two reasons. First, the problem can be ill-posed, leading to instability and non-uniqueness of the solution. The singular value decomposition (SVD) technique employed in this paper for parameter estimation provides a solution for both well-posed and ill-posed inverse problems, which are solved as finite-dimensional nonlinear optimization problems [1]. Second, the transient response of an engine is highly nonlinear, and it may not be possible to match the entire transient regime accurately with a given set of model parameters. The transient weighting capability developed in this paper overcomes this difficulty by performing selective data matching over a specified region of interest. The application of the inverse methodology is demonstrated on a single-spool non-afterburning engine and other aviation engine models.