Ramesh Rebba
Vanderbilt University
Publications
Featured research published by Ramesh Rebba.
Reliability Engineering & System Safety | 2005
Sankaran Mahadevan; Ramesh Rebba
This paper proposes a methodology based on Bayesian statistics to assess the validity of reliability computational models when full-scale testing is not possible. Sub-module validation results are used to derive a validation measure for the overall reliability estimate. Bayes networks are used for the propagation and updating of validation information from the sub-modules to the overall model prediction. The methodology includes uncertainty in the experimental measurement, and the posterior and prior distributions of the model output are used to compute a validation metric based on Bayesian hypothesis testing. Validation of a reliability prediction model for an engine blade under high-cycle fatigue is illustrated using the proposed methodology.
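A minimal sketch of the Bayes-factor computation behind such a metric, for a single response with replicate measurements, is shown below. It assumes normal measurement error and a normal prior on the true mean under the alternative hypothesis; all numbers (prediction, data, standard deviations) are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

y_pred = 10.0                          # model prediction (hypothetical)
y_obs = np.array([10.4, 9.7, 10.9])    # replicate measurements (hypothetical)
sigma_m = 0.5                          # measurement error std (assumed known)
sigma_pr = 2.0                         # prior std of the true mean under H1 (assumed)

# Terms involving the within-sample scatter cancel between hypotheses, so
# the Bayes factor reduces to a ratio of densities of the sample mean.
n = len(y_obs)
ybar = y_obs.mean()
se = sigma_m / np.sqrt(n)
log_B = (norm.logpdf(ybar, loc=y_pred, scale=se)    # H0: model is correct
         - norm.logpdf(ybar, loc=y_pred,            # H1: true mean uncertain
                       scale=np.sqrt(sigma_pr**2 + se**2)))
print(f"Bayes factor B = {np.exp(log_B):.2f}  (B > 1 favors the model)")
```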
Reliability Engineering & System Safety | 2006
Ramesh Rebba; Sankaran Mahadevan; Shuping Huang
This paper develops a Bayesian methodology for assessing the confidence in model prediction by comparing the model output with experimental data when both are stochastic. The prior distribution of the response is first computed, which is then updated based on experimental observation using Bayesian analysis to compute a validation metric. A model error estimation methodology is then developed to include model form error, discretization error, stochastic analysis error (UQ error), input data error and output measurement error. Sensitivity of the validation metric to various error components and model parameters is discussed. A numerical example is presented to illustrate the proposed methodology.
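As a sketch of the prior-to-posterior step, the conjugate normal update below contrasts the response distribution before and after observing data; measurement noise is assumed normal with known standard deviation, and all values are illustrative.

```python
import numpy as np

mu0, tau0 = 10.0, 1.5                  # prior response distribution from the model (assumed)
y_obs = np.array([10.4, 9.7, 10.9])    # experimental observations (hypothetical)
sigma_m = 0.5                          # measurement error std (assumed known)

n = len(y_obs)
post_var = 1.0 / (1.0 / tau0**2 + n / sigma_m**2)   # precisions add
post_mean = post_var * (mu0 / tau0**2 + y_obs.sum() / sigma_m**2)
print(f"prior:     N({mu0:.3f}, {tau0:.3f}^2)")
print(f"posterior: N({post_mean:.3f}, {post_var**0.5:.3f}^2)")
# A large prior-to-posterior shift signals disagreement between model and
# data; the validation metric is built from this comparison.
```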
Reliability Engineering & System Safety | 2008
Ramesh Rebba; Sankaran Mahadevan
This paper investigates various statistical approaches for the validation of computational models when both model prediction and experimental observation have uncertainties, and proposes two new methods for this purpose. The first method utilizes hypothesis testing to accept or reject a model at a desired significance level. Interval-based hypothesis testing is found to be more practically useful for model validation than the commonly used point null hypothesis testing. Both classical and Bayesian approaches are investigated. The second and more direct method formulates model validation as a limit state-based reliability estimation problem. Both simulation-based and analytical methods are presented to compute the model reliability for single or multiple comparisons of the model output and observed data. The proposed methods are illustrated and compared using numerical examples.
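The reliability-based formulation is straightforward to sketch by simulation: estimate the probability that prediction and observation agree within a tolerance. The distributions and tolerance below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
y_model = rng.normal(10.0, 1.0, N)   # uncertain model prediction (assumed)
y_test = rng.normal(10.3, 0.5, N)    # uncertain observation (assumed)
eps = 1.0                            # analyst-chosen tolerance

# Limit state g = eps - |y_model - y_test|; model reliability r = P(g > 0),
# so any standard reliability method (FORM/SORM or sampling) can evaluate it.
r = np.mean(np.abs(y_model - y_test) < eps)
print(f"model reliability r = {r:.3f}")
```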
AIAA Journal | 2005
Ramesh Rebba; Sankaran Mahadevan
This paper develops methods to address important decision-making issues in the use of model-based simulation for reliability analysis and design. When computational models are developed, the assumptions and approximations introduce various types of errors in the code predictions. In order to accept the model prediction with confidence, the computational models need to be rigorously verified and validated. While model prediction has uncertainty, validation experiments also have measurement errors. Thus model validation involves comparing predictions with test data when both are uncertain. A validation metric based on Bayesian hypothesis testing is presented and the method is extended to consider multiple response quantities or a single model response at different spatial and temporal points. Another decision in the use of model-based prediction is to extrapolate the validation inference in the tested region to an inference about the predictive capability in the untested region of actual application, and to establish a certain confidence in the extrapolation being performed. A methodology is developed to propagate inferences across domains using Bayesian networks. The proposed methods are illustrated for application to structural dynamics problems.
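The extrapolation step can be caricatured with a two-node network: evidence gathered in the tested domain updates the confidence that the model is valid there, and an assumed link strength carries that confidence into the application domain. All probabilities below are illustrative placeholders, not values from the paper.

```python
p_valid_tested = 0.6       # prior: model valid in the tested domain (assumed)
p_pass_if_valid = 0.9      # P(favorable validation evidence | valid)   (assumed)
p_pass_if_invalid = 0.2    # P(favorable validation evidence | invalid) (assumed)

# Bayes update in the tested domain after favorable validation evidence.
num = p_pass_if_valid * p_valid_tested
p_valid_tested_post = num / (num + p_pass_if_invalid * (1 - p_valid_tested))

# Link node: how strongly validity transfers across domains (assumed).
p_app_if_valid = 0.8
p_app_if_invalid = 0.1
p_valid_app = (p_app_if_valid * p_valid_tested_post
               + p_app_if_invalid * (1 - p_valid_tested_post))
print(f"P(valid in application domain | evidence) = {p_valid_app:.3f}")
```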
Reliability Engineering & System Safety | 2006
Ramesh Rebba; Sankaran Mahadevan
This paper develops metrics for validating computational models with experimental data, considering uncertainties in both. A computational model may generate multiple response quantities and the validation experiment might yield corresponding measured values. Alternatively, a single response quantity may be predicted and observed at different spatial and temporal points. Model validation in such cases involves comparison of multiple correlated quantities. Multiple univariate comparisons may give conflicting inferences. Therefore, aggregate validation metrics are developed in this paper. Both classical and Bayesian hypothesis testing are investigated for this purpose, using multivariate analysis. Since commonly used statistical significance tests are based on normality assumptions, appropriate transformations are investigated in the case of non-normal data. The methodology is implemented to validate an empirical model for energy dissipation in lap joints under dynamic loading.
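An aggregate classical test of this kind can be sketched with a Mahalanobis-distance statistic on the difference vector, assuming multivariate normality and a known covariance (with an estimated covariance, Hotelling's T^2 would replace the chi-square). All values below are illustrative.

```python
import numpy as np
from scipy.stats import chi2

y_pred = np.array([10.0, 5.0, 2.0])    # predicted responses (hypothetical)
y_obs = np.array([10.6, 4.6, 2.3])     # observed responses (hypothetical)
cov = np.array([[0.25, 0.05, 0.00],    # covariance of the difference
                [0.05, 0.16, 0.02],    # vector (assumed known)
                [0.00, 0.02, 0.09]])

d = y_pred - y_obs
m2 = d @ np.linalg.solve(cov, d)       # squared Mahalanobis distance
p_value = chi2.sf(m2, df=len(d))       # single aggregate significance level
print(f"M^2 = {m2:.2f}, p = {p_value:.3f}")
# For non-normal data, a normalizing transformation would precede the test.
```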
Journal of Mechanical Design | 2006
Sankaran Mahadevan; Ramesh Rebba
This paper proposes a methodology to estimate errors in computational models and to include them in reliability-based design optimization (RBDO). Various sources of uncertainties, errors, and approximations in model form selection and numerical solution are considered. The solution approximation error is quantified based on the model itself, using the Richardson extrapolation method. The model form error is quantified based on the comparison of model prediction with physical observations using an interpolated resampling approach. The error in reliability analysis is also quantified and included in the RBDO formulation. The proposed methods are illustrated through numerical examples.
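The Richardson-extrapolation step for the solution approximation error can be sketched from solutions on three systematically refined meshes; the mesh sizes and solution values below are illustrative.

```python
import numpy as np

h = np.array([0.4, 0.2, 0.1])        # mesh sizes, constant refinement ratio
f = np.array([1.250, 1.100, 1.062])  # computed solutions per mesh (assumed)

r = h[0] / h[1]                      # refinement ratio (here 2)
# Observed order of convergence from the three solutions.
p = np.log((f[0] - f[1]) / (f[1] - f[2])) / np.log(r)
# Extrapolated solution and discretization-error estimate on the finest mesh.
f_exact = f[2] + (f[2] - f[1]) / (r**p - 1.0)
disc_error = f_exact - f[2]
print(f"observed order p = {p:.2f}, extrapolated solution = {f_exact:.4f}, "
      f"discretization error = {disc_error:+.4f}")
```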
International Journal of Materials & Product Technology | 2006
Ramesh Rebba; Shuping Huang; Yongming Liu; Sankaran Mahadevan
This paper investigates various statistical methodologies for validating simulation models in automotive design. Validation metrics to compare model prediction with experimental observation, when there is uncertainty in both, are developed. Two types of metrics based on Bayesian analysis and principal components analysis are proposed. The validation results are also compared with those obtained from classical hypothesis testing. A fatigue life prediction model for composite materials and a residual stress prediction model for a spot-welded joint are validated using the proposed methodology.
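A sketch of the principal-components idea: decorrelate the prediction-observation differences, then apply univariate tests component by component. The data below are synthetic placeholders.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
# 20 replicate experiments, 3 correlated response quantities:
# rows = experiments, columns = (prediction - observation) differences.
D = rng.multivariate_normal(
    mean=[0.1, -0.2, 0.05],
    cov=[[0.25, 0.05, 0.0], [0.05, 0.16, 0.02], [0.0, 0.02, 0.09]],
    size=20)

cov = np.cov(D, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # principal directions
scores_mean = D.mean(axis=0) @ eigvecs   # mean difference per component

# Univariate z-test on each (now uncorrelated) component mean.
se = np.sqrt(eigvals / len(D))
z = scores_mean / se
p_vals = 2 * norm.sf(np.abs(z))
for i, p in enumerate(p_vals):
    print(f"component {i + 1}: z = {z[i]:+.2f}, p = {p:.3f}")
```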
Design Automation Conference | 2010
Dorin Drignei; Zissimos P. Mourelatos; Ramesh Rebba
Sensitivity analysis and computer model calibration are generally treated as two separate topics. In sensitivity analysis one quantifies the effect of each input factor on outputs, whereas in calibration one finds the values of input factors that provide the best match to a set of field data. In this paper we show a connection between these two seemingly separate concepts, and illustrate it with an automotive industry application involving a Road Load Acquisition Data (RLDA) computer model. We use global sensitivity analysis for computer models with transient responses to screen out inactive input parameters and make the calibration algorithm numerically more stable. Because the computer model can be computationally intensive, we construct a fast statistical surrogate for the computer model with transient responses. This fast surrogate is used for both sensitivity analysis and RLDA computer model calibration.
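A sketch of the screening step: first-order Sobol indices estimated on a cheap surrogate (via the standard pick-and-freeze design) flag near-inactive inputs that can be dropped before calibration. The surrogate function below is a stand-in, not the RLDA model or the paper's actual surrogate.

```python
import numpy as np

def surrogate(x):
    # Hypothetical fast surrogate: input 3 is nearly inactive.
    return np.sin(x[:, 0]) + 0.7 * x[:, 1] ** 2 + 0.01 * x[:, 2]

rng = np.random.default_rng(2)
N, d = 50_000, 3
A = rng.uniform(-1, 1, (N, d))           # two independent sample matrices
B = rng.uniform(-1, 1, (N, d))
fA, fB = surrogate(A), surrogate(B)
var_y = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # swap one column: pick-and-freeze
    S_i = np.mean(fB * (surrogate(ABi) - fA)) / var_y
    print(f"S_{i + 1} = {S_i:.3f}")      # near-zero index -> screen out input
```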
47th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference | 2006
Ramesh Rebba; Sankaran Mahadevan
Model uncertainties and limited experimental data in many engineering applications make model validation essentially a statistical exercise. This paper investigates various statistical methods to quantitatively assess how close the prediction is to the observation. Interval-based hypothesis testing is found to be more practically useful than point null hypothesis testing. Both classical and Bayesian statistical techniques are implemented for this purpose. Also, a more direct approach is proposed by formulating model validation as a reliability estimation problem. The model reliability metric is extended for validating models with multivariate outputs. The proposed methods are illustrated and compared using numerical examples.
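The multivariate extension of the reliability metric can be sketched as the joint probability that every output difference lies within its tolerance; the distributions and tolerances below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000
cov = [[0.25, 0.05], [0.05, 0.16]]     # correlated outputs (assumed)
y_model = rng.multivariate_normal([10.0, 5.0], cov, N)   # predictions
y_test = rng.multivariate_normal([10.3, 4.8], cov, N)    # observations
eps = np.array([1.0, 0.8])             # per-output tolerances (assumed)

# Joint event: every component of the difference is inside its tolerance.
inside = np.all(np.abs(y_model - y_test) < eps, axis=1)
print(f"multivariate model reliability r = {inside.mean():.3f}")
```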
SAE transactions | 2003
Ramesh Rebba; Sankaran Mahadevan; Ruoxue Zhang
This paper proposes new methods to assess the validity of reliability prediction models through a Bayesian approach. The concept of Bayesian hypothesis testing is extended to system-level problems where full-scale testing is impossible. Component-level validation results are used to derive a system-level validation measure. This derivation depends on knowledge of the interrelationships between component modules. Bayes networks are used for the propagation of validation information from the component level to the system level. The methodology is illustrated by validating a reliability prediction model for a single-degree-of-freedom oscillator under high-cycle fatigue and a fatigue life prediction model for a helicopter rotor hub.
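As a caricature of the roll-up, component-level Bayes factors can be converted to posterior confidences and combined; the independence and all-components-must-hold assumptions below stand in for the interrelationships a full Bayes network would encode. Numbers are illustrative.

```python
component_bayes_factors = [12.0, 4.5, 8.0]   # from component-level tests (hypothetical)

p_system = 1.0
for B in component_bayes_factors:
    p_valid = B / (B + 1.0)    # posterior P(valid | data) under a 50/50 prior
    p_system *= p_valid        # series / independence assumption
print(f"system-level validation confidence = {p_system:.3f}")
```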