Publication


Featured research published by Shankar Sankararaman.


Reliability Engineering & System Safety | 2011

Likelihood-based representation of epistemic uncertainty due to sparse point data and/or interval data

Shankar Sankararaman; Sankaran Mahadevan

This paper presents a likelihood-based methodology for a probabilistic representation of a stochastic quantity for which only sparse point data and/or interval data may be available. The likelihood function is evaluated from the probability density function (PDF) for sparse point data and the cumulative distribution function for interval data. The full likelihood function is used in this paper to calculate the entire PDF of the distribution parameters. The uncertainty in the distribution parameters is integrated to calculate a single PDF for the quantity of interest. The approach is then extended to non-parametric PDFs, wherein the entire distribution can be discretized at a finite number of points and the probability density values at these points can be inferred using the principle of maximum likelihood, thus avoiding the assumption of any particular distribution. The proposed approach is demonstrated with challenge problems from the Sandia Epistemic Uncertainty Workshop and the results are compared with those of previous studies that pursued different approaches to represent and propagate interval description of input uncertainty.
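The core of the likelihood construction above can be sketched in a few lines: point data contribute through the PDF, and each interval contributes the probability mass the distribution assigns to it. The following is a minimal illustration, not the paper's implementation; the normal-distribution assumption, the data values, and the crude grid search are all hypothetical.

```python
import math

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def norm_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def log_likelihood(mu, sigma, points, intervals):
    # Point data contribute through the PDF; interval data contribute
    # through the probability mass assigned to each interval (CDF difference).
    ll = sum(math.log(norm_pdf(x, mu, sigma)) for x in points)
    ll += sum(math.log(norm_cdf(b, mu, sigma) - norm_cdf(a, mu, sigma))
              for a, b in intervals)
    return ll

# Hypothetical sparse data: three point observations and two intervals.
points = [9.8, 10.3, 10.1]
intervals = [(9.5, 10.5), (9.0, 11.0)]

# Crude grid search for the maximum-likelihood (mu, sigma).
best = max(((mu, sigma)
            for mu in [9.0 + 0.05 * i for i in range(41)]
            for sigma in [0.1 + 0.05 * j for j in range(40)]),
           key=lambda p: log_likelihood(p[0], p[1], points, intervals))
print(best)
```

The paper goes further, using the full likelihood over the parameter space (not just its maximizer) to build the entire PDF of the distribution parameters.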


Reliability Engineering & System Safety | 2013

Separating the contributions of variability and parameter uncertainty in probability distributions

Shankar Sankararaman; Sankaran Mahadevan

This paper proposes a computational methodology to quantify the individual contributions of variability and distribution parameter uncertainty to the overall uncertainty in a random variable. Even if the distribution type is assumed to be known, sparse or imprecise data leads to uncertainty about the distribution parameters. If uncertain distribution parameters are represented using probability distributions, then the random variable can be represented using a family of probability distributions. The family of distributions concept has been used to obtain qualitative, graphical inference of the contributions of natural variability and distribution parameter uncertainty. The proposed methodology provides quantitative estimates of the contributions of the two types of uncertainty. Using variance-based global sensitivity analysis, the contributions of variability and distribution parameter uncertainty to the overall uncertainty are computed. The proposed method is developed at two different levels; first, at the level of a variable whose distribution parameters are uncertain, and second, at the level of a model output whose inputs have uncertain distribution parameters.
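The separation described above rests on the law of total variance, Var(X) = E[Var(X|θ)] + Var(E[X|θ]): the first term is natural variability, the second is the contribution of distribution parameter uncertainty. A minimal two-loop Monte Carlo sketch, with an entirely hypothetical setup (X ~ Normal(μ, 1), uncertain parameter μ ~ Normal(0, 0.5)):

```python
import random
random.seed(0)

# Hypothetical setup: X ~ Normal(mu, sigma=1), where the distribution
# parameter mu is itself uncertain, mu ~ Normal(0, 0.5).
N_OUTER, N_INNER = 2000, 200

cond_means, cond_vars = [], []
for _ in range(N_OUTER):
    mu = random.gauss(0.0, 0.5)          # sample an uncertain parameter value
    xs = [random.gauss(mu, 1.0) for _ in range(N_INNER)]
    m = sum(xs) / N_INNER
    cond_means.append(m)
    cond_vars.append(sum((x - m) ** 2 for x in xs) / (N_INNER - 1))

# Law of total variance: Var(X) = E[Var(X|mu)] + Var(E[X|mu]).
variability = sum(cond_vars) / N_OUTER                # aleatory part, ~1.0
grand = sum(cond_means) / N_OUTER
param_unc = sum((m - grand) ** 2 for m in cond_means) / N_OUTER   # ~0.25
print(variability, param_unc)
```

The paper's variance-based global sensitivity analysis generalizes this decomposition to model outputs whose inputs have uncertain distribution parameters.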


Reliability Engineering & System Safety | 2011

Model validation under epistemic uncertainty

Shankar Sankararaman; Sankaran Mahadevan

This paper develops a methodology to assess the validity of computational models when some quantities may be affected by epistemic uncertainty. Three types of epistemic uncertainty regarding input random variables – interval data, sparse point data, and probability distributions with parameter uncertainty – are considered. When the model inputs are described using sparse point data and/or interval data, a likelihood-based methodology is used to represent these variables as probability distributions. Two approaches – a parametric approach and a non-parametric approach – are pursued for this purpose. While the parametric approach leads to a family of distributions due to distribution parameter uncertainty, the principles of conditional probability and total probability can be used to integrate the family of distributions into a single distribution. The non-parametric approach directly yields a single probability distribution. The probabilistic model predictions are compared against experimental observations, which may again be point data or interval data. A generalized likelihood function is constructed for Bayesian updating, and the posterior distribution of the model output is estimated. The Bayes factor metric is extended to assess the validity of the model under both aleatory and epistemic uncertainty and to estimate the confidence in the model prediction. The proposed method is illustrated using a numerical example.
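In its simplest form, the Bayes factor compares how well the data are explained under the model's predictive distribution versus an alternative hypothesis. A toy sketch follows; the normal distributions, the diffuse alternative, and the observation values are illustrative assumptions, not the paper's generalized likelihood formulation.

```python
import math

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical: the model predicts y ~ Normal(10, 1); the alternative
# hypothesis uses a much more diffuse distribution, Normal(10, 5).
observations = [9.7, 10.4, 10.1, 9.9]

def log_lik(mu, sigma):
    return sum(math.log(norm_pdf(y, mu, sigma)) for y in observations)

log_B = log_lik(10.0, 1.0) - log_lik(10.0, 5.0)
bayes_factor = math.exp(log_B)
# B > 1 favors the model; here the data sit close to the prediction.
print(bayes_factor)
```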


Journal of Mechanical Design | 2012

Likelihood-Based Approach to Multidisciplinary Analysis Under Uncertainty

Shankar Sankararaman; Sankaran Mahadevan

This paper proposes a new methodology for uncertainty quantification in systems that require multidisciplinary iterative analysis between two or more coupled component models. This methodology is based on computing the probability of satisfying the interdisciplinary compatibility equations, conditioned on specific values of the coupling (or feedback) variables, and this information is used to estimate the probability distributions of the coupling variables. The estimation of the coupling variables is analogous to likelihood-based parameter estimation in statistics and thus leads to the proposed likelihood approach for multidisciplinary analysis (LAMDA). Using the distributions of the feedback variables, the coupling can be removed in any one direction without loss of generality, while still preserving the mathematical relationship between the coupling variables. The calculation of the probability distributions of the coupling variables is theoretically exact and does not require a fully coupled system analysis. The proposed method is illustrated using a mathematical example and an aerospace system application—a fire detection satellite.
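The central idea of LAMDA, as described above, is that the likelihood of a candidate coupling-variable value is tied to the probability of satisfying the interdisciplinary compatibility equations at that value. A heavily simplified sketch with two made-up linear disciplines (all functions, distributions, and the band-counting density estimate are illustrative assumptions):

```python
import math, random
random.seed(1)

# Hypothetical two-discipline system with coupling variables u12, u21:
#   discipline A: u12 = 0.5*u21 + x
#   discipline B: u21 = 0.5*u12 + x
# with random input x ~ Normal(1, 0.1). The converged coupling value is
# u21 = 2x, so its likelihood should peak near 2.0.
def residual(x, c):
    u12 = 0.5 * c + x            # discipline A, conditioned on u21 = c
    return (0.5 * u12 + x) - c   # compatibility residual for discipline B

xs = [random.gauss(1.0, 0.1) for _ in range(5000)]

def likelihood(c, band=0.02):
    # LAMDA idea (sketch): the likelihood of u21 = c is proportional to the
    # probability that the compatibility residual is ~0, estimated here by
    # the fraction of samples falling inside a narrow band around zero.
    return sum(abs(residual(x, c)) < band for x in xs) / len(xs)

grid = [1.0 + 0.05 * i for i in range(41)]   # candidate u21 values in [1.0, 3.0]
u21_mle = max(grid, key=likelihood)
print(u21_mle)
```

Note that no fixed-point iteration between the disciplines is performed; each discipline is evaluated once per sample, conditioned on the candidate coupling value.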


AIAA Journal | 2013

Test Resource Allocation in Hierarchical Systems Using Bayesian Networks

Shankar Sankararaman; Kyle McLemore; Sankaran Mahadevan; Samuel Case Bradford; Lee Peterson

This paper develops analytical methods for test resource allocation that aid in reducing the uncertainty in the system model prediction for multilevel and multidisciplinary systems. The various component, subsystem, and system-level model predictions; the corresponding inputs and calibration parameters; test data; and model and measurement errors are connected efficiently using a Bayesian network. This provides a unified framework for uncertainty analysis where test data can be integrated along with computational simulations. The Bayesian network is used in an inverse problem where the model parameters of multiple subsystems are calibrated simultaneously. This leads to a decrease in the variance of the model parameters, and hence, in the variance of the overall system performance prediction. An optimization-based procedure is then used for test resource allocation using the Bayesian network, and those tests that can effectively reduce the uncertainty in the system model prediction are identified.


54th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference | 2013

Probabilistic Integration of Validation and Calibration Results for Prediction Level Uncertainty Quantification: Application to Structural Dynamics

Joshua Mullins; Chenzhao Li; Shankar Sankararaman; Sankaran Mahadevan; Angel Urbina

In many application domains, it is impossible or impractical to perform full-scale experiments on a system in the regime of interest. Instead, data may be collected via experiments at lower levels of the system on individual materials, components, or subsystems. As a result, physics-based models are the only available tool at the full system level and must be used to make predictions in the regime of interest. This scenario poses difficulty when attempting to characterize the uncertainty in the model prediction since no further data is available. This problem has commonly been addressed by characterizing aleatory uncertainty in input variables with probability distributions and then performing calibration to characterize epistemic uncertainty stemming from parameter uncertainty. Calibrated models are then validated by any of a number of available metrics by utilizing a new set of independent data, typically at a higher level of the system hierarchy. Once the models are validated at the subsystem level, the input variability and parameter uncertainty can be propagated through a system-level model to make a prediction in the regime of interest. Standard propagation techniques will yield a probability distribution for a quantity of interest which can then be used to perform reliability analysis and make system assessments. However, common implementations of this approach largely ignore model uncertainty, which may be a leading source of uncertainty in the prediction. The approach presented in this study accounts for the fact that models never perform perfectly in the validation activities. Model validation is typically posed as a single pass/fail decision, but it can instead be viewed probabilistically. The model reliability metric is utilized in this study because it readily allows this probabilistic treatment by computing the probability of model errors exceeding a specified tolerance. 
We assert that parameters which are calibrated from models which are later partially invalidated should not be fully carried forward to the prediction. Calibration can be performed jointly with all available subsystem models or by using only a subset of them. Thus, from a Bayesian calibration approach we can obtain several posterior distributions of the parameters in addition to the prior distribution. These parameter distributions can then be weighted by the corresponding model probabilities computed in the
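The model reliability metric mentioned above is the probability that the model error stays within a specified tolerance. A minimal Monte Carlo sketch, assuming entirely hypothetical distributions for the model prediction and the experimental observation:

```python
import random
random.seed(2)

# Model reliability metric (sketch): the probability that the difference
# between the (uncertain) model prediction and the (uncertain) experimental
# observation stays within a user-specified tolerance.
TOL = 1.0
N = 20000

# Hypothetical distributions: model prediction ~ Normal(10, 0.5),
# experimental observation ~ Normal(10.3, 0.4).
hits = sum(abs(random.gauss(10.0, 0.5) - random.gauss(10.3, 0.4)) < TOL
           for _ in range(N))
model_reliability = hits / N
print(model_reliability)
```

A value near 1 indicates the model rarely errs beyond the tolerance; in the weighting scheme sketched in the abstract, such probabilities modulate how much the corresponding calibrated parameter distributions are carried forward to the prediction.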


54th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference | 2013

Assessing the Reliability of Computational Models under Uncertainty

Shankar Sankararaman; Sankaran Mahadevan

This paper investigates the use of the model reliability approach for validating computational models in the presence of both aleatory and epistemic uncertainty. The model reliability approach was originally developed to compare the mean of the model prediction against the mean of the experimental data, in the presence of aleatory uncertainty. This paper extends the method to (1) include both aleatory and epistemic uncertainty; and (2) account for the entire probability distributions of model prediction and experimental data. Two different types of data are considered for validation: (1) well-characterized experiments, where output measurements and their corresponding input measurements are available; and (2) uncharacterized experiments, where only output measurements are available and the uncertainty in the inputs is known. Different types of uncertainty – physical variability in the inputs and model parameters, data uncertainty in the form of sparse and interval data, and measurement errors in both the input and the output – are included in the model validation procedure. The proposed methods are illustrated using a model intended to predict the energy dissipation in a lap joint under impact loading.


Inverse Problems in Science and Engineering | 2012

Model parameter estimation with imprecise and unpaired data

Shankar Sankararaman; Sankaran Mahadevan

This article presents statistical approaches for model parameter estimation when the input and output measurements are imprecise in nature. Several types of imprecision are discussed: (1) In classical parameter estimation problems, paired input–output data points are used for model calibration. In reality, point data may be available at the input and output levels but they may not be paired, i.e. input and output measurements are independent of each other and there is no correspondence between them. (2) Data available for calibration may be in the form of intervals (range of values) rather than point values. This article investigates both least squares-based and likelihood-based approaches for model calibration in the presence of such imprecise and unpaired data. The likelihood-based approach makes fewer assumptions than the least squares-based approach, especially when aggregating information from multiple interval data. Though the likelihood-based approach is more expensive, it can compute the entire probability distribution of the model parameters, and facilitate further uncertainty propagation after parameter estimation. The proposed methods are illustrated using non-linear structural dynamics experimental data on energy dissipation due to friction in a lap joint, under impact loading.
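For the interval-data case, the likelihood-based calibration described above assigns each interval the probability mass the model-induced output distribution places on it, rather than collapsing intervals to midpoints as a least-squares fit might. A minimal sketch with a made-up linear model, noise level, and data (none of which come from the article):

```python
import math

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical model y = theta * x with Normal(0, 0.2) measurement noise.
# The outputs were recorded only as intervals [a, b] rather than points.
SIGMA = 0.2
data = [(1.0, (1.8, 2.2)),   # (input x, output interval)
        (2.0, (3.7, 4.3)),
        (3.0, (5.6, 6.4))]

def log_lik(theta):
    # Each interval contributes the probability mass that the model-induced
    # output distribution Normal(theta*x, SIGMA) assigns to it.
    ll = 0.0
    for x, (a, b) in data:
        p = norm_cdf((b - theta * x) / SIGMA) - norm_cdf((a - theta * x) / SIGMA)
        ll += math.log(max(p, 1e-300))
    return ll

grid = [1.5 + 0.01 * i for i in range(101)]   # theta candidates in [1.5, 2.5]
theta_mle = max(grid, key=log_lik)
print(theta_mle)
```

Evaluating the (normalized) likelihood over the whole grid, rather than just its maximizer, is what lets the approach deliver the entire probability distribution of the model parameters for subsequent uncertainty propagation.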


Archive | 2012

Bayesian Methods for Uncertainty Quantification in Multi-level Systems

Shankar Sankararaman; Kyle McLemore; Sankaran Mahadevan

This paper develops a Bayesian methodology for uncertainty quantification and test resource allocation in multi-level systems. The various component, subsystem, and system-level models, the corresponding parameters, and the model errors are connected efficiently using a Bayes network. This provides a unified framework for uncertainty analysis where test data can be integrated along with computational models and simulations. The Bayes network is useful in two ways: (1) in a forward problem where the various sources of uncertainty are propagated through multiple levels of modeling to predict the overall uncertainty in the system response; and (2) in an inverse problem where the model parameters of multiple subsystems are calibrated simultaneously using test data. The calibration procedure leads to a decrease in the variance of the model parameters, and hence, in the uncertainty of the overall system performance prediction. The Bayes network is then used for test resource allocation, where an optimization-based procedure identifies tests that can effectively reduce the uncertainty in the system model prediction. The proposed methods are illustrated using two numerical examples: a multi-level structural dynamics problem and a multi-disciplinary thermally induced vibration problem.
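The variance-reduction mechanism behind the inverse problem can be illustrated with the simplest possible node: a conjugate normal-normal update, where each test observation shrinks the posterior variance of a calibrated parameter. This is a textbook sketch, not the paper's Bayes network; the prior, noise variance, and test values are invented for illustration.

```python
# Conjugate normal-normal update (sketch): how test data shrink the
# variance of a calibrated model parameter at one node of a Bayes network.
def update(prior_mean, prior_var, obs, obs_var):
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

mean, var = 5.0, 4.0            # hypothetical prior on a subsystem parameter
for y in [5.6, 5.4, 5.5]:       # three hypothetical subsystem test results
    mean, var = update(mean, var, y, obs_var=1.0)

print(mean, var)                # posterior variance drops with every test
```

In the full methodology, the resource-allocation optimization asks which such tests, at which level of the hierarchy, buy the largest reduction in system-level prediction uncertainty per test.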


50th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference | 2009

Uncertainty Quantification in Structural Health Monitoring

Shankar Sankararaman; Sankaran Mahadevan

Structural health monitoring (SHM) involves detection, isolation and quantification of damage in engineering systems. This paper focuses on methods for the quantification of uncertainty in each of these procedures, in the context of continuous online monitoring. Sources of uncertainty include, but are not limited to, physical variability, measurement uncertainty and model uncertainty. Damage detection is based on statistical hypothesis testing whose uncertainty can be captured easily. Isolation is based on the comparison of fault signatures and sometimes the observed signatures may not uniquely isolate faults, thus causing uncertainty in fault isolation. A metric based on least squares is proposed to assess the confidence in fault isolation, when the fault signatures fail to isolate faults uniquely. The uncertainty in damage quantification is evaluated through statistical non-linear regression and confidence limits for the damage parameter are calculated. The procedures are then illustrated using two types of example problems, a structural frame and a hydraulic actuation system.
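Damage detection via statistical hypothesis testing, as described above, can be sketched with a simple two-sided z-test on a monitored residual: under the null hypothesis "no damage," the residual has zero mean and known standard deviation. The residual values and threshold below are hypothetical.

```python
import math

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Damage detection as a hypothesis test (sketch): under the null hypothesis
# "no damage," the monitored residual has mean 0 and known sigma.
SIGMA = 1.0
residuals = [0.9, 1.3, 0.7, 1.1, 0.8, 1.2]   # hypothetical online data window

n = len(residuals)
z = (sum(residuals) / n) / (SIGMA / math.sqrt(n))
p_value = 2 * (1 - norm_cdf(abs(z)))          # two-sided test
damaged = p_value < 0.05                      # flag damage at the 5% level
print(z, p_value, damaged)
```

The p-value itself quantifies the uncertainty in the detection decision; the paper's isolation and quantification steps then address which fault occurred and how severe it is.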

Collaboration

Top co-authors of Shankar Sankararaman:

- You Ling (Vanderbilt University)
- Angel Urbina (Sandia National Laboratories)
- Lee Peterson (Jet Propulsion Laboratory)