Publication


Featured research published by Chenzhao Li.


Reliability Engineering & System Safety | 2016

An efficient modularized sample-based method to estimate the first-order Sobol' index

Chenzhao Li; Sankaran Mahadevan

The Sobol' index is a prominent methodology in global sensitivity analysis. This paper aims to estimate the Sobol' index directly from available input–output samples, even if the underlying model is unavailable. For this purpose, a new method to calculate the first-order Sobol' index is proposed. The innovation is that the conditional variance and mean in the formula of the first-order index are calculated at an unknown but existing location of the model inputs, instead of an explicit user-defined location. The proposed method is modularized in two aspects: 1) index calculations for different model inputs are separate and use the same set of samples; and 2) model input sampling, model evaluation, and index calculation are separate. Due to this modularization, the proposed method can compute the first-order index when only input–output samples are available and the underlying model is not, and its computational cost is not proportional to the dimension of the model inputs. In addition, the proposed method can estimate the first-order index with correlated model inputs. Considering that the first-order index is a desired metric for ranking model inputs but current methods can only handle independent model inputs, the proposed method helps to fill this gap.
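The given-data setting described above can be illustrated with a simple binning estimator: partition the samples of one input into bins, average the output within each bin, and take the variance of those conditional means. This is a minimal, generic sketch of estimating the first-order index from existing input–output samples only; it is not the authors' modularized algorithm, and the bin count and test function are illustrative.

```python
# A minimal given-data (binning) estimator of the first-order Sobol' index.
# Generic sketch of the setting in the abstract, not the authors' exact method.
import numpy as np

def first_order_sobol_given_data(x, y, n_bins=20):
    """Estimate S_i = Var_{X_i}[E[Y | X_i]] / Var[Y] for each input column of x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    var_y = np.var(y)
    indices = []
    for i in range(x.shape[1]):
        # Partition the range of X_i into equal-probability bins.
        edges = np.quantile(x[:, i], np.linspace(0.0, 1.0, n_bins + 1))
        bin_ids = np.clip(np.digitize(x[:, i], edges[1:-1]), 0, n_bins - 1)
        cond_means, weights = [], []
        for b in range(n_bins):
            mask = bin_ids == b
            if mask.any():
                cond_means.append(y[mask].mean())   # conditional mean E[Y | X_i in bin b]
                weights.append(mask.mean())         # bin occupancy
        cond_means, weights = np.array(cond_means), np.array(weights)
        # Variance of the conditional mean, weighted by bin occupancy.
        overall = np.sum(weights * cond_means)
        indices.append(np.sum(weights * (cond_means - overall) ** 2) / var_y)
    return np.array(indices)

# Example: Y = X1 + 0.5*X2^2 with independent standard-normal inputs
# (expected indices roughly 0.67 and 0.33).
rng = np.random.default_rng(0)
x = rng.standard_normal((20000, 2))
y = x[:, 0] + 0.5 * x[:, 1] ** 2
print(first_order_sobol_given_data(x, y))
```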


Reliability Engineering & System Safety | 2016

Role of calibration, validation, and relevance in multi-level uncertainty integration

Chenzhao Li; Sankaran Mahadevan

Calibration of model parameters is an essential step in predicting the response of a complicated system, but the lack of data at the system level makes it impossible to conduct this quantification directly. In such a situation, system model parameters are estimated using tests at lower levels of complexity which share the same model parameters with the system. For such a multi-level problem, this paper proposes a methodology to quantify the uncertainty in the system-level prediction by integrating calibration, validation, and sensitivity analysis at different levels. The proposed approach considers the validity of the models used for parameter estimation at lower levels, as well as the relevance of the lower level to the prediction at the system level. The model validity is evaluated using a model reliability metric, and models with multivariate output are considered. The relevance is quantified by comparing Sobol' indices at the lower level and the system level, thus measuring the extent to which a lower-level test represents the characteristics of the system so that the calibration results can be reliably used at the system level. Finally, the results of calibration, validation, and relevance analysis are integrated in a roll-up method to predict the system output.
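One simple way to picture a roll-up step is to blend the lower-level calibrated posterior with the prior, weighted by how valid and how relevant the lower-level test was judged to be. The sketch below is illustrative only; the specific weight construction and mixture form are assumptions, not the paper's exact roll-up formulation.

```python
# A minimal sketch of a "roll-up" step: the posterior calibrated from a lower-level
# test is blended with the prior, weighted by validation and relevance results.
# The weight construction is illustrative (hypothetical values), not the paper's formula.
import numpy as np

rng = np.random.default_rng(1)

prior_samples = rng.normal(loc=0.0, scale=2.0, size=50000)      # prior on a model parameter
posterior_samples = rng.normal(loc=1.2, scale=0.4, size=50000)  # posterior from a lower-level test

model_reliability = 0.85   # validation result for the lower-level model (hypothetical)
relevance = 0.70           # e.g. agreement of Sobol' indices between levels (hypothetical)
weight = model_reliability * relevance

# Mixture sampling: with probability `weight`, trust the calibrated posterior;
# otherwise fall back on the prior.
use_posterior = rng.random(50000) < weight
rolled_up = np.where(use_posterior, posterior_samples, prior_samples)

print("roll-up mean/std:", rolled_up.mean(), rolled_up.std())
```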


Applied Intelligence | 2014

A non-parametric method to determine basic probability assignment for classification problems

Peida Xu; Xiaoyan Su; Sankaran Mahadevan; Chenzhao Li; Yong Deng

As an important tool for knowledge representation and decision-making under uncertainty, Dempster-Shafer evidence theory (D-S theory) has been used in many fields. The application of D-S theory is critically dependent on the availability of the basic probability assignment (BPA), and the determination of the BPA is still an open issue. A non-parametric method to obtain the BPA is proposed in this paper. This method can handle multi-attribute datasets in classification problems. Each attribute value of a dataset sample is treated as a stochastic quantity. Its non-parametric probability density function (PDF) is calculated using the training data and can be regarded as the probability model for the corresponding attribute. The BPA function is then constructed based on the relationship between the test sample and the probability models. Missing attribute values in datasets are treated as ignorance in the framework of evidence theory. The method does not assume any particular distribution, so it can be used flexibly in many engineering applications. The obtained BPA can avoid high conflict between pieces of evidence, which is desirable in data fusion. Several benchmark classification problems are used to demonstrate the proposed method and to compare it against existing methods. The classifier constructed with the proposed method compares well with state-of-the-art algorithms.
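The core construction can be sketched for a single attribute: fit a kernel density estimate of the attribute for each class on the training data, evaluate the densities at the test value, and normalize them into singleton masses while reserving some mass for the full frame of discernment. This is a simplified illustration with an assumed fixed ignorance mass; the paper's full method also fuses multiple attributes and handles missing values.

```python
# A simplified sketch of turning non-parametric densities into a basic probability
# assignment (BPA) for one attribute.  The normalization and the mass reserved for
# the full frame are illustrative choices, not the paper's exact construction.
import numpy as np
from scipy.stats import gaussian_kde

def bpa_from_attribute(train_values_by_class, test_value, ignorance=0.1):
    """Return a BPA: mass on each singleton class plus mass on the full frame 'Theta'."""
    densities = {}
    for label, values in train_values_by_class.items():
        kde = gaussian_kde(values)               # non-parametric PDF of this attribute
        densities[label] = float(kde(test_value))
    total = sum(densities.values())
    if total == 0.0:                             # attribute carries no information
        return {"Theta": 1.0}
    bpa = {label: (1.0 - ignorance) * d / total for label, d in densities.items()}
    bpa["Theta"] = ignorance                     # remaining mass expresses ignorance
    return bpa

# Toy example with two classes and one attribute.
rng = np.random.default_rng(2)
train = {"A": rng.normal(0.0, 1.0, 200), "B": rng.normal(2.0, 1.0, 200)}
print(bpa_from_attribute(train, test_value=1.5))
```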


16th AIAA Non-Deterministic Approaches Conference | 2014

Uncertainty Quantification and Output Prediction in Multi-level Problems

Chenzhao Li; Sankaran Mahadevan

The calibration of model parameters is essential to predict the output of a complicated system, but the lack of data at the system level makes it impossible to conduct this quantification directly. This situation drives analysts to obtain information on model parameters using experimental data at lower levels of complexity which share the same model parameters with the system of interest. To solve this multi-level problem, this paper first conducts model calibration using lower-level data and Bayesian inference to obtain the posterior distribution of each model parameter. However, lower-level models are not perfect; thus model validation is also needed to evaluate the models used in calibration. In the model validation, this paper extends the model reliability metric by using a stochastic representation of model reliability, and models with multivariate output are also considered. Another contribution of this paper is the consideration of physical relevance through sensitivity analysis, in order to measure the extent to which a lower-level test represents the physical characteristics of the actual system of interest so that the calibration results can be extrapolated to the system level. Finally, all the information from calibration, validation, and relevance analysis is integrated to quantify the uncertainty in the system-level prediction.


54th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference | 2013

Probabilistic Integration of Validation and Calibration Results for Prediction Level Uncertainty Quantification: Application to Structural Dynamics

Joshua Mullins; Chenzhao Li; Shankar Sankararaman; Sankaran Mahadevan; Angel Urbina

In many application domains, it is impossible or impractical to perform full-scale experiments on a system in the regime of interest. Instead, data may be collected via experiments at lower levels of the system on individual materials, components, or subsystems. As a result, physics-based models are the only available tool at the full system level and must be used to make predictions in the regime of interest. This scenario poses difficulty when attempting to characterize the uncertainty in the model prediction, since no further data are available. This problem has commonly been addressed by characterizing aleatory uncertainty in input variables with probability distributions and then performing calibration to characterize the epistemic uncertainty stemming from parameter uncertainty. Calibrated models are then validated with any of a number of available metrics, using a new set of independent data, typically at a higher level of the system hierarchy. Once the models are validated at the subsystem level, the input variability and parameter uncertainty can be propagated through a system-level model to make a prediction in the regime of interest. Standard propagation techniques will yield a probability distribution for a quantity of interest, which can then be used to perform reliability analysis and make system assessments. However, common implementations of this approach largely ignore model uncertainty, which may be a leading source of uncertainty in the prediction. The approach presented in this study accounts for the fact that models never perform perfectly in the validation activities. Model validation is typically posed as a single pass/fail decision, but it can instead be viewed probabilistically. The model reliability metric is utilized in this study because it readily allows this probabilistic treatment by computing the probability of model errors exceeding a specified tolerance. We assert that parameters calibrated with models that are later partially invalidated should not be fully carried forward to the prediction. Calibration can be performed jointly with all available subsystem models or by using only a subset of them. Thus, from a Bayesian calibration approach we can obtain several posterior distributions of the parameters in addition to the prior distribution. These parameter distributions can then be weighted by the corresponding model probabilities computed in the…
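The model reliability metric referred to above can be sketched as the probability that the model error stays within a specified tolerance, estimated from samples of the model prediction under parameter uncertainty. The numbers below are illustrative.

```python
# A minimal sketch of the model reliability metric as described in the abstract:
# the probability that the model error stays within a user-specified tolerance.
import numpy as np

def model_reliability(prediction_samples, observation, tolerance):
    """P(|model prediction - observation| <= tolerance) over the prediction uncertainty."""
    err = np.abs(np.asarray(prediction_samples) - observation)
    return float(np.mean(err <= tolerance))

rng = np.random.default_rng(3)
pred = rng.normal(10.0, 0.8, 100000)   # prediction samples under parameter uncertainty
obs = 10.5                             # a validation observation (illustrative)
print(model_reliability(pred, obs, tolerance=1.0))
```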


AIAA Journal | 2017

Dynamic Bayesian Network for Aircraft Wing Health Monitoring Digital Twin

Chenzhao Li; Sankaran Mahadevan; You Ling; Sergio Choze; Liping Wang

Current airframe health monitoring generally relies on deterministic physics models and ground inspections. This paper uses the concept of a dynamic Bayesian network to build a versatile probabilis...


18th AIAA Non-Deterministic Approaches Conference | 2016

Robust Test Resource Allocation using Global Sensitivity Analysis

Chenzhao Li; Sankaran Mahadevan

To predict the response of a system with unknown parameters, a common route is to quantify the parameters using test data and propagate the results through a computational model of the system. Activities in this process may include model calibration and/or model validation. Test data value uncertainty has a significant effect on model calibration and model validation, and therefore affects the response prediction. A limited testing budget creates the challenge of test resource allocation, i.e., how to optimize the number of calibration and validation tests to be conducted. In this paper, a novel computational technique based on pseudo-random numbers is proposed to efficiently quantify the uncertainty in the data value of each type of test. This technique helps quantify the contribution of data value uncertainty to the uncertainty in the prediction through Sobol' indices. Consistent predictions using different sets of data are expected if this contribution is small. The number of each type of test is then optimized to minimize this contribution, and a simulated annealing algorithm is applied to solve this discrete optimization problem.
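The discrete optimization step can be sketched with a small simulated-annealing loop over integer test counts under a budget constraint. The objective and cost functions below are hypothetical stand-ins for the Sobol'-index-based contribution of data value uncertainty and the per-test costs used in the paper.

```python
# A compact simulated-annealing sketch for a discrete test-allocation problem:
# choose integer numbers of calibration and validation tests under a budget so as
# to minimize an objective.  Objective and cost below are hypothetical stand-ins.
import math
import random

def objective(n_cal, n_val):
    # Hypothetical proxy: prediction uncertainty attributable to data shrinks with test counts.
    return 1.0 / (1.0 + n_cal) + 2.0 / (1.0 + n_val)

def cost(n_cal, n_val):
    return 3.0 * n_cal + 5.0 * n_val          # hypothetical per-test costs

def anneal(budget=60, steps=5000, t0=1.0, t_min=1e-3):
    random.seed(0)
    state = (1, 1)
    best = state
    for k in range(steps):
        t = max(t_min, t0 * (1.0 - k / steps))
        # Propose a neighboring allocation (change one count by +/- 1).
        cand = list(state)
        cand[random.randrange(2)] += random.choice((-1, 1))
        cand = (max(0, cand[0]), max(0, cand[1]))
        if cost(*cand) > budget:
            continue                          # reject infeasible allocations
        delta = objective(*cand) - objective(*state)
        if delta < 0 or random.random() < math.exp(-delta / t):
            state = cand
            if objective(*state) < objective(*best):
                best = state
    return best

print("best (n_calibration, n_validation):", anneal())
```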


Archive | 2014

Optimal Selection of Calibration and Validation Test Samples under Uncertainty

Joshua Mullins; Chenzhao Li; Sankaran Mahadevan; Angel Urbina

Frequently, important system properties that parameterize system models must be inferred from experimental data in a calibration problem. The quality of the resulting parameter description is directly linked to the quality and quantity of data. Because economic considerations and experimental limitations often lead to data that are sparse and/or imprecise (sources of epistemic uncertainty), this research will characterize parameter uncertainty within a probabilistic framework. A hierarchical system structure will be considered, where data may be collected at different levels of varying complexity and cost (e.g., material, component, subsystem), leading to a tradeoff between cost and the importance of the data. In addition to calibration, data are also needed within the proposed framework to perform probabilistic model validation to characterize model uncertainty. This uncertainty will be carried through the system parameters, thereby increasing the conservatism in the prediction because of the presence of imperfect models. This research proposes a constrained discrete optimization formulation for selecting among the candidate data types (calibration or validation, and at what level) to determine the respective quantities of observations to collect in order to best quantify parameter uncertainty and model uncertainty (which propagate to prediction uncertainty).


Archive | 2015

Sensitivity Analysis for Test Resource Allocation

Chenzhao Li; Sankaran Mahadevan

To predict the response of a system with unknown parameters, a common route is to quantify the parameters using test data and propagate the results through a computational model of the system. Activities in this process may include model calibration and/or model validation. Data uncertainty has a significant effect on model calibration and model validation, and therefore affects the response prediction. Data uncertainty includes uncertainty in both the amount of data and the numerical values of the data. Although its effect can be observed qualitatively by trying different data sets and visually comparing the response predictions, a quantitative methodology assessing the contributions of these two types of data uncertainty to the uncertainty in the response prediction is necessary in order to solve test resource allocation problems. In this paper, a novel computational technique based on pseudo-random numbers is proposed to efficiently quantify the uncertainty in the data value of each type of test. Then the method of the auxiliary variable, based on the probability integral transform theorem, is applied to build a deterministic function so that variance-based global sensitivity analysis can be conducted. The resultant global sensitivity indices quantify the contribution of the data value uncertainty of each type of test to the uncertainty in the response prediction. Thus a methodology for robust test resource allocation is proposed, i.e., choosing the number of each type of test so that the response predictions using different data sets are consistent.
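The auxiliary-variable idea can be sketched as follows: the random value of a test datum is expressed as a deterministic function of a uniform auxiliary variable through the inverse CDF (probability integral transform), so the map from physical inputs and auxiliary variables to the prediction becomes deterministic and variance-based sensitivity analysis applies. The toy "calibrate-then-predict" model below is illustrative, not the paper's application.

```python
# A minimal sketch of the auxiliary-variable construction: the randomness in a
# test datum is carried by a uniform variable U via the inverse CDF, making the
# overall map deterministic.  The toy prediction model is illustrative only.
import numpy as np
from scipy.stats import norm

def prediction(theta, u_data):
    # The observed datum is recovered from its auxiliary uniform variable through
    # the inverse CDF of the (assumed) measurement distribution.
    datum = norm.ppf(u_data, loc=theta, scale=0.3)
    # Toy calibration-then-prediction step driven by that datum.
    theta_hat = 0.5 * (theta + datum)
    return theta_hat ** 2

rng = np.random.default_rng(4)
theta = rng.normal(1.0, 0.2, 100000)   # physical input uncertainty
u = rng.uniform(0.0, 1.0, 100000)      # auxiliary variable for data value uncertainty
y = prediction(theta, u)

# A first-order Sobol' index of U computed on this deterministic map would then
# quantify the contribution of data value uncertainty to the prediction uncertainty.
print("total prediction variance:", y.var())
```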


Reliability Engineering & System Safety | 2018

Efficient approximate inference in Bayesian networks with continuous variables

Chenzhao Li; Sankaran Mahadevan

Inference is a key objective in a Bayesian network (BN): it aims to estimate the posterior distributions of state variables based on evidence (observations). While efficient analytical inference algorithms (either approximate or exact) for BNs with discrete variables are well established in the literature, inference in a BN with continuous variables is still challenging if the BN is non-linear and/or non-Gaussian. In this case we can either discretize the continuous variables and utilize the inference approaches for discrete BNs, or we have to use sampling-based methods such as MCMC for a static BN and particle filtering for a dynamic BN. This paper proposes a network collapsing technique based on the concept of the probability integral transform to convert a multi-layer BN into an equivalent simple two-layer BN, so that the unscented Kalman filter can be applied to the collapsed BN and the posterior distributions of the state variables can be obtained analytically. For a dynamic BN, the proposed method is also able to propagate the state variables to the next time step analytically using the unscented transform, based on the assumption that the posterior distributions of the state variables are Gaussian. Thus the proposed method achieves a very fast approximate solution, making it particularly suitable for dynamic BNs where inference and uncertainty propagation are required over many time steps.
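The analytical propagation relies on the unscented transform, which can be sketched as follows: deterministic sigma points of a Gaussian state are pushed through a nonlinear function and the output mean and covariance are recovered from weighted sums, with no sampling. The network-collapsing step itself is not reproduced here; the map and parameter values below are illustrative.

```python
# A minimal sketch of the unscented transform used by the collapsed two-layer BN:
# sigma points of a Gaussian are propagated through a nonlinear function and the
# output mean/covariance are recovered analytically (no sampling).
import numpy as np

def unscented_transform(mean, cov, func, alpha=1.0, beta=2.0, kappa=None):
    """Propagate a Gaussian (mean, cov) through `func` using 2n+1 sigma points."""
    n = len(mean)
    if kappa is None:
        kappa = 3.0 - n                      # common heuristic choice
    lam = alpha ** 2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
    y = np.array([func(s) for s in sigma])   # deterministic evaluations only
    y_mean = wm @ y
    diff = y - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# Example: a 2-D Gaussian pushed through a mildly nonlinear observation map.
mean = np.array([1.0, 0.5])
cov = np.diag([0.04, 0.09])
g = lambda x: np.array([x[0] * x[1], np.sin(x[0]) + x[1]])
y_mean, y_cov = unscented_transform(mean, cov, g)
print(y_mean)
print(y_cov)
```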

Collaboration


Dive into Chenzhao Li's collaborations.

Top Co-Authors

Joshua Mullins
Sandia National Laboratories

Angel Urbina
Sandia National Laboratories

You Ling
Vanderbilt University

Kyle Neal
Vanderbilt University

Zhen Hu
University of Michigan

Peida Xu
Shanghai Jiao Tong University