

Publications


Featured research published by Joshua Mullins.


Reliability Engineering & System Safety | 2014

Variable-fidelity model selection for stochastic simulation

Joshua Mullins; Sankaran Mahadevan

This paper presents a model selection methodology for maximizing the accuracy in the predicted distribution of a stochastic output of interest subject to an available computational budget. Model choices of different resolutions/fidelities such as coarse vs. fine mesh and linear vs. nonlinear material model are considered. The proposed approach makes use of efficient simulation techniques and mathematical surrogate models to develop a model selection framework. The model decision is made by considering the expected (or estimated) discrepancy between model prediction and the best available information about the quantity of interest, as well as the simulation effort required for the particular model choice. The form of the best available information may be the result of a maximum fidelity simulation, a physical experiment, or expert opinion. Several different situations corresponding to the type and amount of data are considered for a Monte Carlo simulation over the input space. The proposed methods are illustrated for a crack growth simulation problem in which model choices must be made for each cycle or cycle block even within one input sample.
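A minimal sketch of the budget-constrained selection idea described in this abstract: among affordable model choices, pick the one with the smallest estimated discrepancy. The model names, costs, and discrepancy values below are hypothetical placeholders, not the paper's actual models or results.

```python
# Illustrative sketch: choose the affordable model with the smallest estimated discrepancy.
def select_model(candidates, remaining_budget):
    """candidates: dicts with 'name', 'cost', 'est_discrepancy'; budget is what remains."""
    affordable = [m for m in candidates if m['cost'] <= remaining_budget]
    if not affordable:
        raise ValueError("No candidate model fits within the remaining budget")
    return min(affordable, key=lambda m: m['est_discrepancy'])

models = [
    {'name': 'coarse-linear',  'cost': 1.0,  'est_discrepancy': 0.20},
    {'name': 'fine-linear',    'cost': 10.0, 'est_discrepancy': 0.08},
    {'name': 'fine-nonlinear', 'cost': 50.0, 'est_discrepancy': 0.02},
]
print(select_model(models, remaining_budget=15.0)['name'])  # -> 'fine-linear'
```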


Reliability Engineering & System Safety | 2016

Separation of aleatory and epistemic uncertainty in probabilistic model validation

Joshua Mullins; You Ling; Sankaran Mahadevan; Lin Sun; Alejandro Strachan

This paper investigates model validation under a variety of different data scenarios and clarifies how different validation metrics may be appropriate for different scenarios. In the presence of multiple uncertainty sources, model validation metrics that compare the distributions of model prediction and observation are considered. Both ensemble validation and point-by-point approaches are discussed, and it is shown how applying the model reliability metric point-by-point enables the separation of contributions from aleatory and epistemic uncertainty sources. After individual validation assessments are made at different input conditions, it may be desirable to obtain an overall measure of model validity across the entire domain. This paper proposes an integration approach that assigns weights to the validation results according to the relevance of each validation test condition to the overall intended use of the model in prediction. Since uncertainty propagation for probabilistic validation is often unaffordable for complex computational models, surrogate models are often used; this paper proposes an approach to account for the additional uncertainty introduced in validation by the uncertain fit of the surrogate model. The proposed methods are demonstrated with a microelectromechanical system (MEMS) example.
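A minimal sketch of a model reliability metric of the form r = P(|y_model - y_obs| < tolerance), estimated by sampling. The tolerance value and the Gaussian stand-ins for the prediction and observation distributions are illustrative assumptions, not the paper's data or its full point-by-point procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_reliability(pred_samples, obs_samples, tol):
    """Fraction of paired samples whose prediction-observation gap is within tol."""
    return np.mean(np.abs(pred_samples - obs_samples) < tol)

pred = rng.normal(1.00, 0.05, size=10_000)   # model prediction samples
obs  = rng.normal(1.02, 0.03, size=10_000)   # observation uncertainty samples
print(model_reliability(pred, obs, tol=0.1))
```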


54th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference | 2013

Probabilistic Integration of Validation and Calibration Results for Prediction Level Uncertainty Quantification: Application to Structural Dynamics

Joshua Mullins; Chenzhao Li; Shankar Sankararaman; Sankaran Mahadevan; Angel Urbina

In many application domains, it is impossible or impractical to perform full-scale experiments on a system in the regime of interest. Instead, data may be collected via experiments at lower levels of the system on individual materials, components, or subsystems. As a result, physics-based models are the only available tool at the full system level and must be used to make predictions in the regime of interest. This scenario poses difficulty when attempting to characterize the uncertainty in the model prediction since no further data is available. This problem has commonly been addressed by characterizing aleatory uncertainty in input variables with probability distributions and then performing calibration to characterize epistemic uncertainty stemming from parameter uncertainty. Calibrated models are then validated by any of a number of available metrics by utilizing a new set of independent data, typically at a higher level of the system hierarchy. Once the models are validated at the subsystem level, the input variability and parameter uncertainty can be propagated through a system-level model to make a prediction in the regime of interest. Standard propagation techniques will yield a probability distribution for a quantity of interest which can then be used to perform reliability analysis and make system assessments. However, common implementations of this approach largely ignore model uncertainty, which may be a leading source of uncertainty in the prediction. The approach presented in this study accounts for the fact that models never perform perfectly in the validation activities. Model validation is typically posed as a single pass/fail decision, but it can instead be viewed probabilistically. The model reliability metric is utilized in this study because it readily allows this probabilistic treatment by computing the probability of model errors exceeding a specified tolerance. We assert that parameters which are calibrated from models which are later partially invalidated should not be fully carried forward to the prediction. Calibration can be performed jointly with all available subsystem models or by using only a subset of them. Thus, from a Bayesian calibration approach we can obtain several posterior distributions of the parameters in addition to the prior distribution. These parameter distributions can then be weighted by the corresponding model probabilities computed in the validation step.
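A minimal sketch of the probabilistic-integration idea described above: parameter samples from several calibration runs (and the prior) are combined into a mixture whose weights reflect per-model validation probabilities. The posterior samples and the weights here are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Posterior samples of one parameter from two hypothetical calibration runs, plus the prior,
# each paired with a validation-based weight (e.g. from model reliability results).
posteriors = [rng.normal(2.0, 0.1, 5000), rng.normal(2.2, 0.2, 5000), rng.normal(2.0, 0.5, 5000)]
weights = np.array([0.6, 0.3, 0.1])

# Draw from the weighted mixture: pick a component per draw, then a sample from it.
component = rng.choice(len(posteriors), size=5000, p=weights)
mixture = np.array([rng.choice(posteriors[c]) for c in component])
print(mixture.mean(), mixture.std())
```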


SAE International Journal of Materials and Manufacturing | 2013

A Comparison of Methods for Representing and Aggregating Uncertainties Involving Sparsely Sampled Random Variables - More Results

Vicente J. Romero; Joshua Mullins; Laura Painton Swiler; Angel Urbina

This paper discusses the treatment of uncertainties corresponding to relatively few samples of random-variable quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse samples it is not practical to have a goal of accurately estimating the underlying variability distribution (probability density function, PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a desired percentage of the actual PDF, say 95% included probability, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that it minimally over-estimate the random-variable range corresponding to the desired percentage of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem an interesting and difficult one. In this paper the performance of five uncertainty representation techniques is characterized on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods exhibit significantly better overall performance than the others according to the objectives and performance measures emphasized.
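A minimal sketch of one generic conservative representation for sparsely sampled data: a two-sided normal tolerance interval intended to bound about 95% of the population with a chosen confidence, using the Howe approximation for the k-factor. This is an illustration of the problem setting, not the specific methods compared in the paper.

```python
import numpy as np
from scipy import stats

def normal_tolerance_interval(samples, coverage=0.95, confidence=0.90):
    """Two-sided normal tolerance interval (Howe approximation for k)."""
    x = np.asarray(samples, dtype=float)
    n = x.size
    z = stats.norm.ppf((1.0 + coverage) / 2.0)
    chi2 = stats.chi2.ppf(1.0 - confidence, n - 1)
    k = z * np.sqrt((n - 1) * (1.0 + 1.0 / n) / chi2)
    m, s = x.mean(), x.std(ddof=1)
    return m - k * s, m + k * s

rng = np.random.default_rng(2)
sparse_data = rng.normal(10.0, 2.0, size=8)   # only 8 observations
print(normal_tolerance_interval(sparse_data))
```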


Archive | 2014

Optimal Selection of Calibration and Validation Test Samples under Uncertainty.

Joshua Mullins; Chenzhao Li; Sankaran Mahadevan; Angel Urbina

Frequently, important system properties that parameterize system models must be inferred from experimental data in a calibration problem. The quality of the resulting parameter description is directly linked to the quality and quantity of data. Because economic considerations and experimental limitations often lead to data that is sparse and/or imprecise (sources of epistemic uncertainty), this research will characterize parameter uncertainty within a probabilistic framework. A hierarchical system structure will be considered where data may be collected at different levels of varying complexity and cost (e.g. material, component, subsystem, etc.), leading to a tradeoff decision of cost vs. importance of data. In addition to calibration, data is also needed within the proposed framework to perform probabilistic model validation to characterize model uncertainty. This uncertainty will be carried through the system parameters, thereby increasing the conservatism in the prediction because of the presence of imperfect models. This research proposes a constrained discrete optimization formulation for selecting among the candidate data types (calibration or validation and at what level) to determine the respective quantities of observations to collect in order to best quantify parameter uncertainty and model uncertainty (which propagate to prediction uncertainty).
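A minimal sketch of the kind of constrained discrete allocation decision described above: enumerate feasible numbers of tests at each hierarchy level under a cost budget and keep the allocation with the best score. The costs, budget, and the diminishing-returns scoring function are illustrative assumptions, not the paper's formulation.

```python
from itertools import product

costs = {'material': 1.0, 'component': 5.0, 'subsystem': 20.0}
budget = 40.0

def score(alloc):
    # Placeholder benefit model: diminishing returns per additional test at each level.
    weights = {'material': 1.0, 'component': 3.0, 'subsystem': 8.0}
    return sum(weights[k] * (1 - 0.5 ** n) for k, n in alloc.items())

best = None
for n_mat, n_comp, n_sub in product(range(11), range(9), range(3)):
    alloc = {'material': n_mat, 'component': n_comp, 'subsystem': n_sub}
    cost = sum(costs[k] * n for k, n in alloc.items())
    if cost <= budget and (best is None or score(alloc) > score(best)):
        best = alloc
print(best)
```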


16th AIAA Non-Deterministic Approaches Conference | 2014

Options for the inclusion of model discrepancy in Bayesian calibration

You Ling; Joshua Mullins; Sankaran Mahadevan

One of the challenging issues in the well-known Kennedy and O’Hagan framework for Bayesian calibration is to formulate the prior of model discrepancy function, which can significantly affect the results of calibration. In the absence of physical knowledge on model inadequacy, it is often not clear how to construct a suitable prior, whereas an inappropriate selection of prior may lead to biased or useless parameter estimation. Aiming to address the uncertainty arising from the selection of a particular prior, this paper conducts an extensive study on possible formulations of model discrepancy function, and proposes a three-step (calibration, validation, and combination) approach in order to inform the decision on the construction of model discrepancy priors. In the validation step, a reliability-based metric is used to evaluate the predictions based on calibrated model parameters and discrepancy in the validation domain. The validation metric serves as a quantitative measure of how well the discrepancy formulation captures the physics missing in the model. In the combination step, the posterior distributions of model parameter and discrepancy corresponding to different priors are combined into a single distribution based on the probabilistic weights derived from the validation step. The combined distribution acknowledges the uncertainty in the prior formulation of model discrepancy function.
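A minimal sketch of the combination step described above: validation-step reliability scores for several candidate discrepancy formulations are normalized into probabilistic weights for combining the corresponding posteriors. The reliability values and formulation names are placeholders.

```python
import numpy as np

# Hypothetical model reliability results for three discrepancy priors
# (e.g. zero discrepancy, constant bias, Gaussian-process discrepancy).
reliability = np.array([0.55, 0.70, 0.90])

weights = reliability / reliability.sum()   # normalize into mixture weights
print(dict(zip(['zero', 'constant', 'gp'], weights.round(3))))
```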


Journal of Verification, Validation and Uncertainty Quantification | 2016

Bayesian Uncertainty Integration for Model Calibration, Validation, and Prediction

Joshua Mullins; Sankaran Mahadevan

This paper proposes a comprehensive approach to prediction under uncertainty by application to the Sandia National Laboratories verification and validation challenge problem. In this problem, legacy data and experimental measurements of different levels of fidelity and complexity (e.g., coupon tests, material and fluid characterizations, and full system tests/measurements) compose a hierarchy of information where fewer observations are available at higher levels of system complexity. This paper applies a Bayesian methodology in order to incorporate information at different levels of the hierarchy and include the impact of sparse data in the prediction uncertainty for the system of interest. Since separation of aleatory and epistemic uncertainty sources is a pervasive issue in calibration and validation, maintaining this separation in order to perform these activities correctly is the primary focus of this paper. Toward this goal, a Johnson distribution family approach to calibration is proposed in order to enable epistemic and aleatory uncertainty to be separated in the posterior parameter distributions. The model reliability metric approach to validation is then applied, and a novel method of handling combined aleatory and epistemic uncertainty is introduced. The quality of the validation assessment is used to modify the parameter uncertainty and add conservatism to the prediction of interest. Finally, this prediction with its associated uncertainty is used to assess system-level reliability (a prediction goal for the challenge problem).
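A minimal sketch of the Johnson-family idea mentioned above: represent aleatory variability with a flexible four-parameter Johnson distribution, so that uncertainty in the fitted parameters can carry the epistemic part separately. The data here are synthetic, and the paper's actual calibration procedure is more involved.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.lognormal(mean=0.5, sigma=0.3, size=200)   # stand-in aleatory samples

# Fit a Johnson SU distribution; the fitted distribution describes aleatory spread,
# while uncertainty in (a, b, loc, scale) would be treated as epistemic.
a, b, loc, scale = stats.johnsonsu.fit(data)
print(a, b, loc, scale)
```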


Archive | 2013

A comparison of methods for representing sparsely sampled random quantities.

Vicente J. Romero; Laura Painton Swiler; Angel Urbina; Joshua Mullins

This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a specified percentile range of the actual PDF, say the range between the 0.025 and 0.975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that it minimally over-estimate the desired percentile range of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical Tolerance Intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.
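A minimal sketch of the second well-performing idea named in this abstract: a kernel density estimate built from sparse samples. This uses SciPy's standard Gaussian KDE as a stand-in rather than the specialized sparse-data variant developed in the report.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
sparse_data = rng.normal(10.0, 2.0, size=8)   # only 8 observations

kde = stats.gaussian_kde(sparse_data)
grid = np.linspace(0.0, 20.0, 201)
density = kde(grid)
print(grid[np.argmax(density)])   # location of the estimated mode
```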


16th AIAA Non-Deterministic Approaches Conference | 2014

Computational Effort vs. Accuracy Tradeoff in Uncertainty Quantification

Joshua Mullins; Sankaran Mahadevan

This paper presents a methodology for improving the efficiency of uncertainty quantification by making selections among a set of available candidate models. The goal is to maximize the accuracy in the predicted distribution of an output of interest (subject to a computational budget) by evaluating variable fidelity models at different points in the domain. The proposed approach uses surrogate models to estimate the discrepancy between available models and the best available information (in the form of high-fidelity simulation, experiment, or expert opinion). This discrepancy is typically expected to be larger for lower fidelity, cheaper models, so the tradeoff decision between discrepancy and computational effort is explored, and a methodology for addressing this issue is proposed. The selection decisions are made in a different manner for each of the aforementioned data (or subjective information) scenarios that are considered. The proposed methods are illustrated for a crack growth simulation problem in which model choices must be made for each cycle or cycle block even within one input sample.
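A minimal sketch of the surrogate-based discrepancy estimation described above: evaluate the gap between a cheap model and the best available (high-fidelity) information at a few inputs, then fit a cheap surrogate of that gap to guide the effort-versus-accuracy decision elsewhere. The toy functions below stand in for real simulations.

```python
import numpy as np

def low_fidelity(x):   # cheap, less accurate model (toy)
    return np.sin(x)

def high_fidelity(x):  # expensive reference model (toy)
    return np.sin(x) + 0.1 * x

# Evaluate the discrepancy at a few training points and fit a polynomial surrogate.
x_train = np.linspace(0.0, 3.0, 6)
disc_train = high_fidelity(x_train) - low_fidelity(x_train)
surrogate = np.polynomial.Polynomial.fit(x_train, disc_train, deg=2)

# Predicted discrepancy at new inputs guides the model-choice decision there.
x_new = np.array([0.5, 1.5, 2.5])
print(surrogate(x_new))
```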


Journal of Computational Physics | 2014

Selection of model discrepancy priors in Bayesian calibration

You Ling; Joshua Mullins; Sankaran Mahadevan

Collaboration


Dive into Joshua Mullins's collaborations.

Top Co-Authors

Angel Urbina | Sandia National Laboratories
You Ling | Vanderbilt University
Laura Painton Swiler | Sandia National Laboratories
Vicente J. Romero | Sandia National Laboratories