Bayesian Modeling of Inconsistent Plastic Response due to Material Variability
Francesco Rizzi, Mohammad Khalil, Reese E. Jones, Jeremy A. Templeton, Jakob T. Ostien, Brad L. Boyce
Abstract
The advent of fabrication techniques such as additive manufacturing has focused attention on the considerable variability of material response due to defects and other microstructural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments, using traditional plasticity models of the mean response together with recently developed uncertainty quantification (UQ) techniques. We demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and show how these UQ techniques can be used in model selection and in assessing the quality of the calibrated physical parameters.
Variability of material response due to defects and other microstructural aspects has been well-known for some time [Hill [1963], Nemat-Nasser [1999], McDowell et al. [2011], Mandadapu et al. [2012]]. In many engineering applications inherent material variability has been insignificant, and traditionally the design process is based on the mean or lower-bound response of the chosen materials. Material failure is a notable exception, since it is particularly sensitive to outliers in the distributions of microstructural features [Dingreville et al. [2010], Battaile et al. [2015], Emery et al. [2015]]. Even in cases of material failure, traditional engineering design approaches have been able to successfully ignore material variability through the use of empirical safety factors. As the engineering community moves towards more physically realistic and efficient designs while maintaining needed safety, it is necessary to replace these empirical safety factors with confidence bounds that account for actual material variability.

Currently, additive manufacturing (AM) is of particular technological interest and provides strong motivation to model not only the mean response of materials but also their intrinsic variability. Additive manufacturing has the distinct advantages of being able to fabricate complex geometries and accelerate the design-build-test cycle through rapid prototyping [Frazier [2014]]; however, fabrication with this technique currently suffers from variability in mechanical response due to various sources, including defects imbued by the process, the formation of residual stresses, and geometric variation in the printed parts. As an example, high-throughput tensile data from Boyce et al. [Boyce et al.
[2017]] clearly shows pronounced variability in the resultant yield and hardening.

The current state of AM technology, and of other manufacturing methods with intrinsic variability, e.g. nano- and bio-based methods, would clearly benefit from an enhanced design methodology that accounts for this variability in order to meet performance thresholds with high confidence. In this work, we leverage tools from uncertainty quantification (UQ) [Le Maître and Knio [2010], Xiu [2010], Smith [2013]] to provide material variability models, realizations, and, ultimately, robust performance predictions.

∗ Sandia National Laboratories, P.O. Box 969, Livermore, CA 94551, USA. † Corresponding Author Email: [email protected].

It is well-known that any model is an approximation of the physical response of a real system. Typically, models are characterized by many parameters, and thus appropriately tuning them becomes a key step toward reliable predictions. The most common approach to model calibration is least-squares regression, which yields a deterministic result appropriate for design to the mean. Bayesian inference methods provide a more general framework for model calibration and parameter estimation, offering a robust means of handling multiple sources of calibration information as well as a full joint probability density on the target parameters. Traditionally, Bayesian techniques have been applied in conjunction with additive noise models that are appropriate for modeling external, uncorrelated influences on observed responses. Recently, a novel technique has been developed to embed the modeled stochasticity in distributions on the physical parameters of the model itself [Sargsyan et al. [2015b]], and in this work we adapt it to model the inherent variability of an AM metal [Boyce et al. [2017]].
This is not the only method available in this emerging field of probabilistic modeling of physical processes for engineering applications, and there are commonalities between many of the methods. Notably, the work of Emery et al. [Emery et al. [2015]] applied the stochastic reduced order model (SROM) technique [Field et al. [2015]] to weld failure. The SROM technique has many of the basic components of the embedded noise model: a surrogate model of the response to physical parameters, a means of propagating distributions of parameters with Monte Carlo (MC) sampling, and the computation of realistic realizations of the predicted response.

In practice, many plausible models exist to represent the trends in the noisy observational data. Typically, these models aim to capture different aspects of the relevant physical phenomenon or use different paradigms to represent similar aspects. Some of the available models may be overly complex representations of the system response in relation to the available observations. This issue is amplified by the use of models to capture the modeling error in addition to the mean trends. In this context, the optimal model can be obtained using Bayesian model selection, where optimality is measured in terms of data-fit and model simplicity [Beck [2010], Kass and Raftery [1995], Berger and Pericchi [1996], Verdinelli and Wasserman [1996]].

In Sec. 2 of this work, we describe the selected experimental dataset [Boyce et al. [2017]] that motivates this effort and provides calibration data. This deep dataset provides real-world relevance that a synthetic dataset would not; however, we apply some pre-processing and simplifying assumptions to facilitate the task of developing the methodology. In Sec. 3, we review the basic plasticity theory that provides the basis for the material variability models developed in Sec. 4. In Sec.
4, we present the Bayesian framework for model calibration and selection, including the formulation of the likelihood functions that are required in this context. In particular, we adapt both the traditional additive error [Kennedy and O'Hagan [2001]] model and a more novel embedded error [Sargsyan et al. [2015b]] model. In Sec. 5, we provide the results relating to surrogate model construction, parameter estimation and model selection with the available experimental data. In Sec. 6, we emphasize the innovations of the proposed approach to modeling the mechanical response to microstructural material variability.

We focus this work on the analysis of high-throughput, micro-tension experimental measurements of additively manufactured stainless steel. From the experiments of Boyce et al. [Boyce et al. [2017]], we have six experimental data sets, each consisting of 120 stress-strain curves from the array of nominally identical dogbone-shaped specimens shown in Fig. 1(a). (The data from distinct builds of the array are referred to as batches throughout the remainder of the manuscript.) Each stress-strain curve, as shown in Fig. 1(b), is qualitatively similar and behaves in a classically elastic-plastic fashion; however, the material displays a range of yield strengths, hardening and failure strengths, and some variability in its apparent elastic properties.

To simplify the data and remove some of the uncertainties associated more with the loading apparatus than the material, we omit the pre-load cycle to approximately 0.2% strain. The remainder of the mechanical response is monotonic tensile loading at a constant strain rate, see Fig. 1(c). We associate the zero-strain reference configuration of each sample with the zero-stress, mildly worked material resulting from the pre-load cycle. The resulting stress, $\sigma$, and strain, $\varepsilon$, values are derived from the customary engineering stress and strain formulas.
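As a concrete illustration of this data reduction, each measured curve is resampled onto a common strain grid (the paper uses $n_\varepsilon = 151$ points over the (0, 3)% interval). A minimal sketch in Python follows; the function name and array layout are our own choices, not from the original analysis:

```python
import numpy as np

def resample_curve(strain_raw, stress_raw, n_eps=151, eps_max=0.03):
    """Interpolate one measured stress-strain curve onto the common
    strain grid eps_j = eps_max * j / (n_eps - 1), j = 0..n_eps-1."""
    eps_grid = eps_max * np.arange(n_eps) / (n_eps - 1)
    return eps_grid, np.interp(eps_grid, strain_raw, stress_raw)
```

Applying this to every curve in every batch yields the rectangular data array used for calibration.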
It is important to note that both stress and strain depend on measurements of the specimen geometry. The gauge section is nominally 1 mm in cross-sectional dimension (the remaining dimensions and tolerances follow the specimen design of Boyce et al.). We have $N_b = 6$ batches of stress data with $N_i$ curves in batch $i$. To make the data suitable for the inverse problem of parameter calibration, we interpolate each curve and extract $n_\varepsilon = 151$ points over the interval (0, 3)% to finally arrive at

$$\mathcal{D} = \{\mathcal{D}_i\}_{i=1}^{N_b}, \quad \text{with} \quad \mathcal{D}_i = \{\mathcal{D}_i^{(k)}\}_{k=1}^{N_i} \quad \text{and} \quad \mathcal{D}_i^{(k)} = \{\sigma_j^{(i,k)}\}_{j=0}^{n_\varepsilon-1}, \qquad (1)$$

where $i$ enumerates the batches, $k$ enumerates the $N_i$ stress curves within the $i$-th batch, and $\sigma_j^{(i,k)} = \sigma^{(i,k)}(\varepsilon_j)$ represents the stress measured at the $j$-th strain value, $\varepsilon_j = 0.03\, j/(n_\varepsilon - 1)$, for the $k$-th curve of the $i$-th batch. The resulting data set is shown in Fig. 1(c).

To model the observed behavior, which resembles classical von Mises plastic response, we adopt a standard finite deformation framework [Simo [1988]] with a multiplicative decomposition of the deformation gradient, $F$, into elastic and plastic parts,

$$F = F_e F_p, \qquad (2)$$

where $F_e$ is associated with lattice stretching and rotation, and $F_p$ is associated with plastic flow. Following Simo and Hughes [1998], we assume an additive stored energy potential written in terms of the elastic deformation,

$$W = \kappa \left( \tfrac{1}{2}\left(J_e^2 - 1\right) - \log(J_e) \right) + \tfrac{\mu}{2} \left( \operatorname{tr}[\bar{b}_e] - 3 \right). \qquad (3)$$

Here, the elastic volumetric deformation is given by $J_e = \det(F_e) = \det(F)$ since plastic flow is assumed to be isochoric, and the deviatoric elastic deformation is measured by $\bar{b}_e = J_e^{-2/3} F_e F_e^{\mathsf{T}}$. We associate the elastic constants $\kappa$ and $\mu$ with the bulk modulus and shear modulus, respectively, and relate them to Young's modulus, $E$, and Poisson's ratio, $\nu$, via the linear elastic relations $\kappa = E/(3(1-2\nu))$ and $\mu = E/(2(1+\nu))$. The Kirchhoff stress resulting from the derivative of the stored energy potential, $W$, is

$$\tau = \kappa \left(J_e^2 - 1\right) I + s \quad \text{with} \quad s = \mu \operatorname{dev}[\bar{b}_e]. \qquad (4)$$

For the inelastic response, we employ a von Mises ($J_2$) yield condition between an effective stress derived from $s = \operatorname{dev}[\tau]$ and an associated flow stress, $\Upsilon$, as

$$f := \sqrt{\tfrac{3}{2}}\, \|s\| - \Upsilon \le 0. \qquad (5)$$

Figure 1: (a) An array of nominally identical micro-tension "dogbone" specimens, (b) experimental data from Boyce et al. [Boyce et al. [2017]] color-coded by batch, and (c) the reduced data set used in this work. Panels (b) and (c) plot engineering stress (GPa) against engineering strain (%).

The rate-independent, associative flow rule is written in the current configuration as the Lie derivative of the elastic left Cauchy-Green tensor (cf. Simo and Hughes [1998]),

$$\mathcal{L}_v b_e = -\tfrac{2}{3}\, \gamma \operatorname{tr}[b_e]\, \frac{s}{\|s\|}. \qquad (6)$$

The Lagrange multiplier $\gamma$ enforces consistency of the plastic flow with the yield surface, obeys the usual Kuhn-Tucker conditions, and can be interpreted as the rate of plastic slip. Finally, we make the flow stress

$$\Upsilon(\bar{\epsilon}_p) = Y + H \bar{\epsilon}_p + K \left(1 - \exp(-B \bar{\epsilon}_p)\right) \qquad (7)$$

a function of the equivalent plastic strain,

$$\bar{\epsilon}_p = \sqrt{\tfrac{2}{3}} \int_0^t \gamma\, dt, \qquad (8)$$

and the following parameters: the initial yield, $Y$; the linear hardening coefficient, $H$; and the nonlinear exponential saturation modulus $K$ and exponent $B$. In tension, the yield strength, $Y$, determines the onset of plasticity; the hardening coefficient $H$ determines the linear trend of the post-yield behavior; and $K$, $B$ superpose a more gradual transition in the stress-strain response from the trend determined by Young's modulus $E$ in the elastic regime to $H$ in the plastic regime. These material parameters form the basis of our analysis of material variability. To be clear, this standard $J_2$ plasticity model is a coarse-grained representation of the microstructural variations that engender the variability in the mechanical response, with the plastic strain covering a wide variety of underlying inelastic mechanisms and the physical definitions of the material parameters shaping our interpretation of the underlying causes of the variable response.

We approximate the tensile test with a boundary value problem on a rectangular parallelepiped of the nominal gauge section, with prescribed displacements on two opposing faces and traction-free conditions on the remaining faces to effect pure tension. Finite element simulations are performed in ALBANY [Salinger et al. [2016]] using the constitutive model described in this section. The engineering stress $\sigma$ and strain $\varepsilon$ corresponding to those measured in the experiments are recovered from the reaction forces, prescribed displacements, cross-sectional area and gauge length.
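To make the hardening law concrete, the saturation flow stress of Eq. (7) is straightforward to evaluate; the following sketch (our own illustration, with hypothetical parameter values) shows its limiting behavior:

```python
import numpy as np

def flow_stress(eps_p, Y, H, K, B):
    """Flow stress of Eq. (7): Upsilon = Y + H*eps_p + K*(1 - exp(-B*eps_p)).

    Y: initial yield, H: linear hardening coefficient,
    K, B: saturation modulus and exponent."""
    return Y + H * eps_p + K * (1.0 - np.exp(-B * eps_p))

# At zero equivalent plastic strain the flow stress equals the yield Y;
# for large eps_p the exponential term saturates and the slope tends to H.
```

This is the scalar law that, inside the finite element model, sets the current yield surface radius at each material point.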
In general, a calibration problem involves searching for the parameters $\theta$ of a given model that minimize the difference between model predictions and observed data. In this work, we adopt a Bayesian approach to the calibration problem [Kennedy and O'Hagan [2001], Sivia [1996], Rizzi et al. [2012, 2013], Sargsyan et al. [2015b], Marzouk et al. [2007a]]. In contrast to least-squares fitting, which results in a single set of parameter values, in a Bayesian perspective the parameters are considered random variables with associated probability density functions (PDFs) that incorporate both prior knowledge and measured data. The choice of Bayesian methods is well motivated by the data, which agree with the chosen model to a high degree, but uncertainty is present in the model parameters both within and across all batches. Bayesian calibration results in a joint posterior probability density of the parameters $p(\theta|\mathcal{D}, M)$ that best fits the available observations $\mathcal{D}$ given the model choice $M$. The parametric uncertainty reflected in the posterior PDF depends on the consistency of the model with the data and the amount of data. We aim to quantify the material variability using this probabilistic framework and physical interpretations of the parameters.

Consider our model $M$ for the engineering stress, $\sigma = M(\varepsilon; \theta)$, being a full finite element plasticity model with the underlying plasticity model described in Section 3, where $\varepsilon$ is the independent variable and $\theta$ is the vector containing the physical parameters $\{E, Y, H, K, B\}$ as well as auxiliary and nuisance parameters to be defined later. By setting $\{H, K, B\}$ or $\{K, B\}$ to zero we can form a nested sequence of models with 2, 3, or 5 parameters with perfect plastic, linear hardening, or saturation hardening phenomenology, respectively. Given that we only have one-dimensional tension data, we fix the Poisson's ratio at $\nu = 0.3$; however, we allow the Young's modulus, $E$, to vary, so that the locus of yield points is not constrained to a line. We also allow for geometric variability through a non-dimensional cross-section correction factor, $A$, that scales the model output, $\sigma = A \times M(\varepsilon; \theta)$, and we include $A$ in $\theta$. This geometric correction is motivated by the fact that the observed engineering stress data was computed using an average cross-sectional area based on the outer dimensions of each sample, and it can be interpreted as the ratio of the effective load-bearing area of the sample to its measured average area. The correction factor $A$ aims to mitigate the effect of using one nominal value of cross-sectional area per sample rather than a more accurate spatially-varying area profile for each sample, and the fact that the outer dimensions lead to an overestimate of the actual load-bearing area of the AM tensile specimens due to the physical imperfections discussed in Sec. 2. Also, since the gauge length, and hence the strain, are relatively error free, the gauge length is not included in the calibration parameters.

In a Bayesian setting for model calibration and selection, Bayes' rule is used to relate the information contained in the data and prior assumptions to the parameters in the form of a posterior probability density function as

$$p(\theta|\mathcal{D}, M) = \frac{p(\mathcal{D}|\theta, M)\, p(\theta|M)}{p(\mathcal{D}|M)}. \qquad (9)$$

Here $p(\mathcal{D}|\theta, M)$ is the likelihood of observing the data $\mathcal{D}$ given the parameters $\theta$ and model $M$, $p(\theta|M)$ is the prior density on the parameters reflecting our knowledge before incorporating the observations, and $p(\mathcal{D}|M)$ is the model evidence (which we will compute for model selection purposes). It is important to note that the denominator is typically ignored when sampling from the posterior since it is a normalizing factor, independent of $\theta$, that ensures the posterior PDF integrates to unity; however, this term, known as the model evidence, plays a central role in model selection, as will be described later. In this context, we employ relatively uninformative uniform prior densities due to the lack of prior knowledge of the model parameters in the present context of the response of AM tensile specimens. Experimental data influences the resulting posterior probability only through the likelihood $p(\mathcal{D}|\theta, M)$, which is based on some normalized measure of the distance between the data $\mathcal{D}$ and the model predictions $M(\varepsilon; \theta)$. The likelihood plays a role analogous to the cost/objective function in traditional fitting/optimization in the sense that it describes the misfit between model predictions and observational data. Specific forms of the likelihood will be discussed in Sec. 4.3. As Eq. (9) suggests, the outcome is conditioned on the model chosen, leading to questions regarding model comparison and selection which will be discussed in Sec. 4.4. In general, given the complexities of the model $M$, the posterior density $p(\theta|\mathcal{D}, M)$ is not known in closed form and one has to resort to numerical methods to evaluate it. Markov chain Monte Carlo (MCMC) methods [Gamerman and Lopes [2006], Berg and Billoire [2008]] provide a suitable way to sample from the posterior density, while kernel density estimation, for example, can be used to provide subsequent estimates of the posterior PDF.

The sampling of the parameters' posterior probability density and the evaluation of the Bayesian model evidence involve many evaluations of the computational model.
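As a minimal illustration of MCMC sampling of an unnormalized posterior (a sketch only; the study itself uses more sophisticated tooling), a random-walk Metropolis sampler can be written as:

```python
import numpy as np

def metropolis(log_post, theta0, step, n_samples, rng=None):
    """Random-walk Metropolis sampler. `log_post` is the unnormalized
    log-posterior log p(theta|D,M), i.e. log-likelihood + log-prior,
    known only up to the evidence constant."""
    rng = np.random.default_rng(rng)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        # Accept with probability min(1, p(prop)/p(theta)).
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain
```

The resulting chain, after discarding burn-in, provides the samples from which posterior moments or kernel density estimates are computed.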
Since the finite-element based forward model is relatively expensive to query (each tension simulation takes approximately 1 cpu-hour), the inverse problem of parameter estimation and model selection via direct evaluation becomes infeasible. Instead, we construct inexpensive-to-evaluate, accurate surrogates for the response of interest using polynomial chaos expansion (PCE) [Wiener [1938], Ghanem and Spanos [1991], Xiu and Karniadakis [2002]]. Marzouk et al. [Marzouk et al. [2007b]] have shown that such surrogates can be effectively constructed using UQ techniques with a presumed uniform density on the parameters' range of interest.

Since rough bounds on each of the parameters can be estimated from the data and from knowledge of similar materials, we represent the unknown parameters $\theta$ using a spectral PCE in terms of a set of independent and identically distributed standard uniform random variables $\xi \in [-1, 1]^{d_\theta}$, as in

$$\theta(\xi) = \sum_{i=0}^{P_\theta} \theta_i \Psi_i(\xi), \qquad (10)$$

where $d_\theta$ represents the dimensionality of $\theta$, $\Psi_i(\xi)$ are the orthogonal PC basis elements (Legendre polynomials in this case), and $P_\theta$ defines the number of terms in the expansion. Eq. (10) is, essentially, a linear transformation that maps standard uniform random variables to the unknown parameters over their range of interest. A corresponding expansion of the model response, acting as a polynomial-based surrogate model, can be written as

$$M(\varepsilon_j; \xi) \approx \sum_{i=0}^{P_M} \sigma_i(\varepsilon_j) \Psi_i(\xi), \qquad (11)$$

and is constructed as a function of the physical parameters $\{E, Y, H, K, B\}$ at each value of engineering strain $\varepsilon_j$ at which data is available. The PC coefficients for the inputs, $\theta_i$, and outputs, $\sigma_i(\varepsilon_j)$, of the model can be obtained using one of two approaches: Galerkin projection or stochastic collocation.
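As a minimal sketch of the regression-based (non-intrusive) route in one dimension, Legendre-PC coefficients can be estimated from model evaluations by least squares; the helper names below are our own:

```python
import numpy as np
from numpy.polynomial import legendre

def fit_pce_1d(xi, y, order):
    """Least-squares estimate of 1-D Legendre-PC coefficients c_i in
    y(xi) ~ sum_i c_i P_i(xi), with sample points xi in [-1, 1]."""
    V = legendre.legvander(xi, order)        # columns are P_0..P_order at xi
    c, *_ = np.linalg.lstsq(V, y, rcond=None)
    return c

def eval_pce_1d(c, xi):
    """Evaluate the Legendre expansion (the surrogate) at new points."""
    return legendre.legval(xi, c)
```

In the study, the same idea applies per strain point $\varepsilon_j$ with a multivariate Legendre basis over all physical parameters.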
We utilize a non-intrusive stochastic collocation method with regression [Xiu [2010]] to estimate the unknown PC coefficients, as it does not require any modification to the existing computational models/simulators. Details on this procedure are given in [Le Maître and Knio [2010]], with an application of PCE surrogate modeling to computationally intensive numerical models in [Khalil et al. [2015]].

As mentioned, the likelihood is the term in Bayes' rule, Eq. (9), that accounts for the data. Physical reasoning about the available data and its relationship with the model prediction helps to formulate the likelihood. From the discussion in Sec. 2, one can argue that the batches are independent. Furthermore, within a given batch, we assume all the $N_i$ stress-strain curves are independent, since each experiment is a self-contained test performed on a separate specimen, i.e. the variability of each specimen is the result of its specific microstructure.

We consider two different formulations of the likelihood, which lead to different formulations of the inverse problem, and hence models of the material variability. The formulations differ in how they account for measurement noise and other variability, and in how they are affected by (systematic) model discrepancies. Since each formulation leads to qualitatively different predictions, interpretations, and realizations, we are interested in how each is able to discriminate material variability from other sources of randomness. In this section and in the Results section we will discuss how, given that plastic strain is a coarse metric of the inelastic deformation in additively manufactured materials, discrepancies between the observed data and the model predictions can be interpreted physically. The results in Sec. 5 will illustrate how the posterior responds to the quantity of data and its variability.
Consider the $k$-th stress-strain curve from the $i$-th batch, which consists of a sequence of stress observations $\{\sigma_j^{(i,k)}\}_{j=0}^{n_\varepsilon-1}$ obtained at the strain locations $\{\varepsilon_j\}_{j=0}^{n_\varepsilon-1}$. A widely-adopted approach is to express the discrepancy between an observation and the surrogate model prediction using an additive noise model, as in

$$\sigma_j^{(i,k)} = M(\varepsilon_j; \theta) + \eta_j^{(i,k)}, \qquad (12)$$

where $\{\eta_j^{(i,k)}\}_{j=0}^{n_\varepsilon-1}$ is the $(i, k)$ sample from the set of random variables $\{\eta_j\}_{j=0}^{n_\varepsilon-1}$ capturing the discrepancy between observations and model predictions at a given $\varepsilon_j$. This formulation is predicated on the assumption that the model $M(\varepsilon; \theta)$ accurately represents the true, physical process occurring with fixed, but unknown, parameters. This is a strong assumption (and one of the main deficiencies of this approach) since models are, in general, only approximations of observed behavior. Nevertheless, this is a commonly used method due to its simplicity.

In lieu of a completely characterized measurement error model (which is rarely obtained in practice), it is reasonable and expedient to assume the errors are distributed in an independent and identically distributed (i.i.d.) manner, with an assumed Gaussian probability density of zero mean (unbiased) and a variance parameterized by $\varsigma$. This yields the likelihood

$$p(\mathcal{D}_i^{(k)}|\theta, M) = \prod_{j=0}^{n_\varepsilon-1} (2\pi\varsigma^2)^{-1/2} \exp\left( -\frac{(\sigma_j^{(i,k)} - M(\varepsilon_j; \theta))^2}{2\varsigma^2} \right), \qquad (13)$$

where we recall that $\mathcal{D}_i^{(k)}$ represents the stress observations collected from the $k$-th stress-strain curve of the $i$-th batch. With the assumption that each experimental curve is independent of the others, the combined likelihood takes the form

$$p(\mathcal{D}_i|\theta, M) = \prod_{j=0}^{n_\varepsilon-1} \prod_{k=1}^{N_i} (2\pi\varsigma^2)^{-1/2} \exp\left( -\frac{(\sigma_j^{(i,k)} - M(\varepsilon_j; \theta))^2}{2\varsigma^2} \right) \qquad (14)$$

for the full data set of the $i$-th batch. The standard deviation $\varsigma$ can be either fixed in advance (based on prior knowledge) or inferred along with the other unknown parameters $\theta$. Moreover, it can be assumed to be either constant or varying with the strain value. In this context, $\eta$ represents the joint error attributed to modeling error as well as observational noise.

One can enrich this additive error formulation by adding a term capturing the discrepancy between the model prediction and truth (i.e. model error) separately. A structure for the model error is more difficult to prescribe than that for the measurement error. Given that this additional term is not physically associated with the presumed sources of non-measurement (material and other physical) variability, its applicability outside the training regime is tenuous. Furthermore, the additive combination of the two error terms can lead to challenges in disambiguation in a parameter estimation context. Lastly, this additive term can yield difficulties because it can lead to violations of physical laws and constraints [Salloum and Templeton [2014a,b]].

In order to avoid the difficulties associated with an additive error formulation, we embed the model error in key parameters, essentially converting the unknown parameters into random variables that introduce variability in model predictions due to their uncertain nature. This approach, as detailed in [Sargsyan et al. [2015b]], represents selected parameters using PCE. In our context, we will assume a uniform distribution for each parameter in Eq. (12), given by the first-order Legendre-Uniform PCE

$$\theta_i = \alpha_{i,0} + \alpha_{i,1} \xi_i, \qquad (15)$$

in which $\alpha_{i,0}$ represents the mean term and $\alpha_{i,1}$ dictates the level of variability in $\theta_i$ that contributes to the total model error. We will also consider the model selection problem of whether or not to embed the error in $\theta_i$, which amounts to keeping the $\alpha_{i,1}$ term or setting it explicitly to zero.
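Under the embedded representation of Eq. (15), drawing realizations of the uncertain parameters is a one-line transformation of standard uniform variates; a small sketch (names our own):

```python
import numpy as np

def sample_embedded_params(alpha, n_samples, rng=None):
    """Draw realizations of theta_i = alpha[i,0] + alpha[i,1]*xi_i,
    xi_i ~ U[-1, 1], per the first-order Legendre-Uniform PCE of Eq. (15).
    alpha has shape (d_theta, 2); returns array (n_samples, d_theta)."""
    rng = np.random.default_rng(rng)
    alpha = np.atleast_2d(alpha)
    xi = rng.uniform(-1.0, 1.0, size=(n_samples, alpha.shape[0]))
    return alpha[:, 0] + alpha[:, 1] * xi
```

Setting a parameter's $\alpha_{i,1}$ to zero recovers a deterministic parameter, which is exactly the choice examined in the model selection study.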
These coefficients (one or two per parameter) need to be inferred from the experimental observations. Such an embedding of error, along with a classical additive error, may account for both model and measurement errors, respectively. A noticeable advantage of this approach, compared to employing an additive error alone, is that the model error is embedded in the model parameters, and hence we can propagate the calibrated model error through numerical simulations to any output of interest. (Note that this use of a PCE is distinct from its use in the surrogate modeling, where the extent of $\theta$ was selected to cover the feasible range of the parameters and not inferred, as it is in this context.)

The problem of calibrating such an embedded error model is equivalent to that of estimating the probability density of the parameters in which the error is embedded. Specifically, our objective is to estimate $\alpha = \{\alpha_{1,0}, \alpha_{1,1}, \ldots\}$, which parametrize the density of $\theta$. This is in contrast to the conventional use of Bayesian inference for parameter estimation, i.e. additive error formulations, in which one infers the parameter and not its density. Also, the data for our present calibration problem motivates the embedded approach, since it suggests the uncertainties are aleatory/irreducible rather than epistemic/reducible.

In this context, the model calibration problem thus involves finding the posterior distribution of $\alpha$ via Bayes' theorem, Eq. (9),

$$p(\alpha|\mathcal{D}, M) = \frac{p(\mathcal{D}|\alpha, M)\, p(\alpha|M)}{p(\mathcal{D}|M)}, \qquad (16)$$

where $\alpha$ has been substituted for $\theta$, $p(\alpha|\mathcal{D}, M)$ denotes the posterior PDF, $p(\mathcal{D}|\alpha, M)$ is the likelihood PDF, and $p(\alpha|M)$ is the prior PDF. Note that $\alpha$ reduces to the classical parameter vector $\theta$ when no error embedding is performed ($\theta_i = \alpha_{i,0}$ and $\alpha_{i,j} = 0$ for $j > 0$). Among the different options detailed in [Sargsyan et al. [2015b]] for the likelihood construction, we employ the marginalized likelihood, which, for the $i$-th batch $\mathcal{D}_i$, can be written as

$$p(\mathcal{D}_i|\alpha, M) = \frac{1}{(2\pi)^{N_i n_\varepsilon / 2}} \prod_{j=0}^{n_\varepsilon-1} \prod_{k=1}^{N_i} \frac{1}{\varsigma_j(\alpha)} \exp\left[ -\frac{(\mu_j(\alpha) - \sigma_j^{(i,k)})^2}{2 \varsigma_j^2(\alpha)} \right], \qquad (17)$$

where

$$\mu_j(\alpha) = \mathbb{E}_\xi[M(\varepsilon_j; \theta(\alpha, \xi))] \qquad (18)$$

and

$$\varsigma_j^2(\alpha) = \mathbb{V}_\xi[M(\varepsilon_j; \theta(\alpha, \xi))] + \varsigma^2 \qquad (19)$$

are the mean and variance of the model-predicted stress at fixed $\alpha$ and strain point $\varepsilon_j$. These moments are computed using the quadrature techniques that are commonly relied upon in uncertainty propagation, as in the input characterization, Eq. (15); see [Sargsyan et al. [2015b]] for a detailed description of the methodology involved in the likelihood evaluation.

The expansion in Eq. (15) resulting from the embedded model discrepancy approach is general in that it does not impose limitations on which parameters should embed the error. Furthermore, as mentioned in Sec. 4.1, we can model the stress using 2, 3, or 5 physical parameters. We have also chosen to examine the inclusion of a cross-section correction factor, $A$, which could also be chosen as another parameter in which to embed an error. Therefore, there are three physical models competing to fit the data, with up to 6 physical parameters, all of which are competing to be bestowed with an embedded error term. That amounts to a total of 88 plausible models competing to fit the given data. Since determining the optimal model and the optimal parameters for model error embedding a priori is not possible, we perform model selection using Bayes factors [Berger and Pericchi [1996], Verdinelli and Wasserman [1996]]. This method of model selection is a data-based approach that selects the optimal model as the one that strikes the right balance between data-fit and model simplicity [Beck [2010], Kass and Raftery [1995]].
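The per-strain-point moments of Eqs. (18)-(19) can be approximated by propagating samples of $\xi$ through the surrogate. The study uses quadrature, but a plain Monte Carlo sketch (our own simplification, with a hypothetical `model` callable) conveys the idea:

```python
import numpy as np

def pushforward_moments(model, alpha, eps_grid, sigma_noise=0.0,
                        n_mc=4096, rng=None):
    """Monte Carlo estimate of mu_j (Eq. 18) and varsigma_j^2 (Eq. 19):
    theta_i = alpha[i,0] + alpha[i,1]*xi_i with xi_i ~ U[-1,1], and
    `model(eps_grid, theta)` returns the stress at every strain point."""
    rng = np.random.default_rng(rng)
    alpha = np.atleast_2d(alpha)
    xi = rng.uniform(-1.0, 1.0, size=(n_mc, alpha.shape[0]))
    thetas = alpha[:, 0] + alpha[:, 1] * xi
    preds = np.array([model(eps_grid, th) for th in thetas])
    # Mean and variance of the push-forward, plus the additive-noise variance.
    return preds.mean(axis=0), preds.var(axis=0) + sigma_noise**2
```

These moments are then plugged into the Gaussian form of Eq. (17) to evaluate the marginalized likelihood at a given $\alpha$.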
This balance is monitored using the so-called model evidence, or marginal likelihood, of each model $M$, given by

$$p(\mathcal{D}|M) = \int p(\mathcal{D}|\alpha, M)\, p(\alpha|M)\, d\alpha. \qquad (20)$$

The computation of the model evidence $p(\mathcal{D}|M)$ is typically neglected in Bayesian calibration, as it acts merely as a normalizing factor in Bayes' rule, Eq. (9). The model evidence acts as a quantitative Ockham's razor that, when maximized, performs an explicit trade-off between data-fit and model simplicity [Beck [2010]]. This is elucidated when examining the logarithm of the model evidence [Muto and Beck [2008], Sandhu et al. [2017]]:

$$\ln p(\mathcal{D}|M) = \mathbb{E}[\ln p(\mathcal{D}|\alpha, M)] - \mathbb{E}\left[ \ln \frac{p(\alpha|\mathcal{D}, M)}{p(\alpha|M)} \right], \qquad (21)$$

where the expectation $\mathbb{E}[\cdot]$ is with respect to the posterior PDF $p(\alpha|\mathcal{D}, M)$. The first term on the right-hand side of Eq. (21) quantifies the data-fit and is known as the goodness-of-fit. The second term is equal to the relative entropy (or Kullback-Leibler divergence) between the prior and posterior PDFs, also known as the information gain [Konishi and Kitagawa [2007]]. Given sufficient data $\mathcal{D}$, the information gain is normally higher for more complex models (models with more parameters, or with parameters of greater prior PDF support, reflecting poor prior knowledge of their values). Hence, the information gain term quantifies model complexity by examining the relative difference between the prior and posterior PDFs.
This is in contrast to the frequentist approach, where model complexity, in general, only depends on the number of model parameters rather than their relative prior supports [Konishi and Kitagawa [2007]].

Model selection using Bayes factors involves the direct comparison of two plausible models at a time, with the Bayes factor defined as the ratio of their evidences,

$$B(M_i, M_j) = \frac{p(\mathcal{D}|M_i)}{p(\mathcal{D}|M_j)} = \frac{\int p(\mathcal{D}|\alpha, M_i)\, p(\alpha|M_i)\, d\alpha}{\int p(\mathcal{D}|\alpha, M_j)\, p(\alpha|M_j)\, d\alpha}, \qquad (22)$$

with $B(M_i, M_j) > 1$ indicating that model $M_i$ is more likely, while $B(M_i, M_j) < 1$ indicates that $M_j$ is the more likely model. One can supplement the evidence with prior belief about these models (called the prior model probability), in which case we would be interested in the ratios of posterior model probabilities. In our context, we assume that all models have equal prior probabilities ahead of analyzing the data, and thus the Bayes factor is a suitable way to compare the plausible models.

The key challenge is to compute the model evidence efficiently and accurately. In this investigation, we modify a technique known as adaptive Gauss-Hermite quadrature [Naylor and Smith [1982], Liu and Pierce [1994], Hakim et al. [2018], Khalil and Najm [2018]] by employing importance sampling for the integration rather than Gauss quadrature. In an importance sampling framework, we utilize a multivariate normal distribution $q$ as a proposal/sampling distribution, with a mean vector $\mu$ and covariance matrix $\Sigma$ equal to the posterior mean and posterior covariance of $\alpha$, i.e.

$$q(\alpha) = \frac{1}{\sqrt{(2\pi)^{d_\alpha} |\Sigma|}} \exp\left( -\frac{1}{2} (\alpha - \mu)^\intercal \Sigma^{-1} (\alpha - \mu) \right). \qquad (23)$$

The model evidence $\eta$, being an integral over the parameter space, can now be estimated using importance sampling with $q$ as the proposal distribution. To do this, we reformulate the evidence as

$$\eta = \int p(\mathcal{D}|\alpha, M)\, p(\alpha|M)\, d\alpha = \int \frac{p(\mathcal{D}|\alpha, M)\, p(\alpha|M)}{q(\alpha)}\, q(\alpha)\, d\alpha, \qquad (24)$$

and utilize the change of variables $\alpha = L \widetilde{\alpha} + \mu$, with $L$ being the lower triangular matrix in the Cholesky decomposition of the covariance matrix, $\Sigma = L L^\intercal$, to obtain the following formulation for the evidence:

$$\eta = \int \frac{(2\pi)^{d_\alpha/2}\, |L|\, p(\mathcal{D}|L\widetilde{\alpha} + \mu, M)\, p(L\widetilde{\alpha} + \mu|M)}{\exp\left( -\frac{1}{2} \widetilde{\alpha}^\intercal \widetilde{\alpha} \right)}\, \frac{\exp\left( -\frac{1}{2} \widetilde{\alpha}^\intercal \widetilde{\alpha} \right)}{(2\pi)^{d_\alpha/2}}\, d\widetilde{\alpha}. \qquad (25)$$

$\eta$ can now be approximated using a Monte Carlo sampling strategy as

$$\eta \approx \frac{1}{N_{LH}} \sum_{k=1}^{N_{LH}} \frac{(2\pi)^{d_\alpha/2}\, |L|\, p(\mathcal{D}|L\widetilde{\alpha}_k + \mu, M)\, p(L\widetilde{\alpha}_k + \mu|M)}{\exp\left( -\frac{1}{2} \widetilde{\alpha}_k^\intercal \widetilde{\alpha}_k \right)}, \qquad (26)$$

where $(\widetilde{\alpha}_k)_{1 \le k \le N_{LH}}$ are $N_{LH}$ Latin hypercube samples [Le Maître and Knio [2010]] drawn from a standard multivariate normal distribution. For the estimator in Eq.
(26) to be accurate, we need good estimates of the posterior mean vector and covariance matrix, µ and Σ, which can be obtained adaptively using the following iterations (n denoting the iteration number):

µ_n = (1 / (N_LH η_{n−1})) Σ_{k=1}^{N_LH} (2π)^{d_α/2} |L_{n−1}| [ p(D|L_{n−1} α̃_k + µ_{n−1}, M) p(L_{n−1} α̃_k + µ_{n−1}|M) / exp( −(1/2) α̃_kᵀ α̃_k ) ] (L_{n−1} α̃_k + µ_{n−1}) ,   (27)

Σ_n = (1 / (N_LH η_{n−1})) Σ_{k=1}^{N_LH} (2π)^{d_α/2} |L_{n−1}| [ p(D|L_{n−1} α̃_k + µ_{n−1}, M) p(L_{n−1} α̃_k + µ_{n−1}|M) / exp( −(1/2) α̃_kᵀ α̃_k ) ] (L_{n−1} α̃_k)(L_{n−1} α̃_k)ᵀ ,   (28)

η_n = (1 / N_LH) Σ_{k=1}^{N_LH} (2π)^{d_α/2} |L_n| p(D|L_n α̃_k + µ_n, M) p(L_n α̃_k + µ_n|M) / exp( −(1/2) α̃_kᵀ α̃_k ) .   (29)

For each individual model, these iterations start with an initial guess for the posterior parameter mean vector and covariance matrix, which we obtain from an initial run of a Markov chain Monte Carlo sampler over the parameter posterior. The iterations stop when a termination criterion on η is met; for this application, we terminate when the relative change in η falls below a prescribed tolerance.

In this section, we present results relating to the construction of surrogate models from the full finite element plasticity model, their calibration to the experimental data, and the selection of the optimal predictive model using Bayesian model selection. Most of the results are obtained using the UQ Toolkit [Debusschere et al. [2017]].
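A minimal sketch of the adaptive evidence estimator, under simplifying assumptions: a linear-Gaussian toy problem stands in for the plasticity model (so the exact evidence is available for comparison), plain Monte Carlo samples replace the Latin Hypercube design, and the mean and covariance updates are written in self-normalized form:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear-Gaussian toy problem (illustrative, not the paper's plasticity model):
# y = X a + e, e ~ N(0, sig^2 I), prior a ~ N(0, s0^2 I). Evidence is analytic.
n, d, sig, s0 = 15, 2, 0.3, 2.0
X = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])
y = X @ np.array([0.5, 1.5]) + rng.normal(0.0, sig, n)

def log_like(a):
    r = y - X @ a
    return -0.5 * n * np.log(2 * np.pi * sig**2) - r @ r / (2 * sig**2)

def log_prior(a):
    return -0.5 * d * np.log(2 * np.pi * s0**2) - a @ a / (2 * s0**2)

# Adaptive importance sampling: refine mu, Sigma, eta until eta settles.
mu, Sig = np.zeros(d), s0**2 * np.eye(d)   # crude initial guess (the prior)
eta_old, N = 0.0, 4000
for it in range(50):
    L = np.linalg.cholesky(Sig)
    z = rng.standard_normal((N, d))            # standard-normal samples
    a = z @ L.T + mu                           # alpha = L z + mu
    log_q = -0.5 * d * np.log(2 * np.pi) - 0.5 * np.sum(z**2, axis=1) \
            - np.log(np.linalg.det(L))         # proposal density of alpha
    ll = np.array([log_like(ak) + log_prior(ak) for ak in a])
    w = np.exp(ll - log_q)                     # importance weights
    eta = w.mean()                             # evidence estimate, cf. Eq. (26)
    mu = (w[:, None] * a).sum(axis=0) / w.sum()        # posterior-mean update
    dev = a - mu
    Sig = (w[:, None, None] * dev[:, :, None] * dev[:, None, :]).sum(axis=0) / w.sum()
    if eta_old > 0 and abs(eta - eta_old) / eta_old < 1e-3:
        break
    eta_old = eta

# Cross-check against the exact evidence of the conjugate model
Sm = sig**2 * np.eye(n) + s0**2 * X @ X.T
sign, logdet = np.linalg.slogdet(Sm)
log_ev_exact = -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(Sm, y))
assert abs(np.log(eta) - log_ev_exact) < 0.1
```

Because the proposal converges toward the (here exactly Gaussian) posterior, the weights become nearly constant and the variance of the evidence estimate collapses, which is the point of the adaptation.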
To describe the material stress-strain behavior, we analyze, calibrate and compare three nested plasticity models of increasingly complex phenomenology, namely perfect plasticity, linear hardening and saturation hardening. As mentioned in Sec. 3, we focus on five parameters, which control the elastic-plastic stress response: Young's modulus, E; yield strength, Y; hardening modulus, H; saturation modulus, K; and saturation exponent, B. We construct a surrogate model for this five-parameter model, as outlined in Sec. 4.2. In order to construct Legendre-Uniform PCE surrogates for the model outputs (engineering stresses at a collection of engineering strain values), we assumed that the material parameters are uniformly distributed and characterized by the following first-order PCE:

E = 200 + 80 ξ₁ [GPa], Y = 1.⋯ + ⋯ ξ₂ [GPa], H = 3.⋯ + ⋯ ξ₃ [GPa], K = 0.⋯ + ⋯ ξ₄ [GPa], B = 1750 + 1750 ξ₅ ,   (30)

where {ξ₁, ξ₂, ξ₃, ξ₄, ξ₅} ∼ U(−1, 1) are independent identically distributed (i.i.d.) standard uniform random variables. The geometric parameter A will be discussed in the next section.

We chose these parameter ranges to be large enough that the corresponding predictions can capture the variability of the experimental data shown in Fig. 1b. The ranges are also chosen so that the constructed surrogate can be used to emulate the two- and three-parameter models by setting the relevant parameters to zero. Also, we remark that the expansion with i.i.d. random variables is a common step to build the surrogate model. Any correlations between the parameters will then be discovered through the inverse problem; see [Marzouk et al. [2007a], Rizzi et al. [2012, 2013], Khalil et al. [2015], Safta et al. [2017]] for example. To construct and validate this surrogate, we collected training and validation samples.

[Figure 2: Stress-strain curve samples used to build the surrogate for the five-parameter model.]

Fig.
2 shows the stress-strain curves obtained from the corresponding plasticity simulations over the 5032 points in parameter space. Fig. 3 shows the normalized (relative) root-mean-square error for the various PCE surrogates, with increasing order, as a function of the strain. In this case, PCE surrogates of order 6 or greater exhibit the lowest errors. Since the errors do not drop significantly for PCE surrogates of order beyond 6, we are likely over-fitting the full simulation data when using regression to fit the coefficients of higher-order PCE models. For the two-parameter (perfect plasticity) and three-parameter (linear hardening) models, a piece-wise surrogate was superior to a global representation due to the lower smoothness of the response relative to the five-parameter model. The piece-wise surrogate was constructed using plastic strain as a classifier to switch between surrogates for the elastic and plastic regimes.
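The surrogate-construction and order-selection step can be sketched in one dimension: fit Legendre polynomials of increasing order to a smooth stand-in response by least-squares regression, and track the validation NRMSE as a function of order. The response function and sample sizes are assumptions for illustration, not the finite-element model:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(2)

# Stand-in for the stress response over a uniform input xi ~ U(-1, 1).
def response(xi):
    return np.tanh(3.0 * xi) + 0.2 * xi**2

xi_train = rng.uniform(-1, 1, 200)
xi_valid = rng.uniform(-1, 1, 200)   # held-out validation samples
y_train, y_valid = response(xi_train), response(xi_valid)

nrmse = {}
for order in range(1, 10):
    V = legendre.legvander(xi_train, order)            # Legendre design matrix
    coef, *_ = np.linalg.lstsq(V, y_train, rcond=None) # regression fit
    pred = legendre.legvander(xi_valid, order) @ coef
    nrmse[order] = np.sqrt(np.mean((pred - y_valid)**2)) / np.std(y_valid)

# Validation error drops steeply at first, then plateaus: beyond the plateau,
# extra coefficients mostly fit regression noise rather than the response.
assert nrmse[6] < nrmse[1]
```

The plateau in the validation error mirrors the behavior reported for the 5-parameter surrogate, where orders beyond 6 bring no significant improvement.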
For illustrative purposes, we will use the third batch of available data as described in Sec. 2. We reserve the joint analysis of all available batches for possible future investigations, due to the observed large and abrupt changes in stress-strain behavior across batches (as opposed to the smoother variability within batches), as illustrated in Fig. 1c. As discussed in Sec. 4.4, the stress can be modeled using 2, 3, or 5 physical parameters, resulting in three physical models that are competing to fit the data. Furthermore, depending on the model being analyzed, there are up to 6 parameters (including the cross-section correction factor) that can be bestowed with an embedded error term. That amounts to a total of 88 plausible models competing to fit the given data.

We apply the methodology outlined in Sec. 4.4 to compute the model evidence of each of these models. The competing models are labeled according to (a) the inclusion/exclusion of physical parameters and (b) whether or not an error is embedded in those physical parameters. For clarity, models will be distinguished by subscript labels. The appearance of a parameter in the label subscript indicates its inclusion in the model, while the appearance of the parameter with a δ subscript denotes the inclusion of the parameter as well as an embedded error, as in Eq. (15). For example, model M_{E_δ Y_δ HKBA} is a 5-parameter plasticity
[Figure 3: Normalized root-mean-square error in the 5-parameter PCE surrogate for the engineering stress as a function of engineering strain. Results are shown for PCE surrogates of orders up to nine.]

model with errors embedded in Young's modulus and yield strength, whereas M_{EY A_δ} is a 2-parameter plasticity model with an error embedded in the cross-section correction factor. For all models, we assume the measurement noise intensity (or standard deviation), ς, to be an unknown constant along the strain axis, i.e. the measurement error does not depend on the observed strain or stress values. We therefore infer ς along with the other parameters. To enforce positivity of ς in the inference step, we chose to infer the natural logarithm of ς, with a uniform prior on [−5, 0] for log(ς). For the material property parameters, we choose uniform priors with ranges coinciding with those chosen to build the surrogate model in Eq. (30). As for the cross-sectional area correction factor, A, we utilized a uniform prior with support on [0, 2].

Having computed the model evidence for all 88 plausible models, we observed that the model evidence was maximized for model M_{EY_δ HKBA_δ}, and we thus compute Bayes factors with respect to this model, as reported in Table 1. We can see that model M_{EY_δ HKBA_δ} is the most likely model (based on model evidence), with the second-most likely model, M_{EY_δ HKB_δ A_δ}, being 40% less likely. The following observations are noteworthy:

• The 5-parameter model is the most likely physical model, in comparison to the 3- and 2-parameter models (as described in Sec. 3).

• The variability in the mechanical responses observed in experiments can be effectively and compactly modeled via an embedded error in the yield strength Y and the cross-sectional area correction factor A.
Physically, this result is plausible given the dimensional variations and the fact that the AM process is likely to affect the plastic response and unlikely to alter the effective elastic modulus.

• The data provide relatively weak evidence to suggest that the variability can be modeled by embedding the errors in the hardening modulus, H, saturation exponent, B, or Young's modulus, E. This too is plausible given the small variations in hardening and the trend in near-yield behavior; refer to Fig. 1c.

The previous analysis has resulted in an optimal model M_{EY_δ HKBA_δ} that strikes a balance between data-misfit and model simplicity (as indexed by the number of parameters and by which of them are allowed to be distributions).

[Table 1: Bayes factor B(M_i, M_{EY_δ HKBA_δ}) as defined in Eq. (22). The models compared are M_{EY_δ HKBA_δ} (Bayes factor equal to 1 by definition), M_{EY_δ HKB_δ A_δ}, M_{EY_δ H_δ KBA_δ}, M_{E_δ Y_δ HKB_δ A_δ}, M_{EY_δ HK_δ B_δ A_δ}, and M_{E_δ Y_δ HKBA_δ}.]

As part of the previous analysis, we sampled from the posterior parameter PDF using an adaptive Metropolis-Hastings Markov chain Monte Carlo sampling technique [Gamerman and Lopes [2006], Berg and Billoire [2008]]. Figure 4 shows the 1D and 2D marginal posterior parameter PDFs, in which δ_Y and δ_A denote the PCE coefficients that account for the embedded errors in the respective parameters. Note the unimodality of the PDFs, implying the existence of a global maximum-a-posteriori (MAP) estimate for the parameters. Additional insights can be drawn from further examination of the (marginal) PDFs in Fig. 4:

• The cross-sectional area correction factor A has a mean value of 0.76, with most of its posterior probability mass concentrated near that value.
This indicates that the post-processing carried out in converting the applied loads to engineering stresses relied upon over-estimates of the sample cross-sectional areas, as expected, given that the effective load-bearing area is at most the area given by the outer dimensions of the gauge section.

• There is a strong negative correlation between Young's modulus, E, and the cross-sectional area correction factor, A. This is to be expected given the basic relationship between force, area and stress and, also, the fact that the stress at any strain value is bounded within the envelope of variability.

• There is a strong negative correlation between the intensity of the embedded error in the yield strength, δ_Y, and that in the cross-sectional area correction factor, δ_A. This is logical since the data suggest a fixed level of variability in the engineering stress at any strain value, and both of these terms contribute to that total variability.

• Examining the marginal PDF of the intensity of the additive error, ς, we observe a relatively insignificant MAP estimate close to 0.02 GPa. This indicates that the embedded error terms are effective in capturing the variability in the mechanical response, with the embedded error in the yield strength having a MAP estimate close to 115 GPa (from the marginal posterior PDF of δ_Y).

We next examine the differences in the inferred posterior parameter PDFs between the additive and embedded error models. To reiterate, the key difference between the two approaches is the embedded errors, which explicitly allow the calibrated models to reflect the inherent variability in the mechanical responses using parametric variability. In contrast, calibration with solely an additive error attributes variability in the observed quantities (i.e. engineering stress) to an additive error term which is not connected directly to the variability of any particular parameter.
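The difference between the two formulations can be illustrated with a toy linear-hardening stress model (illustrative parameter values, not the calibrated ones): an embedded error in Y produces smooth, strongly correlated curve-to-curve variability, as a variable material property would, while an additive error produces uncorrelated scatter about a single curve:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear-hardening stress model: s(e) = E*e below yield, Y + H*(e - Y/E) above.
def stress(strain, E=200.0, Y=1.2, H=3.0):
    ey = Y / E
    return np.where(strain < ey, E * strain, Y + H * (strain - ey))

strain = np.linspace(0.0, 0.03, 60)

# Embedded error: Y itself is random, Y = Ybar + dY * xi.
xi = rng.uniform(-1, 1, 150)
embedded = np.array([stress(strain, Y=1.2 + 0.1 * x) for x in xi])

# Additive error: one fixed curve plus i.i.d. noise at every strain point.
additive = stress(strain) + rng.normal(0.0, 0.02, size=(150, strain.size))

# Deviations from the mean curve are strongly correlated across strain for the
# embedded model, and essentially uncorrelated for the additive one.
def corr_between(curves, i, j):
    d = curves - curves.mean(axis=0)
    return np.corrcoef(d[:, i], d[:, j])[0, 1]

i, j = 40, 55              # two post-yield strain indices
assert corr_between(embedded, i, j) > 0.9
assert abs(corr_between(additive, i, j)) < 0.3
```

This correlation structure is exactly what distinguishes realization-to-realization material variability from white measurement noise in the experimental curves.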
We focus on the two parameters selected to carry an embedded error in the optimal model according to the model selection study (see Sec. 5.2): the yield strength, Y, and the cross-sectional area correction factor, A. Fig. 5 shows the posterior PDFs for the two parameters obtained using the additive and embedded error approaches. We highlight the following key observations:

[Figure 4: 1D and 2D marginals of the posterior parameter PDF for model M_{EY_δ HKBA_δ}.]

• The embedded approach attributes a significant amount of variability to the yield strength Y based on the data, while the additive approach results in smaller uncertainty. In effect, the distribution in the additive approach converges to a delta distribution at the best average value for Y, since the parameter is treated as an unknown constant rather than a random variable. The width of the distribution in the additive approach characterizes the lack of data (not the inherent material variability), while the embedded approach converges to a (non-degenerate) distribution characterizing the range of Y that helps explain the observed variability.

• Likewise, the difference between the two approaches is readily apparent in the distributions of the cross-sectional area correction factor A shown in Fig. 5. Although the mean estimate for A is more or less the same for the two approaches, the uncertainty (reflected in the support of the PDFs) is larger for the embedded error approach and is interpretable as the variation of a physical parameter responsible for the inconsistent mechanical response.

In this section, we examine the predictions obtained using the optimal model (see the previous model selection sections) with embedded model error in the yield strength, δ_Y, and the cross-sectional area correction factor, δ_A. We also examine the predictions obtained using the 5-parameter model with a purely classical additive model error, M_{EY HKBA}.
[Figure 5: Comparison of posterior PDFs obtained using the additive (red, solid curves) and embedded (blue, dashed curves) error models for (a) the yield strength Y and (b) the area correction factor A.]

We obtain 150 posterior predictive realizations of the stress-strain curves from both models, as shown in Fig. 6, reflecting the uncertainty in the calibrated parameters and the embedded error terms (for model M_{EY_δ HKBA_δ}). The top row shows the results obtained by sampling the joint posterior PDF of the parameters α and pushing these samples through the model only, while the middle row shows the results with the contribution of the additive error term. These results highlight the differences between the additive and embedded error approaches in characterizing the effect of material variability on mechanical responses:

• It is evident from comparing Fig. 6a,c (additive error) and Fig. 6b,d (embedded error) to the data shown in Fig. 6e that the embedded error approach yields a suitable representation in which the variability of the response is determined by the variability of the material parameters (with the aid of embedded parametric distributions).

• The additive approach yields a tight envelope of predictions, and the full variability is only captured by the additive error term, which results in over-estimation of the predictive uncertainty.

• Comparing individual realizations to the curves obtained experimentally, it is clear that the embedded approach (with and without the additive error contribution) yields curves that match quite well the trends observed in the experiment, by appropriately characterizing material variability and measurement noise, while the classical method deviates considerably and is dominated by high-frequency, uncorrelated white noise.
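The mechanics of generating such predictive realizations can be sketched with an assumed (illustrative) parameter posterior and the same toy stress model used above: samples are pushed through the model alone, then the additive noise term is superposed, and the predictive variance decomposes into parametric plus additive contributions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy linear-hardening stress model (illustrative values).
def stress(strain, E, Y):
    ey = Y / E
    return np.where(strain < ey, E * strain, Y + 3.0 * (strain - ey))

strain = np.linspace(0.0, 0.03, 60)
n_real, noise_sd = 150, 0.02

# Stand-in posterior samples for (E, Y); in the paper these would come from MCMC.
E_s = rng.normal(200.0, 5.0, n_real)
Y_s = rng.normal(1.2, 0.05, n_real)

push = np.array([stress(strain, E, Y) for E, Y in zip(E_s, Y_s)])  # model only
full = push + rng.normal(0.0, noise_sd, size=push.shape)           # + additive

# At each strain, variances add: var(full) ~ var(push) + noise_sd^2,
# so the additive term's contribution is separable from the parametric one.
diff = full.var(axis=0) - push.var(axis=0)
assert abs(diff.mean() - noise_sd**2) < 1e-4
```

This separation is what the two rows of Fig. 6 display: the push-through-only envelope carries the parametric variability, and the additive term contributes only a thin, uncorrelated layer on top.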
[Figure 6: Sample posterior predictive realizations obtained using the 5-parameter model for the additive (a,c) and embedded (b,d) approach. The top row shows the results obtained by sampling the joint parameter posterior density and propagating these samples through the model only, while the middle row shows the results with the contribution of the additive error. The experimental observations are shown in (e) for comparison.]

We have presented a method which can model the variability of a material that is well described by existing plasticity models of the mean response, but contains microstructural variability leading to different macroscopic material properties. By leveraging the embedded error method of Sargsyan et al. [Sargsyan et al. [2015a]] (which, as mentioned, was originally developed to address model discrepancy), we can mathematically represent the variability in mechanical response through material and geometric parameters drawn from a well-calibrated joint distribution. This Bayesian approach was shown to be consistent with our understanding of microstructural variability and allows physical interpretation of the sources of variability through the parameters. These interpretations can be linked to processing through inference and intuition. The Bayesian methodology is appropriate for UQ studies requiring forward propagation of physical variability, and it also enables the utilization of Bayesian model selection to arrive at an optimal model that strikes a balance between data-misfit and model complexity.
The constitutive model we developed is amenable to non-intrusive sampling (and adaptable to direct evaluation in simulation codes that handle fields of distributions), and thus enables a robust design methodology that can predict performance margins with high confidence.

Another important contribution of this work is the illustration of the contrast between the proposed approach and commonly used uncertainty formulations. The standard, additive error formulation appropriately accounts for the uncertainty in the experiments arising from measurement error. Yet, in the case of inherent variability, it characterizes all the uncertainty as measurement error, which results in unwarranted confidence in the material properties and an inability to correctly and physically understand how that variability would manifest in applications. We have demonstrated that the embedded method accurately characterizes the aleatory uncertainty present in the experimental observations and enables engineering UQ analysis of material and geometric sources of uncertainty. It gives insight into which aspects of a homogeneous, macro-scale constitutive model are most strongly affected by microstructural variability, and it enables quantitative model selection. Moreover, it is capable of representing both significant external noise and inherent variability in a unified formulation.

In future work, we will extend the methodology to the post-necking failure behavior of similar materials.

Acknowledgments
This work was supported by the LDRD program at Sandia National Laboratories, and its support is gratefully acknowledged. B.L. Boyce would like to acknowledge the support of the Center for Integrated Nanotechnologies. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
References
C. C. Battaile, J. M. Emery, L. N. Brewer, and B. L. Boyce. Crystal plasticity simulations of microstructure-induced uncertainty in strain concentration near voids in brass. Philosophical Magazine, 95(10):1069-1079, 2015.

J. Beck. Bayesian system identification based on probability logic. Structural Control and Health Monitoring, 17(7):825-847, Nov. 2010. ISSN 1545-2255. doi: 10.1002/stc.424. URL http://doi.wiley.com/10.1002/stc.424.

B. A. Berg and A. Billoire. Markov chain Monte Carlo simulations. Wiley Online Library, 2008.

J. Berger and L. Pericchi. The intrinsic Bayes factor for model selection and prediction. Journal of the American Statistical Association, 91:109-122, 1996.

B. L. Boyce, B. C. Salzbrenner, J. M. Rodelas, L. P. Swiler, J. D. Madison, B. H. Jared, and Y.-L. Shen. Extreme-value statistics reveal rare failure-critical defects in additive manufacturing. Advanced Engineering Materials.

R. Dingreville, C. C. Battaile, L. N. Brewer, E. A. Holm, and B. L. Boyce. The effect of microstructural representation on simulations of microplastic ratcheting. International Journal of Plasticity, 26(5):617-633, 2010.

J. M. Emery, R. V. Field, J. W. Foulk, K. N. Karlson, and M. D. Grigoriu. Predicting laser weld reliability with stochastic reduced-order models. International Journal for Numerical Methods in Engineering, 103(12):914-936, 2015.

R. Field, M. Grigoriu, and J. Emery. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems. Probabilistic Engineering Mechanics, 41:60-72, 2015.

W. E. Frazier. Metal additive manufacturing: a review. Journal of Materials Engineering and Performance, 23(6):1917-1928, 2014.

D. Gamerman and H. F. Lopes. Markov chain Monte Carlo: stochastic simulation for Bayesian inference. CRC Press, 2006.

R. Ghanem and P. Spanos. Stochastic Finite Elements: A Spectral Approach. Springer Verlag, New York, 1991.

L. Hakim, G. Lacaze, M. Khalil, K. Sargsyan, H. Najm, and J. Oefelein. Probabilistic parameter estimation in a 2-step chemical kinetics model for n-dodecane jet autoignition. Combustion Theory and Modelling, 22(3):446-466, 2018.

R. Hill. Elastic properties of reinforced solids: some theoretical principles. Journal of the Mechanics and Physics of Solids, 11(5):357-372, 1963.

R. E. Kass and A. E. Raftery. Bayes factors. Journal of the American Statistical Association, 90(430):773-795, 1995. ISSN 0162-1459.

M. C. Kennedy and A. O'Hagan. Bayesian calibration of computer models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 63(3):425-464, 2001.

M. Khalil and H. N. Najm. Probabilistic inference of reaction rate parameters from summary statistics. Combustion Theory and Modelling, 22(4):635-665, 2018.

M. Khalil, G. Lacaze, J. C. Oefelein, and H. N. Najm. Uncertainty quantification in LES of a turbulent bluff-body stabilized flame. Proceedings of the Combustion Institute, 35(2):1147-1156, 2015.

S. Konishi and G. Kitagawa. Information Criteria and Statistical Modeling (Springer Series in Statistics). Springer, Oct. 2007.

O. Le Maître and O. M. Knio. Spectral Methods for Uncertainty Quantification: with Applications to Computational Fluid Dynamics. Springer, New York, NY, 2010.

Q. Liu and D. A. Pierce. A note on Gauss-Hermite quadrature. Biometrika, 81(3):624-629, 1994.

K. K. Mandadapu, A. Sengupta, and P. Papadopoulos. A homogenization method for thermomechanical continua using extensive physical quantities. Proc. R. Soc. A, 468(2142):1696-1715, 2012.

Y. M. Marzouk, H. N. Najm, and L. A. Rahn. Stochastic spectral methods for efficient Bayesian solution of inverse problems. Journal of Computational Physics, 224(2):560-586, 2007a.

Y. M. Marzouk, H. N. Najm, and L. A. Rahn. Stochastic spectral methods for efficient Bayesian solution of inverse problems. Journal of Computational Physics, 224(2):560-586, 2007b.

D. McDowell, S. Ghosh, and S. Kalidindi. Representation and computational structure-property relations of random media. JOM Journal of the Minerals, Metals and Materials Society, 63(3):45-51, 2011.

M. Muto and J. Beck. Bayesian updating and model class selection for hysteretic structural models using stochastic simulation. Journal of Vibration and Control, 14(1-2):7-34, 2008.

J. C. Naylor and A. F. Smith. Applications of a method for the efficient computation of posterior distributions. Applied Statistics, pages 214-225, 1982.

S. Nemat-Nasser. Averaging theorems in finite deformation plasticity. Mechanics of Materials, 31(8):493-523, 1999.

F. Rizzi, O. Knio, H. Najm, B. Debusschere, K. Sargsyan, M. Salloum, and H. Adalsteinsson. Uncertainty quantification in MD simulations. Part II: Inference of force-field parameters. SIAM J. Multiscale Model. Simul., 10(4):1460-1492, 2012.

F. Rizzi, R. E. Jones, B. J. Debusschere, and O. M. Knio. Uncertainty quantification in MD simulations of concentration driven ionic flow through a silica nanopore. II. Uncertain potential parameters. The Journal of Chemical Physics, 138(19):194105, 2013.

C. Safta, M. Blaylock, J. Templeton, S. Domino, K. Sargsyan, and H. Najm. Uncertainty quantification in LES of channel flow. International Journal for Numerical Methods in Fluids, 83(4):376-401, 2017.

A. G. Salinger, R. A. Bartlett, A. M. Bradley, Q. Chen, I. P. Demeshko, X. Gao, G. A. Hansen, A. Mota, R. P. Muller, E. Nielsen, et al. Albany: Using component-based design to develop a flexible, generic multiphysics analysis code. International Journal for Multiscale Computational Engineering, 14(4), 2016.

M. Salloum and J. A. Templeton. Inference and uncertainty propagation of atomistically-informed continuum constitutive laws, part 1: Bayesian inference of fixed model forms. International Journal for Uncertainty Quantification, 4(2), 2014a.

M. Salloum and J. A. Templeton. Inference and uncertainty propagation of atomistically informed continuum constitutive laws, part 2: Generalized continuum models based on Gaussian processes. International Journal for Uncertainty Quantification, 4(2), 2014b.

R. Sandhu, C. Pettit, M. Khalil, D. Poirel, and A. Sarkar. Bayesian model selection using automatic relevance determination for nonlinear dynamical systems. Computer Methods in Applied Mechanics and Engineering, 320:237-260, 2017.

K. Sargsyan, H. Najm, and R. Ghanem. On the statistical calibration of physical models. International Journal of Chemical Kinetics, 47(4):246-276, 2015a.

K. Sargsyan, H. N. Najm, and R. Ghanem. On the statistical calibration of physical models. International Journal of Chemical Kinetics, 47(4):246-276, 2015b.

J. Simo and T. Hughes. Computational Inelasticity. Springer, New York, NY, 1998.

J. C. Simo. A framework for finite strain elastoplasticity based on maximum plastic dissipation and the multiplicative decomposition: Part I. Continuum formulation. Computer Methods in Applied Mechanics and Engineering, 66(2):199-219, 1988.

D. S. Sivia. Data Analysis: A Bayesian Tutorial. Oxford Science, 1996.

R. C. Smith. Uncertainty Quantification: Theory, Implementation, and Applications, volume 12. SIAM, 2013.

I. Verdinelli and L. Wasserman. Bayes factors, nuisance parameters, and imprecise tests. In J. M. Bernardo, J. O. Berger, A. P. Dawid, and A. F. M. Smith, editors, Bayesian Statistics 5, pages 765-771. Oxford University Press, London, 1996.

N. Wiener. The homogeneous chaos. Am. J. Math., 60:897-936, 1938.

D. Xiu. Numerical Methods for Stochastic Computations: A Spectral Method Approach. Princeton University Press, 2010.

D. Xiu and G. Karniadakis. The Wiener-Askey polynomial chaos for stochastic differential equations. SIAM Journal on Scientific Computing, 24(2):619-644, 2002.