A surrogate-based optimal likelihood function for the Bayesian calibration of catalytic recombination in atmospheric entry protection materials
Anabel del Val (a,b,*), Olivier P. Le Maître (c), Thierry E. Magin (a), Olivier Chazot (a), Pietro M. Congedo (b)

(a) von Karman Institute for Fluid Dynamics, Chaussée de Waterloo 72, 1640 Rhode-St-Genèse, Belgium
(b) INRIA, Centre de Mathématiques Appliquées, École Polytechnique, IPP, Route de Saclay, 91128 Palaiseau Cedex, France
(c) CNRS, INRIA, Centre de Mathématiques Appliquées, École Polytechnique, IPP, Route de Saclay, 91128 Palaiseau Cedex, France
Abstract
This work deals with the inference of catalytic recombination parameters from plasma wind tunnel experiments for reusable thermal protection materials. One of the critical factors affecting the performance of such materials is the contribution to the heat flux of the exothermic recombination reactions at the vehicle surface. The main objective of this work is to develop a dedicated Bayesian framework that allows us to compare uncertain measurements with model predictions that depend on the catalytic parameter values. Our framework accounts for uncertainties involved in the model definition and incorporates all measured variables with their respective uncertainties. The physical model used for the estimation consists of a 1D boundary layer solver along the stagnation line. The chemical production term included in the surface mass balance depends on the catalytic recombination efficiency. As not all the different quantities needed to simulate a reacting boundary layer can be measured or known (such as the flow enthalpy at the inlet boundary), we propose an optimization procedure, built on the construction of the likelihood function, to determine their most likely values based on the available experimental data. This procedure avoids the need to introduce any a priori estimates on the nuisance quantities, namely the boundary layer edge enthalpy, wall temperatures, and static and dynamic pressures, which would entail the use of very wide priors. Furthermore, we substitute the optimal likelihood of the experimental measurements with a surrogate model to make the inference procedure both faster and more robust. We show that the resulting Bayesian formulation yields meaningful and accurate posterior probability distributions of the catalytic parameters, with a reduction of more than 20% of the standard deviation with respect to previous works. We also study the implications of an extension of the experimental procedure on the improvement of the quality of the inference.

Keywords:
Uncertainty Quantification, Bayesian Inference, Plasma Flows, Catalysis, Thermal Protection Systems, Surrogate Model, Markov Chain Monte Carlo
1. Introduction
Space travel, from its beginnings in Low Earth Orbit (LEO) to the exploration of our Solar System, has led to countless scientific advancements in what is one of the most challenging undertakings of humankind. Venturing into Space requires large amounts of kinetic and potential energy to reach orbital and interplanetary velocities. All this energy is dissipated when a space vehicle enters dense planetary atmospheres [1]. The bulk of this energy is exchanged during the entry phase by converting the kinetic energy of the vehicle into thermal energy in the surrounding atmosphere through the formation of a strong bow shock ahead of the vehicle [2]. The interaction between the dissociated gas and a reusable protection system is governed by the material behavior, which acts as a catalyst for recombination reactions of the atomic species in the surrounding gas mixture [3]. The determination of the catalytic properties of thermal protection materials is a complex task subject to experimental and model uncertainties, and the design and performance of reusable atmospheric entry vehicles must account for these uncertain characterizations.

* Corresponding author: von Karman Institute for Fluid Dynamics, Chaussée de Waterloo 72, 1640 Rhode-St-Genèse, Belgium.
Email address: [email protected] (A. del Val)

It is relatively common when dealing with complex physical phenomena to resort to simple, non-intrusive a priori forward uncertainty propagation techniques [4, 5]. These techniques assume a priori probability distributions for the main model parameters. Sensitivity analyses are then performed to discriminate the important ones. They also assume that the exact value is sufficiently well known and lies within the considered uncertainty range. These methods do not use any experimental observation to calibrate such parameters. The interest of using experimental information is that it leads to objective uncertainty levels and provides likely values rather than a priori guesses, achieving better and more reliable predictions.

In the present work, we explore the possibility of exploiting experimental data resulting from measurements in [6] for the purpose of inferring surface recombination efficiencies. These parameters play an important role in the prediction of the thermal response of selected protection materials, such as ceramic matrix composites. The inference relies on a Bayesian approach, which has the advantage of providing a complete characterization of the parameters' uncertainty through their resulting posterior distribution. While conceptually simple, performing a Bayesian inference raises several computational and practical difficulties at every one of its constitutive steps [7, 8]. The main issue in our problem is related to the appearance of nuisance parameters within the model, such as pressures, wall temperatures and the boundary layer edge enthalpy. These particular parameters are needed to perform the inference, but we are not explicitly interested in obtaining their distributions, nor can we measure all of them.
Traditional Bayesian approaches deal with this problem by prescribing prior distributions on such parameters, at the expense of some of the observations being consumed to evaluate these nuisance parameter posteriors. Consequently, it is important to examine their impact on the quality of the inference [9].

In particular, knowing how the experimental procedure is carried out is fundamental for the efficient formulation of the inference method. Testing for the characterization of a given protection material requires the use of an additional material which assists in the testing (referred to as the reference material) and whose catalytic properties are better known. The Thermal Protection System (TPS) material in question and the reference material are then tested under the same experimental conditions. The basis of this experimental approach lies in the fact that we can have accurate knowledge of the inlet boundary condition if we immerse in it a well-characterized material whose response, which is directly measured, depends on those inlet conditions. This knowledge is then used to assess the properties of the TPS material, the main objective of the experiment.

The inversion problem for the catalytic properties was first proposed in [10]. In it, Sanson et al. point out that the parameter for the reference material is not always perfectly known, and the testing conditions are subject to uncertainties. These conditions are consequently treated as nuisance parameters in the inference problem by prescribing their priors. We highlight difficulties faced in this past work and propose a new methodology. Our formulation involves a particular treatment of the nuisance parameters, whose uncertainty is reduced by solving an auxiliary maximum likelihood problem. This maximum likelihood problem alleviates the need to sample the nuisance parameters, and can thus improve the computational efficiency of the inference, providing more consistent and accurate posterior distributions.
Solving this auxiliary problem and sampling the posterior distribution is expensive, as it requires multiple evaluations of the boundary layer equations. To mitigate this issue, we use a surrogate model of the optimal likelihood function, making the whole inference process faster and allowing for extensive exploration of the posterior distribution. The use of this methodology, novel in the field of TPS characterization, leads to an improved exploitation of the experimental measurements and, as a result, a better estimation of the catalytic parameters for a wide range of conditions. Further, the developments proposed in this paper have the potential to quantitatively assess several ways in which to improve the experimental procedure and achieve a better understanding of the catalytic phenomena.

The article is organized as follows. Section 2 describes the experimental set-up together with the model-based simulations. In Section 3, the Bayesian framework is presented in detail. Section 4 looks into a case study with real experimental data to assess the validity of the Bayesian approach and study the different surrogate models for the log-likelihood approximation. Section 5 extends the methodology to other experimental cases to assess what the Bayesian method can bring to the problem of testing catalytic materials. Finally, Section 6 summarizes the outcomes of the analyses and discusses the possible perspectives of the presented approach.
2. Experimental set-up and theoretical models
In this section, we briefly discuss the theoretical model and recall the experiments and measured quantities considered in this work. As a simple model of TPS material, we define the catalytic coefficient γ as the ratio of the number of atoms that recombine on the material surface over the total number of atoms that hit it. We assume the same recombination probability for the nitrogen and oxygen species constituting the air plasma, leading to a single catalytic parameter to characterize the material under atmospheric entry conditions. Model-based numerical simulations include this parameter to account for catalytic effects in the prediction of relevant quantities. The estimation of γ also requires performing experiments in conditions relevant and similar to the environment faced during atmospheric entry, so that this information can be fed to the model-based simulations to provide boundary conditions and closure. In all this sophisticated machinery, experiments and models are intertwined in a complex fashion, and it is essential to carefully assess the effect of possible uncertainties on the inferred quantities.

We consider the experimental set-up of the Plasmatron facility at the von Karman Institute (VKI), an Inductively-Coupled Plasma (ICP) wind tunnel powered by a high-frequency, high-power, high-voltage (400 kHz, 1.2 MW, 2 kV) generator [11]. Figure 1 schematizes the Plasmatron and its instrumentation for catalytic property determination. The plasma flow is generated by the induction of electromagnetic currents within the testing gas in the plasma torch (right part of the scheme in Fig. 1); this process creates a high-purity plasma flow which leaves the testing chamber through the exhaust (left side of the scheme).
In a typical experiment, one sequentially exposes two probes to the plasma flow: a reference probe made of a well-known material (copper), having a catalytic coefficient γ_ref, and a test probe which holds a sample of the TPS material with the unknown catalytic coefficient γ_TPS to be inferred. The following instruments equip the Plasmatron. For pressures, a water-cooled Pitot probe measures the dynamic pressure P_d within the plasma jet, and an absolute pressure transducer records the static pressure P_s in the Plasmatron chamber. The reference probe is a hemispherical device (25 mm radius) equipped with a water-cooled copper calorimeter at the center of its front face. The calorimeter has a cooling water system that maintains the surface temperature T_w^ref of the reference probe. The heat flux q_w^ref is deduced from the mass flow (controlled by a calibrated rotameter) circulating in the cooling system and the inlet/outlet water temperature difference, measured by thermocouples, resulting from the exposure to the plasma flow. For the test probe, we directly measure the heat flux q_w^TPS and surface temperature T_w^TPS. The determination of the heat flux assumes radiative equilibrium at the surface, with the relation

q_w^TPS = σ ε (T_w^TPS)^4,

where σ is the Stefan-Boltzmann constant, ε is the emissivity measured with an infrared radiometer, and T_w^TPS is the wall temperature, which is measured using a pyrometer. More details on how these measuring devices work can be found in [12].

The underlying idea of the experimental procedure is to first perform measurements of the wall temperature T_w^ref, heat flux q_w^ref and pressures P_d and P_s with the reference probe set in the plasma jet. As these measurements depend on the state of the free stream flow, in particular on the enthalpy H_δ at the boundary layer edge, the free stream conditions can be deduced if one knows the catalytic coefficient γ_ref of the reference probe.
Then, in a second stage, the test probe is set in place of the reference probe in the plasma jet. The corresponding steady state wall temperature T_w^TPS and heat flux q_w^TPS are measured and, assuming that the free stream flow conditions have not changed, the catalytic coefficient γ_TPS of the test probe can be inferred.
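The radiative-equilibrium relation above can be sketched in a few lines; the temperature and emissivity values below are illustrative assumptions, not measured data:

```python
# Radiative-equilibrium wall heat flux, q_w = sigma * eps * T_w**4.
# The numerical inputs are illustrative, not taken from the experiments.

SIGMA_SB = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def radiative_heat_flux(T_w, emissivity):
    """Wall heat flux [W/m^2] of a surface at temperature T_w [K] in
    radiative equilibrium, with the emissivity given by the radiometer."""
    return SIGMA_SB * emissivity * T_w ** 4

# A 1200 K wall with an assumed emissivity of 0.85 radiates on the order
# of 100 kW/m^2, comparable to the heat fluxes reported in Section 4.
q = radiative_heat_flux(1200.0, 0.85)
```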
Figure 1: Schematic view of the experimental set-up for the Plasmatron facility.

To identify the TPS catalytic property γ_TPS, we simulate the chemically reacting boundary layer in the vicinity of the probe stagnation point [2]. The Boundary Layer (BL) code solves the full Navier-Stokes equations along the stagnation line. To solve the system, we need closure models for the thermodynamic and transport properties as well as the chemical production terms of the different species. Transport fluxes are derived from kinetic theory using the Chapman-Enskog method for the solution of the Boltzmann equation [13, 14]. Diffusion fluxes are computed through the generalized Stefan-Maxwell equations, an approach which by derivation is exactly equivalent to a description based on the multicomponent diffusion matrix [15, 16, 17], therefore providing equivalent solutions for the diffusion fluxes. For the homogeneous chemistry in the gas phase, the Law of Mass Action is used to compute production rates as proportional to the product of the reactant densities raised to their stoichiometric coefficients [18]. A 7-species air mixture and chemical rates from Dunn and Kang [19] in the form of modified Arrhenius laws are considered. The thermodynamic properties, such as the enthalpy, are derived from statistical mechanics [2, 20] for a reacting mixture of perfect gases, assuming thermal equilibrium and chemical non-equilibrium. Apart from the closure models, the parabolic nature of the BL equations requires the imposition of two boundary conditions: the external flow conditions at the boundary layer edge, namely pressure, temperature and velocity, and the conditions at the material surface. A mass balance is imposed to account for the production and depletion of species as a consequence of their interactions with the material surface. Recombination reactions can be triggered depending on the catalytic activity of such material [21].
We impose a no slip condition at the wall for the momentum equation and impose a wall temperature for the energy flux. More details about the derivation, coordinate transformations and numerical implementation of the BL code are available in the work of Barbante [22]. In summary, the predictive quantity of the code is the wall heat flux

q_w = q_w( γ, T_w, P_δ, H_δ, δ, ∂u_δ/∂x, v_δ ∂/∂y(∂u_δ/∂x) ),   (1)

which depends on the free stream conditions (subscript δ), the thickness of the boundary layer δ, the catalytic parameter of the material γ and the surface temperature T_w. An auxiliary magnetohydrodynamic axisymmetric simulation assuming Local Thermodynamic Equilibrium (LTE) is performed to simulate the flow in the torch and chamber of the wind tunnel [23]. Relying on the knowledge of the operating conditions of the Plasmatron, such as electric power, injected mass flow, static pressure and probe geometry, this 2D simulation lets us compute non-dimensional parameters that define the momentum influx to the boundary layer (the interested reader is directed to [24]). To derive the computation of the velocity derivative ∂u_δ/∂x needed as input in Eq. (1), we also need to incorporate the measurement of the dynamic pressure P_d along with the non-dimensional parameters. The prediction we seek to match with the experimental data is now recast as

q_w = q_w( γ, T_w, P_δ, H_δ, P_d, Π_1, Π_2, Π_3 ),   (2)

where Π_1, Π_2, Π_3 are the non-dimensional parameters and P_δ is taken as the chamber static pressure P_s. These parameters, together with the dynamic pressure P_d, define the boundary layer thickness, velocity and derivatives at the edge. Even though the number of input parameters is larger in Eq.
(2), this formulation is more useful given that we have replaced three unknown and not easily measurable quantities by a quantity that can be directly measured (P_d) and three parameters that do not depend on the local flow conditions and can be taken as known constants for each condition for the purpose of our Bayesian formulation [25].

The typical procedure to retrieve the catalytic parameter γ is the following. In the first step, for given non-dimensional parameters Π_1, Π_2, Π_3, measured wall temperature T_w^ref, static pressure P_s, dynamic pressure P_d and reference catalytic parameter γ_ref, the enthalpy H_δ that matches the observed heat flux q_w^ref is iteratively determined. When the enthalpy H_δ is determined, the code can be used, in a second step, to find the value of the catalytic parameter γ_TPS that yields the observed heat flux q_w^TPS for the measured wall temperature T_w^TPS, pressures P_s and P_d, and the rebuilt enthalpy H_δ obtained in the previous step.

3. Bayesian framework

In this section we derive the Bayesian formulation of the inference problem in subsection 3.1. We then discuss in subsection 3.2 the construction of a surrogate model to approximate the likelihood function, and finally, briefly describe the procedure used to sample the posterior distribution in subsection 3.3.
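The classical two-step rebuilding procedure recalled above can be sketched with a toy stand-in for the BL solver; the model form `q_w_toy` and all numerical values are assumptions for illustration only, not the real code:

```python
def q_w_toy(H_delta, gamma):
    """Toy stand-in for the boundary-layer solver: heat flux increasing in
    both the edge enthalpy H_delta [J/kg] and the catalytic efficiency.
    The functional form and coefficients are illustrative assumptions."""
    return 1e-3 * H_delta * (0.3 + 50.0 * gamma)

def bisect(f, lo, hi, tol=1e-12):
    """Bisection root finder; assumes f(lo) and f(hi) have opposite signs."""
    f_lo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) * f_lo > 0.0:
            lo = mid          # root lies in the upper half of the bracket
        else:
            hi = mid          # root lies in the lower half of the bracket
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Step 1: rebuild H_delta so the model matches the reference-probe heat flux.
gamma_ref, q_ref_meas = 0.01, 195e3   # assumed gamma_ref; fluxes in W/m^2
H_rebuilt = bisect(lambda H: q_w_toy(H, gamma_ref) - q_ref_meas, 1e3, 1e9)

# Step 2: with H_delta frozen, solve for the TPS catalytic parameter.
q_tps_meas = 91.7e3
gamma_tps = bisect(lambda g: q_w_toy(H_rebuilt, g) - q_tps_meas, 1e-4, 1.0)
```

The Bayesian treatment developed next replaces this deterministic two-step inversion by a joint probabilistic formulation.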
3.1. Bayesian formulation

The inference of the model parameters uses the Bayes formula, which can be generically written as

P(Q|M) = L(M|Q) P(Q) / P_M(M).   (3)

In (3), we have denoted Q the vector of model parameters to be inferred, M the vector of measurements or observations, P(Q) the prior distribution of the parameters, L(M|Q) the likelihood of the measurements, P(Q|M) the posterior distribution of Q, and P_M the evidence or marginal likelihood, that is, the probability that the measurements are obtained under the considered model. In our context, we have M = (P_s^meas, P_d^meas, q_w^ref,meas, q_w^TPS,meas, T_w^ref,meas, T_w^TPS,meas). Classically, the likelihood measures the discrepancies between the measurements in M and the corresponding model predictions. The issue here is that the model predictions are not just functions of the catalytic coefficients γ = (γ_ref, γ_TPS), but also depend on all the inputs of the BL code: the pressures P_s, P_d, the wall temperatures T_w^ref, T_w^TPS and the boundary layer edge enthalpy H_δ. The pressures and wall temperatures are measured in the experiment, but only with limited precision, while the enthalpy H_δ is simply not known. Consequently, there may be zero, or multiple, boundary layer edge conditions consistent with the measurements. Since the boundary layer edge conditions cannot be completely characterized, the remaining uncertainty should be accounted for when inferring the test probe catalytic coefficient.

One possibility to handle this issue is to consider the whole set of uncertain quantities, not just the quantities of interest γ_ref and γ_TPS, but also the so-called nuisance parameters. In that case, we define the vector of model parameters Q = (γ_ref, γ_TPS, T_w^ref, T_w^TPS, P_s, P_d, H_δ) in the inference problem.
The introduction of the nuisance parameters induces several difficulties related to the necessity of specifying their prior distributions, the increased dimensionality of the inference space, and the consumption of information for the inference of the nuisance parameters. This last issue is detrimental to learning the parameters of interest. In [10], non-informative priors were used for all the nuisance parameters. This approach only approximates the posterior of Q including the nuisance parameters, and the influence of the (unknown) prior densities of these parameters is unclear. Moreover, part of the ability to effectively learn from the experimental data is lost. In the following, we derive an alternative formulation for the joint inference of the two catalytic coefficients γ = (γ_ref, γ_TPS). Specifically, we consider the following Bayes formula

P(γ|M) = L(M|γ) P(γ) / P(M),   (4)

where, as before, L(M|γ) refers to the likelihood of the measurements in M. This formulation only depends on the two catalytic coefficients (γ_ref, γ_TPS) and not on the other nuisance parameters. As a result, only the prior P(γ) is needed.

Our objective is to design a reduced likelihood function which does not involve any nuisance parameters. As stated before, the prediction of the heat fluxes involves not only the catalytic coefficients γ_ref, γ_TPS, but also P_s, P_d, T_w^ref, T_w^TPS and H_δ. Assuming independent unbiased Gaussian measurement errors with standard deviations σ, the full likelihood of M with the nuisance parameters would read

L(M|Q) = exp[ -(P_s^meas - P_s)^2 / (2 σ_Ps^2) ] × exp[ -(P_d^meas - P_d)^2 / (2 σ_Pd^2) ]
       × ∏_{i ∈ {ref, TPS}} exp[ -(q_w^i,meas - q_w^i(H_δ, P_s, P_d, T_w^i, γ_i))^2 / (2 σ_qw^2) - (T_w^i,meas - T_w^i)^2 / (2 σ_Tw^2) ].   (5)

In this likelihood, the dependencies on γ_ref and γ_TPS are implicitly contained in the heat flux terms.
To reduce the dependencies of the likelihood to just the parameter γ, we propose to set the nuisance parameters Q \ γ to the values that maximize the likelihood (5). In the following, we denote H_δ^opt, P_s^opt, P_d^opt, T_w^i,opt the maximizers of (5). Note that these optima are functions of the catalytic coefficients. We shall also denote q_w^i,opt(γ) the corresponding model predictions of the heat fluxes for each probe. With these optimal values for the nuisance parameters, we define the optimal likelihood as

L_opt(M|γ) = exp[ -(P_s^meas - P_s^opt(γ))^2 / (2 σ_Ps^2) ] × exp[ -(P_d^meas - P_d^opt(γ))^2 / (2 σ_Pd^2) ]
           × ∏_{i ∈ {ref, TPS}} exp[ -(q_w^i,meas - q_w^i,opt(γ))^2 / (2 σ_qw^2) - (T_w^i,meas - T_w^i,opt(γ))^2 / (2 σ_Tw^2) ],   (6)

where the dependence of the optimal values on the two material properties has been made explicit for clarity. Given M and a value for the couple of catalytic coefficients, the optimal nuisance parameters and associated heat fluxes are determined using the BL code. The procedure for this optimization is the Nelder-Mead algorithm [26], which is a gradient-free method requiring only evaluations of the BL model solution. Typically, a few hundred resolutions of the BL model are needed to converge to the optimum of (5). The computational cost of the optimization prevents us from using this approach directly to draw samples of γ from their posterior distribution, and this fact motivates the approximation of the optimal (log-)likelihood in (6) as discussed in Subsection 3.2.
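A minimal sketch of this profiling step, assuming scipy's Nelder-Mead implementation and a cheap stub in place of the BL solver; the stub model, the measured values and the error magnitudes are illustrative assumptions, and only the reference-probe terms of (5) are retained:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative measured values and error magnitudes (loosely following the
# case study of Section 4); the stub below is NOT the real BL solver.
MEAS = {"Ps": 1300.0, "Pd": 75.0, "q_ref": 195e3, "T_ref": 350.0}
SIG = {"Ps": 1.3, "Pd": 1.5, "q": 6.5e3, "T": 11.7}

def q_w_stub(H, Ps, gamma):
    """Cheap stand-in for the BL solver, H in MJ/kg (assumed model form)."""
    return 1e3 * H * (0.3 + 50.0 * gamma) * (Ps / 1300.0) ** 0.25

def neg_log_like(nuis, gamma_ref):
    """Negative log of the reference-probe part of the likelihood (5)."""
    H, Ps, Pd, Tw = nuis
    r = ((MEAS["Ps"] - Ps) / SIG["Ps"]) ** 2
    r += ((MEAS["Pd"] - Pd) / SIG["Pd"]) ** 2
    r += ((MEAS["q_ref"] - q_w_stub(H, Ps, gamma_ref)) / SIG["q"]) ** 2
    r += ((MEAS["T_ref"] - Tw) / SIG["T"]) ** 2
    return 0.5 * r

def optimal_log_like(gamma_ref):
    """Profile out the nuisance parameters (H, Ps, Pd, Tw) with Nelder-Mead,
    mirroring the construction of the optimal likelihood (6)."""
    x0 = np.array([200.0, MEAS["Ps"], MEAS["Pd"], MEAS["T_ref"]])
    res = minimize(neg_log_like, x0, args=(gamma_ref,), method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-14,
                            "maxiter": 20000, "maxfev": 20000})
    return -res.fun   # log L_opt(M | gamma), up to an additive constant
```

Each call to `optimal_log_like` triggers a full inner optimization, which is exactly why the surrogate model of Subsection 3.2 is needed.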
To complete the Bayesian formulation, we now discuss the selection of the prior for the catalytic coefficients γ. We start by observing that, although it was assumed that the reference probe is well characterized when designing the two-probe experiment, the two coefficients γ_ref and γ_TPS play a similar role in the expression of the likelihood in (6). In fact, the observations should contribute to learning about both material properties. In other words, the differences in the knowledge of γ should be reflected by their distinct priors and not in the design of the likelihood. Therefore, it is important to select priors that fairly account for the initial beliefs in the values of the catalytic coefficients. In this case, we have to be cautious with our choice. Considering first the catalytic property of the reference calorimeter, previous works [27, 28, 29, 30, 31, 32, 33, 34, 35, 36] show that the a priori knowledge of γ_ref is actually quite poor: values proposed in the literature vary significantly from one experiment to another. Furthermore, γ_ref has been reported for a limited number of conditions, leaving us with large prior uncertainties since in our experiment the boundary layer edge conditions are unknown too. Similarly, the initial knowledge of γ_TPS is poor. For instance, previous works (e.g. [6]) show that the value of γ_TPS can span two orders of magnitude depending on the testing conditions. To conclude, constructing a sharp prior distribution for γ_TPS on the basis of previous works is difficult, while assuming a better knowledge of γ_ref is not realistic. For all these reasons, we decided in this work to consider independent priors with initial ranges spanning a few orders of magnitude, stating bounds on plausible values:

10^-4 ≤ γ_ref, γ_TPS ≤ 1.

The lower and upper bounds were set to encompass values proposed in the literature and to ensure that they contain the values to be inferred.
Based on the proposed bounds, the last step to derive the prior consists in specifying the distribution within the range. Here, instead of using a non-informative prior where any value is as likely as any other (i.e., a uniform prior), we decided to go for log-uniform distributions,

log10(γ_TPS), log10(γ_ref) ~ U(-4, 0),

which are better suited when the priors range over several orders of magnitude. The theoretical models describing the chemically reacting boundary layer, together with the available experimental data, are integrated in the Bayesian framework for the inference of γ. In the next subsection, we describe how we reduce the computational complexity inherent to the sampling of the posterior. Specifically, we rely on a surrogate model for the log-likelihood function to alleviate most of the computational burden.

3.2. Surrogate model for the log-likelihood function

The log-likelihood function must be evaluated multiple times when sampling the posterior distribution using MCMC methods. Since an evaluation of the log-likelihood requires many resolutions of the reacting boundary layer model to determine the optimum boundary layer edge conditions, direct sampling strategies based on the full model would be too costly. To overcome this issue, the logarithm of the likelihood function (6) is approximated by a surrogate model whose evaluations are computationally cheap.
To construct a surrogate model of the log-likelihood, we first introduce new canonical random variables ξ = (ξ_1, ξ_2) for the parametrization of the catalytic coefficients. We set ξ to be uniformly distributed over the unit square: ξ ~ U[0,1]^2. Then, we fix γ_TPS(ξ) = 10^(-4 ξ_1) and γ_ref(ξ) = 10^(-4 ξ_2), such that γ_TPS(ξ) and γ_ref(ξ) are independent, identically distributed, and follow log-uniform distributions with range [10^-4, 1]. The posterior of ξ then reads

P(ξ|M) ∝ L_opt(M|γ(ξ)) P(ξ),   with P(ξ) = 1 for ξ ∈ [0,1]^2 and 0 otherwise.   (7)

We seek to construct a surrogate of the optimal likelihood L_opt(M|γ(ξ)) with this parametrization. In particular, we decided to proceed with the log-likelihood instead of the likelihood, as it ensures the positivity of the approximation. More precisely, we aim for a surrogate model of Y(ξ) defined by

Y(ξ) := log( L_opt(M|γ(ξ)) ).

Below, we propose to use a Gaussian Process (GP) model to approximate Y(ξ). GP models [37] have been widely used in uncertainty propagation, sensitivity analysis, optimization and inverse problems [38]. Due to their statistical nature, a GP provides a measure of the uncertainty (variance) in the prediction. The main premise of the GP lies in the assumption that the function to be approximated is the realization of a Gaussian process characterized by its mean µ(ξ) and two-point covariance function C_GP(ξ, ξ'). Then, from the observation of the function values Y^(i) at the sample points ξ^(i), one can derive the posterior distribution of the GP model [37], and evaluate the GP mean and variance at any new point ξ. The selection of the prior of the GP model is a crucial step.
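The change of variables γ(ξ) can be sketched as follows, assuming the [10^-4, 1] bounds introduced above:

```python
import numpy as np

def gamma_from_xi(xi):
    """Map canonical xi ~ U[0,1] to a log-uniform catalytic parameter on
    [1e-4, 1]; log10(gamma) is then uniform on [-4, 0]."""
    return 10.0 ** (-4.0 * np.asarray(xi))

rng = np.random.default_rng(1)
xi = rng.uniform(size=(20000, 2))   # (xi_1, xi_2) samples on the unit square
g = gamma_from_xi(xi)               # log-uniform samples of (gamma_TPS, gamma_ref)
```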
In this work we tested several zero-mean, stationary processes with covariance functions from the Matérn class [39]; we found that the log-likelihood function is well approximated using the standard isotropic squared exponential kernel, given that both catalytic parameters play the same role in the likelihood. The covariance function then reads

C_GP(ξ, ξ') = σ^2 exp[ -(ξ - ξ')^T (ξ - ξ') / (2 L_GP^2) ],   (8)

where L_GP and σ^2 are the a priori correlation length and variance of the GP. All results presented hereafter use the covariance function in (8). Denoting Y = (Y^(1) ... Y^(p))^T the vector of observations, the posterior mean of the GP model, or the best prediction of Y(ξ), is

E[Y_GP(ξ)] = k(ξ)^T K^-1 Y,   (9)

where the vector k(ξ) and matrix K are given by

k_i(ξ) = C_GP(ξ, ξ^(i)),   K_ij = C_GP(ξ^(i), ξ^(j)) + σ_ε^2 δ_ij,

where δ_ij is the Kronecker symbol. The variance of the prediction is

V[Y_GP(ξ)] = C_GP(ξ, ξ) - k(ξ)^T K^-1 k(ξ).

In the expression of the matrix K above, σ_ε^2 corresponds to the error on the estimation of Y^(i). As σ_ε^2 → 0, E[Y_GP(ξ^(i))] → Y^(i), while V[Y_GP(ξ^(i))] vanishes at all observation points. In practice, σ_ε^2 is set to a small but non-zero value to ensure that the matrix K is invertible. Concerning the parameters of the covariance function, σ^2 and L_GP, they are inferred from the observations through the maximization of their resulting joint log marginal likelihood [37].

3.3. Posterior sampling

Sampling the posterior distribution of ξ, and thus generating samples of the catalytic coefficients, is made possible by the construction of a surrogate model which is very cheap to evaluate. To sample from the posterior distribution we use the Metropolis-Hastings MCMC algorithm.
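Before turning to the sampler, the GP prediction formulas can be illustrated with a compact numpy sketch; the hyperparameters are fixed by hand here rather than obtained by marginal-likelihood maximization, and the training values are arbitrary stand-ins:

```python
import numpy as np

def sq_exp_kernel(A, B, sigma2=1.0, L=0.3):
    """Isotropic squared-exponential covariance, Eq. (8), between the rows
    of two point sets A (n, d) and B (m, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return sigma2 * np.exp(-0.5 * d2 / L ** 2)

def gp_predict(X, Y, Xnew, sigma2=1.0, L=0.3, sig_eps2=1e-10):
    """GP posterior mean, Eq. (9), and variance at the points Xnew, with a
    small nugget sig_eps2 on the diagonal of K to keep it invertible."""
    K = sq_exp_kernel(X, X, sigma2, L) + sig_eps2 * np.eye(len(X))
    k = sq_exp_kernel(Xnew, X, sigma2, L)
    mean = k @ np.linalg.solve(K, Y)
    var = sigma2 - np.einsum("ij,ij->i", k, np.linalg.solve(K, k.T).T)
    return mean, var

# Tiny illustration: 5 training points on the unit square of xi-variables,
# with arbitrary stand-in log-likelihood values.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
Y = np.array([0.1, -0.3, 0.7, 0.2, -0.5])
mean, var = gp_predict(X, Y, X)  # reproduces Y with near-zero variance
```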
The algorithm consists of a sequence of steps, where a random move from the current state ξ_i to a proposed state ξ* is made. Denoting r := P(γ(ξ*)|M) / P(γ(ξ_i)|M) the ratio of the posteriors, the move is accepted (ξ_{i+1} = ξ*) with probability min(1, r); otherwise ξ_{i+1} = ξ_i. The algorithm used in the present work relies on Gaussian increments ξ* - ξ_i. The covariance matrix of the proposal distribution is defined through

C_Prop = s C_Post,   (10)

where C_Post is the posterior covariance of ξ and s = 2.38^2/d is a scaling factor dependent on the dimension d of the sampling space (here d = 2). As C_Post is not known beforehand, it is progressively estimated from the samples drawn during an initial "burn-in" stage of the chain [40]. The scaling factor is selected to ensure an acceptance rate between 20 and 50%, following Roberts et al. [41], and a sufficiently fast decorrelation of the chain. Figure 2 illustrates the different elements constituting our Bayesian framework.

Figure 2: Bayesian inference framework in a nutshell.
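The random-walk Metropolis step can be sketched in a self-contained way; the target below is a Gaussian stand-in for the surrogate posterior, and the proposal covariance is fixed rather than adapted during burn-in as in the actual framework:

```python
import numpy as np

def log_post_stub(xi):
    """Stand-in for the surrogate log-posterior on the unit square of
    canonical variables: a Gaussian bump centred at (0.4, 0.6) (assumed)."""
    xi = np.asarray(xi, dtype=float)
    if np.any(xi < 0.0) or np.any(xi > 1.0):
        return -np.inf          # uniform prior support of Eq. (7)
    return -0.5 * np.sum((xi - np.array([0.4, 0.6])) ** 2) / 0.05 ** 2

def metropolis_hastings(log_post, x0, C_prop, n_steps, seed=0):
    """Random-walk Metropolis with Gaussian increments of covariance C_prop."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chol = np.linalg.cholesky(C_prop)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        x_prop = x + chol @ rng.standard_normal(x.size)
        lp_prop = log_post(x_prop)
        # accept with probability min(1, r), r being the posterior ratio
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = x_prop, lp_prop
        chain[i] = x
    return chain

d = 2
C_prop = (2.38 ** 2 / d) * np.diag([0.05 ** 2] * d)   # scaling of Eq. (10)
chain = metropolis_hastings(log_post_stub, [0.5, 0.5], C_prop, 20000)
```

Discarding the first part of the chain as burn-in, the remaining samples reproduce the location and spread of the stand-in posterior.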
4. Case study
The methodology presented in the previous sections is used for a real case of plasma wind tunnel testing. This case is used to assess the validity and possible shortcomings of the approach.
The experimental run used for this case study is given in Table 1. Uncertainties arise from two different processes. The most natural kind of uncertainty that we can characterize is intrinsic to the measurement device and its accuracy. On top of this, there is the measuring uncertainty: ideally, one could repeat a measurement many times and perform statistics on the results, extracting the relevant parameters of the resulting distribution. As we do not have unlimited time or resources, a Student's t-distribution is assumed and the sample Gaussian is corrected by the t-factor. Overall, we account for both sources of uncertainty by combining them in quadrature. It is important to remark that the quantities considered for both q_w^ref and q_w^TPS are derived from the measurement of the material emissivities, as already explained in Sec. 2.1.
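As an illustration of this error model (our reading of the text: the t-corrected scatter of repeated readings is combined in quadrature with the device accuracy; all numbers below are hypothetical):

```python
import math
from statistics import stdev

# Repeated heat-flux readings (hypothetical values, kW/m^2)
samples = [193.0, 196.5, 194.8, 197.2, 195.5]
device_sigma = 5.0        # instrument accuracy (hypothetical)

n = len(samples)
t_factor = 2.776          # Student's t, 95% interval, n-1 = 4 dof
# scatter of the mean, corrected by the t-factor
sample_sigma = t_factor * stdev(samples) / math.sqrt(n)
# both contributions combined in quadrature
total_sigma = math.sqrt(device_sigma ** 2 + sample_sigma ** 2)
```

The combined uncertainty is only slightly larger than the device accuracy here because the repeatability scatter is small; with noisier readings the second term would dominate.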
Table 1: Experimental data and uncertainties considered in our case study. Data taken from [6]
Experiment S          q_w^ref [kW/m²]  T_w^ref [K]  P_s [Pa]  P_d [Pa]  q_w^TPS [kW/m²]  T_w^TPS [K]
Mean (µ)              195              350          1300      75        91.7             1200
Std deviation (σ)     6.5              11.7         1.3       1.5       3.05             40

We run the optimization algorithm on a uniform grid of 176 points in the space of the log(γ_ref) and log(γ_TPS) variables. This uniform grid is chosen slightly asymmetric, 11 × 16 points respectively, which gives better refinement in the log(γ_TPS) direction. From physical considerations, we expect the variability of this parameter to be greater on the posterior than that of log(γ_ref); therefore, a more refined grid in log(γ_TPS) can produce better approximations at lower cost than a square grid of the same size. Fig. 3 shows the log-likelihood function evaluated at these grid points. These evaluations are then used to construct the surrogate approximation by transforming the physical variables log(γ_ref) and log(γ_TPS) into their respective canonical counterparts ξ, as explained in Sec. 3.2.

Figure 3: Log-likelihood function evaluated on the chosen γ_ref, γ_TPS grid
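Each grid point requires solving an inner optimization over the nuisance parameters. The sketch below profiles out a single shared nuisance (the edge enthalpy) for one pair of catalytic parameters, using a made-up saturation law in place of the actual boundary layer solver; all functions and numbers are illustrative only:

```python
import numpy as np

# Stand-in for the BL solver: a saturating dependence of the wall heat
# flux on enthalpy H and catalytic efficiency gamma (NOT the real physics).
def q_w(H, gamma):
    return H * gamma / (gamma + 0.01)

q_meas  = {'ref': 195.0, 'tps': 91.7}   # measured heat fluxes, kW/m^2
sigma_q = {'ref': 6.5,   'tps': 3.05}   # 1-sigma uncertainties

H_grid = np.linspace(50.0, 500.0, 2001)  # candidate edge enthalpies (arbitrary units)

def log_opt_likelihood(g_ref, g_tps):
    """Maximize the Gaussian likelihood over the shared nuisance H."""
    r_ref = (q_w(H_grid, g_ref) - q_meas['ref']) / sigma_q['ref']
    r_tps = (q_w(H_grid, g_tps) - q_meas['tps']) / sigma_q['tps']
    neg_ll = 0.5 * (r_ref ** 2 + r_tps ** 2)
    return -neg_ll.min()   # log-likelihood at the optimal H

val = log_opt_likelihood(0.016, 0.01)
```

A pair of catalytic parameters whose two curves admit a common enthalpy yields a log-likelihood near zero, while an incompatible pair is penalized, which is exactly the mechanism that shapes the surface in Fig. 3.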
Overall, the shape of the log-likelihood function follows from the compatibility of the given pair of γ_ref and γ_TPS with the observed quantities measured in the plasma wind tunnel. In general, log-likelihood values tend to be larger for large values of γ_ref; the same happens for low values of γ_TPS. This already hints at the fact that higher catalytic activity is expected for the reference material than for the protection material in question, for the given boundary layer edge conditions. On top of that, there is a range of values for γ_ref and γ_TPS that represents the best agreement with the experimental data. This set of values has to be interpreted jointly: for high γ_ref values, γ_TPS can only take values in a narrow range placed at the middle of its spectrum. For low values of γ_ref (mid-spectrum), γ_TPS can go down to the minimum value of its prior range. For large values of γ_ref and low values of γ_TPS, the heat flux, which is the quantity in the likelihood sensitive to our choice of catalytic parameters, is not sensitive enough to changes in those specific ranges. We explore later in Sec. 5 how this limitation, set by the physical model, can be overcome by modifying the experimental methodology.

As already mentioned in Sec. 3.2, we need to properly capture all these features of the log-likelihood with a surrogate model. We propose to use a GP surrogate; one of its advantages is that it provides an estimation of the predictive variance. Fig. 4 shows the normalized L₂ error norm for the GP surrogate on a validation set with 10% of the available points, plotted against the number of training points. We carry out this procedure 1,000 times with different validation sets each time. The results show the mean and the 95% confidence interval of the computed error. The approximation falls below a 1% error on the validation set as the number of training points gets closer to 160.
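The repeated random-split validation described above can be sketched generically. The stand-in model here is a simple least-squares fit rather than the GP, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

def validation_error(X, y, fit, predict, n_splits=1000, holdout=0.1):
    """Normalized L2 error on random 10% validation sets, repeated n_splits times."""
    errs = []
    n_val = max(1, int(holdout * len(X)))
    for _ in range(n_splits):
        idx = rng.permutation(len(X))
        val, train = idx[:n_val], idx[n_val:]
        model = fit(X[train], y[train])
        pred = predict(model, X[val])
        errs.append(np.linalg.norm(pred - y[val]) / np.linalg.norm(y[val]))
    errs = np.asarray(errs)
    lo, hi = np.percentile(errs, [2.5, 97.5])   # 95% interval of the error
    return errs.mean(), (lo, hi)

# Stand-in "surrogate": ordinary least squares on a linear feature map
fit = lambda X, y: np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)[0]
predict = lambda w, X: np.c_[np.ones(len(X)), X] @ w

# Synthetic data with 176 points, mimicking the grid size of the case study
X = rng.uniform(-1, 1, size=(176, 2))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.01 * rng.standard_normal(176)
m, (lo, hi) = validation_error(X, y, fit, predict)
```

Repeating the split many times separates the intrinsic approximation error from the luck of any single 90/10 partition, which is what the mean and confidence band in Fig. 4 report.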
In practice, we use all 176 model evaluations to construct our GP, knowing that the approximation is already good enough. Furthermore, the approximation obtained has a maximum predictive standard deviation of 1%. Fig. 5 shows the good agreement between the mean value predicted by the GP and the data points computed for the log-likelihood in Fig. 3.

Figure 4: Normalized L₂ error norm of the GP approximation with varying number of training points

Figure 5: GP surrogate comparison with the exact log-likelihood values in logarithmic variables

We perform an MCMC sampling using the GP surrogate. The chains obtained are depicted in Fig. 6. We can see that the chains present no long-term correlation and mix well.

Figure 6: Chain obtained with 1,000,000 steps and 15,000 steps (right)

The posterior samples obtained are shown in Fig. 7. In general, the tendency of the samples is to remain in a narrow area of the γ_TPS space when γ_ref takes large values. Once γ_ref moves towards lower values, this tendency is reversed and γ_TPS can take values in a wider range while γ_ref is confined to a narrow region. This joint behavior follows from the inference framework. The key variable here is the boundary layer edge enthalpy H_δ, which is shared between both materials tested (reference and TPS). When the model takes large values for γ_ref, a large amount of the observed heat flux for the reference material is explained through the magnitude of this parameter, lowering the influence of the enthalpy H_δ. A low enthalpy needs larger γ_TPS values to account for the observed heat flux on the protection material surface. The same happens for low values of γ_ref and γ_TPS. In this case, the values that lie interior to the shape defined by the posterior samples are not in agreement with observations for the reason just explained: large γ_ref needs large γ_TPS. The fact that "large" and "low" are also defined within a range (i.e., an upper bound for γ_TPS and a lower bound for γ_ref) is not imposed by the inference problem setting but by the physics-based model, which makes some assumptions regarding the chemical nature of the flow. Some values of γ_ref and γ_TPS could not explain, under the same H_δ, the observations. Overall, this behavior naturally reflects on the marginal posterior distributions depicted in the next subsection.

Figure 7: Joint posterior samples of the MCMC algorithm

The posterior marginals are reported in Fig. 8. We can observe that the distributions of both γ_ref and γ_TPS drop to small values at both ends of the spectrum, reducing the support from the prior distributions proposed. This satisfying behavior can be explained by the proposed likelihood form, which uses all the available measurements to assess the fitness of the model predictions. As a result, the formulation predicts that the values of H_δ^opt that could explain the whole set of measured fluxes, temperatures and pressures are actually far away from the maximum likelihood points when γ_ref or γ_TPS reaches large values. It is also important to notice that both distributions have well-defined peaks, at γ_ref ≈ 0.016 and γ_TPS ≈ 0.01. The ranges of values observed in our calibration for both gammas fit with the model previously assumed by the experimentalists, where the reference parameter takes higher values than the catalytic parameter of the protection material. It is also important to emphasize that in this framework no assumptions are made concerning γ_ref, which is estimated along with the protection material parameter with no differences in their prior knowledge. A deeper experimental study could provide more insight into the behavior of the reference material, and a different prior could then be defined for the same analysis, accounting for the differences in knowledge between the two probes.

Figure 8: Posterior marginals obtained for γ_ref and γ_TPS
The statistics associated with these distributions are gathered in Table 2. The differences between these results and the outcomes of [10] are clear when looking at the values in the table and the shapes of the distribution functions obtained. A reduction of almost 20% of the standard deviation and 40% of the Coefficient of Variation (CV) for the catalytic parameter of the reference material is observed. There is no reporting of the posterior statistics for γ_TPS in [10]: without any particular treatment of the nuisance parameters in the formulation of the inference problem, the capability of learning γ_TPS from experimental data is lost.
Table 2: Comparison of the posterior statistics for experiment S with the work of [10]

Experiment S              Mean (µ)  Std dev. (σ)  Max. A Posteriori (MAP)  CV [σ/µ]
  γ_ref
  γ_TPS
Experiment S from [10]
  γ_ref
  γ_TPS                   -         -             -                        -

The distributions of the optimal parameters are also computed. For each of these quantities, a GP surrogate is computed on the same γ_ref, γ_TPS grid as the log-likelihood. The resulting posterior samples of γ_ref and γ_TPS are used as input for these surrogates, yielding the distributions of the optimal parameters shown in Fig. 9. A bimodal distribution is obtained for the enthalpy H_δ^opt. The shape of this distribution is a direct result of the optimization algorithm that computes H_δ^opt, where many of its best points ("best" meaning the ones which maximize the likelihood) fall into two different groups of values, decreasing the probability density between them.

Figure 9: Distributions of the optimal nuisance parameters after propagating the posterior of γ_ref and γ_TPS
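The summary statistics reported in Table 2 (mean, standard deviation, MAP, CV) can be extracted directly from the MCMC samples. Below is a sketch using a hypothetical lognormal chain in place of the real posterior samples:

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-in for the MCMC samples of a catalytic parameter
# (hypothetical lognormal chain, NOT the actual posterior)
samples = np.exp(rng.normal(np.log(0.016), 0.2, size=50000))

mu = samples.mean()
sigma = samples.std(ddof=1)
cv = sigma / mu
# Crude MAP estimate: the mode of a histogram of the samples
counts, edges = np.histogram(samples, bins=200)
i = counts.argmax()
map_est = 0.5 * (edges[i] + edges[i + 1])
```

For a right-skewed posterior the histogram mode sits below the mean, which is why the MAP and mean columns of such a table need not coincide; smoother MAP estimates would use a kernel density instead of a histogram.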
To understand this better, we first need to look back at Sec. 2, where we explain the foundations of the physical phenomena behind this problem. The physical system, represented by the BL code, computes the function q_w = q_w(H_δ, P_s, P_d, T_w, γ). For each P_s, P_d, T_w and q_w, the system relates the enthalpy H_δ and the catalytic parameter γ through an S-shaped curve (see Fig. 10). During the optimization, the physics allows the S-shaped curves to move when different parameters change, in this case, the resulting heat flux q_w and the wall temperature T_w (Fig. 10). The pressure quantities play a minor role due to their small uncertainties and the fact that both curves move together when these quantities change, being common to both materials.

Figure 10: Change in the S-shaped curves' positions due to changes in heat flux (q_w) or wall temperature (T_w)

It is also important to take into account that we have information about all the nuisance parameters except the enthalpy H_δ, which is not measured; therefore, all the other nuisance parameters stay close to their measured values as a result of the optimization. The lack of information about H_δ results in more uncertainty in the resulting H_δ^opt. In turn, we can think of the optimization algorithm as looking for the optimal H_δ while keeping the other nuisance parameters very close to their measured values (within their prescribed standard deviations).

Fig. 11 shows the inner workings of the optimization procedure in terms of the physical relations. The thick solid lines represent the S-shaped curves for the reference and TPS materials when taking the mean measured values of all the nuisance parameters and the heat flux. The dashed lines represent a change of heat flux from the values of the thick solid lines.
The thin solid lines represent the final optimal configuration for the given γ_ref and γ_TPS (vertical dashed lines), for which a change of wall temperature T_w is added to the change of heat flux, transforming the original curves (thick solid lines) into the final optimal curves (thin solid ones). At the pair of γ_ref and γ_TPS where the two thick S-shaped curves have the same H_δ, the algorithm finds the model to agree perfectly with the experiments. For the pair of γ_ref and γ_TPS given in Fig. 11 as an example, the reference and TPS materials do not share the same H_δ for their corresponding mean measured values (thick S-shaped curves). The optimization algorithm finds an optimum H_δ^opt which represents a trade-off between the deviations in wall temperatures T_w and heat fluxes q_w with respect to their measured values over their uncertainty range σ. As the deviation needed to find a common H_δ point for both curves increases, the value of log(L^opt(M|γ(ξ))) decreases. In turn, the algorithm performs this for every pair of γ_ref and γ_TPS in our grid, defining the most likely values of the catalytic parameters in light of the experimental data.

The points sampled by the MCMC algorithm are shown in Fig. 12. We can appreciate how the points which maximize the likelihood are the ones falling in the range where both S-shaped curves coincide in enthalpy level. These points represent the best agreement of the system response with the experimental data. This logic explains the bimodal distribution for the enthalpy and the rest of the optimal parameters.

It is an important exercise to put these results in perspective. We are able to relax some assumptions in our model (the catalytic behavior of the reference material) and propose a new functional relationship through the inference framework (the optimal likelihood function), but other assumptions remain highly uncertain within the model.
One contributor to such uncertainty is the chemistry of the gas. The chemical state of the gas poses epistemic uncertainties, given that different models exist in the literature and are widely adopted. Specifically, the speeds of the different reactions considered can play a role in the inference, given that a flow in chemical equilibrium or frozen can produce very different heat fluxes under the same edge conditions. Catalytic activity can also become non-influential in the heat flux experienced by the material if the gas chemistry has already taken all the energy contained in the flow, which is likely to occur under higher pressure conditions than the case considered.

Figure 11: Inner workings of the optimization algorithm in terms of the S-shaped curves

Figure 12: Posterior samples along the S-shaped curves

Nevertheless, the chemistry should be calibrated in dedicated experiments to obtain reliable predictions in the future, as it can impact whether or not the chosen model can explain the experimental data, and this, in turn, influences the calibrated γ_TPS obtained. For gas chemistry calibration, dedicated spectroscopic analyses should be included to have conclusive results and to be able to learn something about the chemistry of the gas. The experimental data considered in this work would not be enough to make any statement about surface catalysis and gas chemistry, as it does not provide enough information in itself. The development of this Bayesian framework offers a starting point for future studies in which the experimental data can be thoroughly exploited.

Coupled with the gas chemistry is another uncertain assumption: the thermodynamic state of the gas. Even though this assumption has been validated using spectroscopic measurements [42] for the test conditions considered, recent numerical investigations [43] suggest that LTE may not hold under different conditions (e.g. lower mass flows).
A more extensive use of this framework with dedicated experimental campaigns can help shed light on these issues in the future.
5. Assessment of experimental methodologies
The developed inference methodology can help underpin the important characteristics of testing for catalytic phenomena in TPS, namely, the conditions and/or configurations that can give the most information by reducing the uncertainty to a minimum. We assess the role of the auxiliary material used for testing, referred to until now as the "reference" material. As extending the testing methodology to include three different materials is already a possibility [44], we study the information gain of this methodology with synthetic data. In this section, we discuss how choosing different auxiliary materials and performing experiments with more probes impact the outcome of the inference.

Apart from the different testing conditions that can be set for a given experiment (power of the facility, static pressure in the testing chamber, mass flow and probe geometry), we also have the freedom of choosing an auxiliary material with which to gather information about the boundary layer edge conditions. As explained in previous works [10] and in this work (Sec. 3), one fundamental uncertainty in the way of rebuilding the TPS catalytic behavior is the fact that the boundary layer edge conditions cannot be estimated accurately if the auxiliary material behavior is not a priori well-known, which is the present case. We explore this argument by assuming a highly catalytic material (more so than the conventional copper reference) which resembles the catalytic response of a hypothetical probe made of silver [44]. We devise synthetic data where the heat flux that the auxiliary material would measure is higher than for the previously considered copper, while keeping the same wall temperature. This way, the only variable that changes from the copper case to this synthetic case is the catalytic activity at the hypothetical auxiliary material surface. Table 3 shows the data used to simulate this particular case. Results from the inference are depicted in Fig. 13 with the marginal posterior distributions. The distributions show the same features as the case study (Fig. 8), with reduced support and well-defined peaks. We can appreciate that the supports of both the synthetic silver and the TPS are further reduced from those in Fig. 8, giving a slightly better characterization of these properties.

Table 3: Synthetic data and uncertainties
Experiment S_Ag        q_w^Ag [kW/m²]  T_w^Ag [K]  P_s [Pa]  P_d [Pa]  T_w^TPS [K]  q_w^TPS [kW/m²]
Mean (µ)               232             350         1300      75        1200         91.7
Std deviation (σ)      7.7             11.7        1.3       1.5       40           3.05

To assess the information gain with this particular testing, we need to turn to the enthalpy of the plasma flow and see if we manage to capture this information better with synthetic silver. Figure 14 shows the distribution of the optimal enthalpy. In this case, it is easy to spot the benefits of changing the auxiliary material to a more catalytic one. The support is greatly reduced when comparing Fig. 9 and Fig. 14. This means that the recovery of the boundary layer edge information is better in the latter case: more information is contained in that experiment than in our case study. Still, the characterization of the boundary layer edge conditions could be further improved, as it should converge to a unimodal distribution.

Figure 13: Marginal posteriors obtained for the TPS material and synthetic silver

Figure 14: Optimal enthalpy H_δ^opt distribution obtained by propagating the catalytic parameters posterior of the TPS material and synthetic silver

The characterization of catalytic behavior can be further studied with a testing methodology that uses two auxiliary materials instead of one. The information brought by this additional probe is expected to improve the characterization of the boundary layer edge conditions. For the following case study, the three probes seen in this work (ref, TPS and synthetic Ag) are used under the same boundary layer edge conditions to infer their catalytic behaviors and the corresponding enthalpy. This synthetic test case lets us extrapolate the benefits of this methodology to more general cases, prescribing the best practice to reduce the uncertainty on the characterization of catalytic parameters of TPS materials. Fig. 15 shows the marginal posterior distributions obtained.
We can observe that both the TPS and synthetic silver distributions are left almost unchanged from the case where they were tested together (Fig. 13), although the tail of the synthetic silver distribution shows some growth compared to the previous case. The most notable difference is the posterior distribution of the reference material. The presence of a more catalytic material increases the information obtained for higher values of the catalytic parameters, reducing, as a consequence, the support for high catalytic values of the reference material. In return, this information gain produces a well-defined peak with a further reduced support. A better characterization of the copper calorimeter is therefore achieved this way.

Figure 15: Marginal posteriors obtained for the TPS material, reference material and synthetic silver
Table 4 gives the summary statistics of the reference copper and TPS parameters. When compared to Table 2, we can see that the mean values for both the reference probe and the TPS are moved towards lower catalytic values. The standard deviation is decreased significantly for the reference probe, while the TPS parameter suffered overall less change.

Table 4: Posterior statistics for the experiment S_Ag including the reference copper calorimeter

Experiment S_Ag           Mean (µ)  Std dev. (σ)  MAP  CV [σ/µ]
  γ_ref
  γ_TPS

The optimal enthalpy distribution obtained with the three different materials and that of the previous case with the TPS material and synthetic silver are the same, and the same support for the optimal enthalpy is retrieved (Fig. 16). The materials lying at the extremes of the catalytic spectrum are the ones carrying the information about the boundary layer edge conditions. A closer look at Fig. 17, when compared to Fig. 12, reveals that the material with a catalytic behavior in between the other two is the best characterized using this methodology. In this regard, the best outcome would be to find a less catalytic material than the TPS, to achieve a better characterization of the material of interest, while using copper or silver as the highly catalytic auxiliary material.

Figure 16: Optimal enthalpy H_δ^opt distribution obtained by propagating the catalytic parameters posterior of the TPS material, reference material and synthetic silver

Figure 17: Posterior samples on the S-shaped curves for the three tested materials

6. Concluding remarks and outlook

In this work, we propose a novel Bayesian inference formulation for the calibration of the catalytic parameters of reusable thermal protection materials. The calibration gives estimates of the material catalytic parameter through its posterior probability distribution, which can be disseminated for uncertainty propagation analyses. In plasma wind tunnel experiments, the characterization of the reference material behavior plays an important role. In this dedicated framework we discard the assumption of a well-characterized reference material, as it has proven to be in conflict with the respective literature in many cases. The Bayesian approach allows for the simultaneous calibration of both materials in the inference process, which proves to be more accurate than the conventional sequential approach, as shown by Sanson et al. [10].

Our main contribution is the methodology itself. We derive a likelihood function by considering an optimization problem in the nuisance parameter space, reducing the dimensionality of the likelihood to just the quantities of interest, γ_ref and γ_TPS. To cope with this computationally demanding likelihood, we propose the use of a surrogate model. A GP works quite well for this problem, yielding good results with low standard deviations on the chain samples, which represent a good estimation of the error in the surrogate approximation for the posterior samples. In addition, the approach is robust, in the sense that the MCMC sampling method works smoothly for any given conditions.
Overall, the optimization formulation presented considerably improves the inference results by giving more consistent and accurate posterior distributions of the catalytic parameters when compared with the results of [10], the main differences being the reduced support and well-defined peaks of the respective posteriors. A more detailed study of the posterior distributions for the case study shows a decrease of 20% in the standard deviation with respect to the previous work. Consequently, it is possible to say that the catalytic parameters can be effectively learned under the considered experimental conditions and model assumptions.

The study of different testing methodologies shows that different auxiliary materials have an impact on the information recovered for the free stream enthalpy. This information gain reduces the standard deviation of the catalytic parameter posteriors. Along these lines, we also study a 3-probe testing methodology and show an overall improvement of the characterization of the reference material of up to 80% with respect to the results of the case study. The 3-probe testing methodology reveals that the ideal testing scenario involves three materials, where a good characterization is achieved for the one in the middle of the catalytic spectrum. In general, the results achieved by applying this framework help the discussion about testing methodologies and experimental conditions.
The most informative testing methodology, combined with the computation of the optimal testing conditions, can lead to the proper design of experiments for such thermal protection systems.

In the future, dedicated experimental campaigns, including spectroscopic measurements, can benefit from this work by exploiting the experimental data more thoroughly and adopting the most informative testing methodology. This can help shed light on highly uncertain assumptions, such as the thermo-chemical state of the gas, upon which to improve our model predictions.

Declaration of competing interests
The authors declare that they have no known competing financial interests or personal relationships that couldhave appeared to influence the work reported in this paper.
Acknowledgements
This work is fully funded by the European Commission H2020 programme, through the UTOPIAE Marie CurieInnovative Training Network, H2020-MSCA-ITN-2016, Grant Agreement number 722734.
References

[1] B. Laub, E. Venkatapathy, Thermal protection system technology and facility needs for demanding future planetary missions, in: Proceedings of the international workshop on planetary probe atmospheric entry and descent trajectory analysis and science, 2004, pp. 239-247.
[2] J. D. Anderson, Hypersonic and high-temperature gas dynamics, second ed., AIAA Education Series, 2006.
[3] I. Chorkendorff, J. W. Niemantsverdriet, Concepts of Modern Catalysis and Kinetics, 3rd Edition, Wiley, 2017.
[4] A. Turchi, P. M. Congedo, T. E. Magin, Thermochemical ablation modeling forward uncertainty analysis Part I: Numerical methods and effect of model parameters, International Journal of Thermal Sciences 118 (2017) 497-509.
[5] A. Mariotti, L. Siconolfi, M. Salvetti, Stochastic sensitivity analysis of large-eddy simulation predictions of the flow around a 5:1 rectangular cylinder, European Journal of Mechanics B/Fluids 62 (2017) 149-165.
[6] F. Panerai, O. Chazot, Characterization of gas/surface interactions for ceramic matrix composites in high enthalpy, low pressure air flow, Materials Chemistry and Physics 134 (2-3) (2012) 597-607.
[7] L. Weiping, L. Xiaoyan, H. Faming, F. Mingfu, Prediction of soil water retention curve using Bayesian updating from limited measurement data, Applied Mathematical Modelling 76 (2019) 380-395.
[8] J. Emery, M. Grigoriu, R. F. Jr, Bayesian methods for characterizing unknown parameters of material models, Applied Mathematical Modelling 40 (2016) 6395-6411.
[9] A. Wirgin, Influence of nuisance parameter uncertainty on the retrieval of the thermal conductivity of the macroscopically-homogeneous material within a cylinder from exterior temperature measurements, Applied Mathematical Modelling 39 (2015) 5278-5298.
[10] F. Sanson, F. Panerai, T. E. Magin, P. M. Congedo, Robust reconstruction of the catalytic properties of thermal protection materials from sparse high-enthalpy facility experimental data, Experimental Thermal and Fluid Science 96 (2018) 482-492.
[11] B. Bottin, O. Chazot, M. Carbonaro, V. V. der Haegen, S. Paris, The VKI Plasmatron characteristics and performance, in: Measurement Techniques for High Temperature and Plasma Flows, RTO-EN-8, 1999, pp. 6-1 to 6-24.
[12] B. Helber, Material response characterization of low-density ablators in atmospheric entry plasmas, Ph.D. thesis, VUB/VKI (2016).
[13] J. H. Ferziger, H. G. Kaper, Mathematical Theory of Transport Processes in Gases, North-Holland Publishing Company, 1972.