Propagation and reconstruction of re-entry uncertainties using continuity equation and simplicial interpolation
Mirko Trisolini ∗ Polytechnic of Milan, Milan, 20133, Italy
Camilla Colombo † Polytechnic of Milan, Milan, 20133, Italy
This work proposes a continuum-based approach for the propagation of uncertainties in the initial conditions and parameters for the analysis and prediction of spacecraft re-entries. Using the continuity equation together with the re-entry dynamics, the joint probability distribution of the uncertainties is propagated in time for specific sampled points. At each time instant, the joint probability distribution function is then reconstructed from the scattered data using a gradient-enhanced linear interpolation based on a simplicial representation of the state space. Uncertainties in the initial conditions at re-entry and in the ballistic coefficient for three representative test cases are considered: a three-state and a six-state steep Earth re-entry and a six-state unguided lifting entry at Mars. The paper shows the comparison of the proposed method with Monte Carlo based techniques in terms of quality of the obtained marginal distributions and runtime as a function of the number of samples used.
I. Introduction

The study and prediction of re-entry trajectories is a challenging and complex task. Several factors may influence the accuracy of these predictions, such as the knowledge of the initial entry conditions, the ballistic coefficient of the satellite, and the density of the atmosphere. The uncertainties associated with these parameters can affect the evolution of the re-entry trajectory, and influence the prediction of several quantities of interest such as the impact location, and the mechanical and thermal loads on the spacecraft. It is thus important to include statistical verification inside the mission design process, considering the effects of uncertainties in assessing possible off-nominal scenarios. Combining this analysis with operational constraints and safety margins can ultimately improve the safety and robustness of mission designs. This type of statistical analysis can be applied to several aspects of the re-entry problem. The design of crewed vehicles and their navigation algorithms can strongly benefit from an uncertainty analysis given the strict requirements on the sustainable loads and on the landing location [1–5]. The design of exploration probes and sample return missions can also gain from uncertainty quantification to improve the robustness of the mission [6–8], with the aim to increase the landing mass and precision and to provide more robust designs for the thermal protection and parachute deployment systems.

Presented as paper AAS 19-409 at the 29th AAS/AIAA Space Flight Mechanics Meeting, Ka’anapali, HI, 13-17 January 2019.
∗ Postdoc Research Fellow, Department of Aerospace Science and Technology, via La Masa 34, 20156, Milan, Italy.
† Associate Professor, Department of Aerospace Science and Technology, via La Masa 34, 20156, Milan, Italy.
Finally, even the destructive re-entry of satellites and the prediction of their re-entry footprint, in particular for uncontrolled re-entries, can benefit from uncertainty assessment. Until 2013, there has been an average of 93 uncontrolled re-entries per year [9] and, with the increasing space activities of the past few years, the frequency of re-entering objects is destined to grow rapidly. Although these objects usually pose a marginal risk for people on the ground, it is still necessary to verify their compliance with the casualty risk regulations [10]. This ultimately means studying the break-up of the spacecraft and the demise of its parts [11–13]. Considering the effects of uncertainties, it is possible to assess the demise probability of specific components [14] and the statistical distribution of the casualty area of surviving components.

The traditional procedure to assess uncertainties is based on a Monte Carlo (MC) dispersion analysis, where, through a large number of simulations over randomly sampled initial conditions and parameters, the joint Probability Density Function (PDF) is estimated through a frequentist approach. This type of simulation provides a reliable way to estimate the evolution of uncertainties given its straightforward implementation, and it can effectively capture nonlinearities in the system. However, in general, to obtain convergent statistics, the number of MC simulations must increase. Consequently, for multi-dimensional state spaces and nonlinear dynamics, such as the ones associated with re-entry scenarios, they can become expensive [15, 16]. Nonetheless, many space missions' uncertainty analyses have been carried out using a Monte Carlo approach. Examples are the mission design of the Mars Pathfinder [6] and Mars Science Laboratory [7] missions, and of the future Mars2020 exploration mission [8].
Simulation software such as the NASA Dynamics Simulator for Entry, Descent, and Surface Landing (DSENDS) [17] uses a Monte Carlo-based dispersion analysis. In addition, MC simulations have been extensively used for the design verification of entry guidance algorithms [2, 3, 18–20]. In these works, the performance, robustness, and reliability of the algorithms are tested considering a dispersion in the initial conditions and parameters of the re-entry. The results of the MC simulations in terms of mean and standard deviation are compared to the target for the landing accuracy, and the number of occurrences of violations of specified safety thresholds is considered. Destructive re-entry codes such as ESA's DRAMA [13] also perform re-entry analyses using an MC approach for the design and verification of satellite compliance with the casualty risk regulations.

Instead of MC simulation, several other techniques have been applied to the propagation of uncertainties [16]. One example is the Unscented Transformation (UT), where only a few, deterministically chosen, samples (the sigma points) are propagated. From these samples it is then possible to obtain a second-order approximation of the first two moments of the probability distribution. Other methodologies are instead based on the creation of a surrogate model that can estimate the uncertainty more efficiently than MC [21–23]. An example of such methodologies is Polynomial Chaos Expansion (PCE) [24, 25], where the inputs and outputs of a system are represented via series approximation. They provide an efficient way to propagate uncertainties, building an explicit functional representation of the output uncertainty with respect to the inputs. These techniques are computationally efficient, even in high dimensions, even though they rely on the estimation of the uncertainties through a frequentist approach as MC does.
Another common uncertainty propagation technique is based on Differential Algebra (DA) [26], which allows the computation of arbitrary-order expansions with respect to the initial conditions. DA can be used in conjunction with MC simulations, substituting them with Taylor expansions. Gaussian Mixture Models (GMM) [16] can also be used to propagate uncertainties. In this method, the nonlinearity and non-Gaussian behavior of the probability density function is captured by using multiple Gaussian distributions. However, the effects of nonlinearities have to be considered by possibly splitting and merging the different components of the mixture.

The approach we propose uses the continuity equation to directly propagate the probability density along with the dynamics of the system, thus obtaining a systematic evolution of the probability density. This methodology has been applied in the study of the dynamical evolution and formation of stellar systems, planetary ring structures, and interplanetary dust [27–29]. Additionally, it has been used for the propagation of space debris fragments following a break-up of satellites in orbit [30, 31] and for the propagation of uncertainties in planetary re-entries [32–35]. Such a methodology is opposed to MC simulations, where the distribution is approximated through many realizations: while with MC methods we propagate individual realizations of the initial PDF, with a continuum-based approach we propagate the ensemble of realizations. When not only the first moments of the distribution are needed but its whole shape is of interest, MC simulations require a large number of samples. Instead, the proposed continuum-based propagation method, through the knowledge of the probability density at specific points in the state space, can enable a reconstruction of the probability density function with a reduced number of samples.

The challenge of using a method based on the continuity equation is the post-processing of the data.
As we are propagating a finite set of initial points, with their probability density, it is then necessary to reconstruct this density in the state space starting from discrete values. In [32] a reconstruction methodology has been proposed, which replicates the binning process of MC simulations. For each bin, the density is computed as the mean of the density values of the data points contained in it. This method can be used if the PDF data is uniformly distributed in each bin in all dimensions and if the enclosed volume of the scattered data in each bin is equal [36]. However, even if the initial distribution of samples is uniform, its evolution in time does not necessarily remain uniform and the enclosed volume of the data in each bin does not remain equal. Therefore, such a methodology generates results which present poor agreement with corresponding Monte Carlo simulations [32, 36]. In addition, such a methodology still uses a number of samples comparable to Monte Carlo methods. We instead propose to reconstruct the density using a simplex-based linear interpolation methodology [37] that uses the concept of α-shape [38], which adapts to the evolution of the shape of the state-space volume. In addition, we increase the accuracy of the linear interpolation by including the derivative information into the linear interpolation scheme using the reduced order dual Taylor expansion [39].

The paper presents three relevant test cases, which include uncertainties in the initial conditions, in the ballistic coefficient of the satellite, and in the atmospheric density. The first test case considers a steep re-entry on Earth using a three-state dynamics, the second test case expands the first one to a more complex dynamics, and the third test case features the lifting re-entry of a probe in the Martian atmosphere. The results are presented as one-dimensional and two-dimensional marginal distributions of the relevant parameters.
The distributions for derived quantities of interest, such as the heat rate and the dynamic pressure, are also presented. The paper is organized as follows: Section II describes the methodology for the propagation of the uncertainties using a continuum-based approach; Section III describes the methodology used for the reconstruction of the probability density and for the computation of the marginal probabilities; Section IV presents the application of the density propagation and reconstruction procedure to relevant test cases; Section V contains the discussion and the conclusions.

II. Uncertainty propagation
The proposed methodology uses the continuity equation to propagate the initial joint probability distribution function related to the uncertainties in the initial conditions and parameters and assess its evolution throughout the re-entry process under the influence of the re-entry dynamics. The expression for the continuity equation is as follows [28]:

$$\frac{\partial n(\mathbf{x}, t)}{\partial t} + \nabla \cdot \left[ n(\mathbf{x}, t)\, \mathbf{f}(\mathbf{x}) \right] = \dot{n}^{+} - \dot{n}^{-}, \qquad (1)$$

where x is the vector of the state variables, n(x, t) is the joint probability distribution function (PDF) at time t, f(x) represents the forces acting on the system and takes into account slow, continuous phenomena such as gravity and atmospheric drag, and ṅ⁺ and ṅ⁻ represent the fast and discontinuous events (i.e. sources and sinks). For the case in exam, the source and sink terms were neglected. Knowing the initial density distribution n(x, 0), Eq. (1) allows for the propagation of the density evolution in time, in a system together with the equations of the dynamics. When applied to the propagation of uncertainties, the density represents the probability distribution. This is a Partial Differential Equation (PDE) with the PDF, n(x, t), being the dependent variable. Such an equation regulates the conservation of the total probability mass of the joint PDF through its spatial-temporal evolution due to the forces acting on the system. Eq. (1) can be solved using the Method of Characteristics (MOC), where the partial differential equation is transformed into a set of Ordinary Differential Equations (ODE). As it is convenient to express the evolution of the re-entry trajectory using parameters such as the altitude, the relative velocity, and the flight path angle, we follow the approach of expressing the continuity equation in the state space of the problem in exam, writing the divergence in rectangular coordinates [28]. In the generic case of d state variables, Eq.
(1) can be re-written in rectangular coordinates as follows:

$$\frac{\partial n}{\partial t} + \frac{\partial n}{\partial \alpha_1} v_{\alpha_1} + \ldots + \frac{\partial n}{\partial \alpha_d} v_{\alpha_d} + \left[ \frac{\partial v_{\alpha_1}}{\partial \alpha_1} + \ldots + \frac{\partial v_{\alpha_d}}{\partial \alpha_d} \right] n = 0, \qquad (2)$$

where α_i are the state variables and v_{α_i} the corresponding forces. The sink and source terms have been neglected in this case. Applying the method of characteristics, the PDE can be reduced to the following system of ODEs:

$$\begin{aligned}
\frac{\mathrm{d}t}{\mathrm{d}s} &= 1 \\
\frac{\mathrm{d}\alpha_1}{\mathrm{d}s} &= v_{\alpha_1}(\alpha_1, \ldots, \alpha_d) \\
&\;\;\vdots \\
\frac{\mathrm{d}\alpha_d}{\mathrm{d}s} &= v_{\alpha_d}(\alpha_1, \ldots, \alpha_d) \\
\frac{\mathrm{d}n}{\mathrm{d}s} &= -\left[ \frac{\partial v_{\alpha_1}}{\partial \alpha_1} + \ldots + \frac{\partial v_{\alpha_d}}{\partial \alpha_d} \right] n(\alpha_1, \ldots, \alpha_d)
\end{aligned} \qquad (3)$$

where s is the independent variable. In this study, the continuity equation is applied to two sets of equations of motion. First, a three-state model, which describes the evolution of the re-entry through radius (r), velocity (v), and flight-path angle (γ) under the influence of the planet gravity and the atmospheric drag. The dynamics of this model is described in Eq. (4) and considers a planar motion over a non-rotating planet:

$$\begin{aligned}
\dot{r} &= v \sin\gamma \\
\dot{v} &= -\frac{\rho v^2}{2\beta} - g \sin\gamma \\
\dot{\gamma} &= \left( \frac{v}{r} - \frac{g}{v} \right) \cos\gamma + \frac{\rho v}{2\beta} \frac{C_L}{C_D}
\end{aligned} \qquad (4)$$

where ρ is the atmospheric density, g is the gravitational acceleration, β = m/(C_D S) is the ballistic coefficient, R_p is the radius of the planet, C_L is the lift coefficient, C_D is the drag coefficient, and S is the object cross-section. Applying the MOC of Eq. (3) to this set of equations, the variation in time of the probability density, n, is obtained as follows:

$$\dot{n} = -\left[ \frac{\partial \dot{r}}{\partial r} + \frac{\partial \dot{v}}{\partial v} + \frac{\partial \dot{\gamma}}{\partial \gamma} \right] n. \qquad (5)$$

If uncertainties in additional parameters, other than re-entry states, are to be considered, it is necessary to include them in the equations of motion and add their contribution to the evolution of the density in Eq. (5).
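The bracketed divergence term of Eq. (5) can be derived symbolically from the dynamics of Eq. (4). The sketch below uses sympy; the exponential atmosphere profile and the inverse-square gravity model are illustrative assumptions introduced for the example, not necessarily the models used in the paper.

```python
# Sketch: derive the divergence term of Eq. (5) symbolically for the
# three-state dynamics of Eq. (4). The exponential atmosphere and the
# inverse-square gravity model are illustrative assumptions.
import sympy as sp

r, v, gamma, beta, mu, rho0, H, Rp, clcd = sp.symbols(
    'r v gamma beta mu rho_0 H R_p clcd', positive=True)

rho = rho0 * sp.exp(-(r - Rp) / H)   # assumed exponential density profile
g = mu / r**2                        # inverse-square gravitational acceleration

r_dot = v * sp.sin(gamma)
v_dot = -rho * v**2 / (2 * beta) - g * sp.sin(gamma)
gamma_dot = (v / r - g / v) * sp.cos(gamma) + rho * v / (2 * beta) * clcd

# Bracketed divergence of Eq. (5): dn/dt = -div_f * n along a characteristic
div_f = sp.diff(r_dot, r) + sp.diff(v_dot, v) + sp.diff(gamma_dot, gamma)
n_dot_over_n = sp.simplify(-div_f)
print(n_dot_over_n)
```

Since ∂ṙ/∂r = 0 for this model, only the velocity and flight-path-angle derivatives contribute to the divergence.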
For example, in the case in exam, we want to consider an uncertainty over the ballistic coefficient, β, and include the possibility to take into account uncertainty in the atmospheric density (through an atmospheric correction coefficient, ξ). The resulting system of equations for the propagation of the characteristics becomes:

$$\begin{aligned}
\dot{r} &= v \sin\gamma \\
\dot{v} &= -\frac{\rho(r, \xi)\, v^2}{2\beta} - g(r) \sin\gamma \\
\dot{\gamma} &= \left( \frac{v}{r} - \frac{g(r)}{v} \right) \cos\gamma + \frac{\rho(r, \xi)\, v}{2\beta} \frac{C_L}{C_D} \\
\dot{\beta} &= 0 \\
\dot{\xi} &= 0 \\
\dot{n} &= \left[ \frac{v \rho(r, \xi)}{\beta} + \sin\gamma \left( \frac{v}{r} - \frac{g(r)}{v} \right) \right] n
\end{aligned} \qquad (6)$$

Therefore, the result is the augmented state, (r, v, γ, β, ξ), to be propagated. The second model considered in this study is a six-state representation, which describes the three-dimensional translational re-entry over a rotating Earth. For the case in exam, we decided to express the equations of motion using the radius as the independent variable, instead of the time (de facto obtaining a five-state model). On one side, this allows showing the flexibility of the continuum propagation and, on the other, it simplifies the representation of the output of the propagation as, for example, it is more convenient to extract information at the landing instant, which corresponds to the final radius in the propagation.
The set of equations of motion is the following:

$$\begin{aligned}
\frac{\mathrm{d}\lambda}{\mathrm{d}r} &= \frac{\sin\chi}{r \cos\varphi \tan\gamma} \\
\frac{\mathrm{d}\varphi}{\mathrm{d}r} &= \frac{\cos\chi}{r \tan\gamma} \\
\frac{\mathrm{d}v}{\mathrm{d}r} &= -\frac{v \rho(r, \xi)}{2 \beta \sin\gamma} - \frac{1}{v}\left( g_r(r, \varphi) + \frac{\cos\chi\, g_\varphi(r, \varphi)}{\tan\gamma} \right) + \frac{r \omega_p^2 \cos\varphi}{v}\left( \cos\varphi - \frac{\cos\chi \sin\varphi}{\tan\gamma} \right) \\
\frac{\mathrm{d}\gamma}{\mathrm{d}r} &= \frac{\alpha \rho(r, \xi)}{2 \sin\gamma} - \frac{1}{v^2 \tan\gamma}\left( g_r(r, \varphi) + \cos\chi \tan\gamma\, g_\varphi(r, \varphi) - \frac{v^2}{r} \right) + \frac{2 \omega_p \sin\chi \cos\varphi}{v \sin\gamma} + \frac{r \omega_p^2 \cos\varphi}{v^2}\left( \frac{\cos\varphi}{\tan\gamma} + \cos\chi \sin\varphi \right) \\
\frac{\mathrm{d}\chi}{\mathrm{d}r} &= \frac{\sin\chi \tan\varphi}{r \tan\gamma} + \frac{2 \omega_p}{v}\left( \frac{\sin\varphi}{\sin\gamma} - \frac{\cos\chi \cos\varphi}{\cos\gamma} \right) + \frac{\sin\chi}{v^2 \sin\gamma \cos\gamma}\left( r \omega_p^2 \sin\varphi \cos\varphi - g_\varphi(r, \varphi) \right) \\
\frac{\mathrm{d}\beta}{\mathrm{d}r} &= 0 \\
\frac{\mathrm{d}\xi}{\mathrm{d}r} &= 0 \\
\frac{\mathrm{d}n}{\mathrm{d}r} &= -\left[ \frac{\partial \lambda'}{\partial \lambda} + \frac{\partial \varphi'}{\partial \varphi} + \frac{\partial v'}{\partial v} + \frac{\partial \gamma'}{\partial \gamma} + \frac{\partial \chi'}{\partial \chi} + \frac{\partial \beta'}{\partial \beta} + \frac{\partial \xi'}{\partial \xi} \right] n
\end{aligned} \qquad (7)$$

where λ is the longitude, φ the latitude, v the velocity, γ the flight-path angle, χ the heading angle, β = m/(C_D S) the ballistic coefficient, α = C_L S/m a modified lift coefficient, ξ an atmospheric correction coefficient, ω_p the planet rotation rate, and g_r and g_φ the radial and transversal components of the gravitational acceleration, respectively. The expression for the derivative of the density as a function of r has not been expanded for better readability. In this expression, the primes refer to derivatives with respect to the radius. As can be observed, this second re-entry model also includes the possibility to consider uncertainties in the ballistic coefficient and in the atmospheric density, by including the coefficients β and ξ in the extended state of the problem.

Eqs. (6) and (7) must be integrated numerically, and the integration can be performed using a standard ODE solver such as Runge-Kutta. In this way, the evolution of the density in the state space can be obtained as a function of the considered independent variable.
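As an illustration of this numerical integration, a single characteristic of the augmented three-state system of Eq. (6) can be propagated with a standard solver. The exponential atmosphere and all numerical values below (entry state, ballistic coefficient, planet constants) are illustrative assumptions introduced for the sketch, not the paper's models or test-case values.

```python
# Sketch: propagate one characteristic of Eq. (6): the state (r, v, gamma),
# the uncertain parameters (beta, xi), and the probability density n.
# Exponential atmosphere and all constants are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

MU, RP = 3.986e14, 6.378e6   # Earth gravitational parameter [m^3/s^2], radius [m]
RHO0, H = 1.225, 7200.0      # assumed exponential atmosphere [kg/m^3], [m]
CL_CD = 0.0                  # purely ballistic entry for this sketch

def rhs(t, y):
    r, v, gamma, beta, xi, n = y
    rho = xi * RHO0 * np.exp(-(r - RP) / H)   # xi scales the nominal density
    g = MU / r**2
    r_dot = v * np.sin(gamma)
    v_dot = -rho * v**2 / (2.0 * beta) - g * np.sin(gamma)
    gamma_dot = (v / r - g / v) * np.cos(gamma) + rho * v / (2.0 * beta) * CL_CD
    # dn/dt = -[d r_dot/dr + d v_dot/dv + d gamma_dot/dgamma] n  (Eq. (5))
    div_f = -rho * v / beta - (v / r - g / v) * np.sin(gamma)
    return [r_dot, v_dot, gamma_dot, 0.0, 0.0, -div_f * n]

# 120 km altitude, 7.8 km/s, -10 deg flight-path angle, beta = 200 kg/m^2
y0 = [RP + 120e3, 7.8e3, np.radians(-10.0), 200.0, 1.0, 1.0]
sol = solve_ivp(rhs, (0.0, 60.0), y0, rtol=1e-8, atol=1e-8)
```

In the full method, this integration is repeated for every sampled initial condition, so that each sample carries its own density value along the trajectory.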
As the solution of the re-entry problem is not analytical, it is necessary to sample the uncertainty distribution in the initial states and to propagate the trajectory and the probability density for each sample point.

III. Interpolation method
Once the sampled initial conditions have been propagated using Eq. (3), it is necessary to reconstruct the probability density in the domain at each time step to obtain the total uncertainty and the marginal distributions. The proposed approach is integrated with the continuum propagation described in Section II to reconstruct the probability density from few samples, leveraging the knowledge of the value of the density that is propagated along the characteristics. As can be observed in Fig. 6, during the evolution of the re-entry trajectory, not only the density but also the state-space volume changes and deforms. This is why the method proposed in [32], which performs a uniform binning to estimate the probability density by averaging its value in each bin, provides inaccurate results when the state space starts to deform. In our methodology, instead, we follow the variation of the state-space volume by creating a simplicial representation (i.e. a generalization of the notion of triangle or tetrahedron to arbitrary dimensions) and interpolating the scattered data. Scattered data interpolation is a complex task, and it is even more challenging when the considered data has more than three dimensions, as in the case in exam. Several techniques exist to interpolate scattered data in 1D and 2D, such as spline interpolation [40], multi-variate polynomials [41, 42], and radial basis functions [43]. However, these techniques can be difficult to extend to arbitrary dimensions, or require a regular grid. In this work, we propose to reconstruct the density using linear interpolation based on the Delaunay triangulation [44] of the sampled points. We thus seek to approximate a multivariate function f : X → ℝ with X ⊂ ℝ^d, where the elements of X are denoted by x ≡ (x₁, x₂, …, x_d).
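Delaunay-based linear interpolation of scattered data is available in scipy through `LinearNDInterpolator`, which triangulates the sample points and interpolates barycentrically inside each simplex. A minimal sketch on a 3-D point cloud follows; the point set and the linear test function are illustrative choices:

```python
# Sketch: Delaunay-based (simplicial) linear interpolation of scattered data,
# as provided by scipy; the 3-D point cloud and test function are illustrative.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(400, 3))        # scattered samples in 3-D
vals = pts[:, 0] + 2.0 * pts[:, 1] - pts[:, 2]    # linear test function

# LinearNDInterpolator triangulates the points (Delaunay) and interpolates
# barycentrically inside each simplex; it is exact for linear functions.
interp = LinearNDInterpolator(pts, vals)
q = np.array([[0.5, 0.5, 0.5]])                   # query inside the convex hull
print(interp(q))                                  # approximately [1.]
```

Because the interpolant is linear inside each simplex, a linear function is reproduced exactly (up to floating-point error); queries outside the triangulated volume return NaN.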
We propose a simplicial-based interpolation given that it can be extended to arbitrary dimensions and that it allows for the direct inclusion of derivative information to improve its accuracy [39]. Additionally, this methodology preserves the values at the nodes of the triangulation. This is important as it allows the conservation of a crucial piece of information, that is, the value of the probability density at the sampled points as provided by the continuum-based propagation.

This section is structured as follows: Section III.A describes the linear interpolation methodology; Section III.B discusses the limitations of the Delaunay triangulation and proposes a different approach using α-shapes; Section III.C proposes an improved interpolation methodology; Section III.D describes the procedure adopted to integrate the probability density and obtain the marginal distributions.

A. Linear interpolation

As previously mentioned, the linear interpolation we adopt is based on a simplicial representation obtained through a Delaunay triangulation, which is unique for a given set of points. Given its heritage, uniqueness, and capability to be extended to arbitrary dimensions, it has been selected in this work to construct the simplicial complex (i.e. the union of the simplices forming the triangulated state space) from the propagated scattered data. The construction of the Delaunay triangulation has been carried out using the Python package
Delaunay of the scipy [45] library. Each of the data points in the considered state space (e.g. r, v, γ, β) represents the coordinates of a vertex V of the simplicial complex. At the same time, the probability densities relative to each vertex are the data values of the function f to be interpolated. In general, the simplicial-based linear interpolation can be expressed as follows:

$$\mathcal{L}(\mathbf{x}) = \sum_{i=1}^{N} \lambda_i(\mathbf{x})\, f_i, \qquad (8)$$

where N is the number of vertices of the simplex (i.e. d + 1 in d dimensions), 𝓛(x) is the linear interpolation at a point x inside the considered simplex, λ_i(x) is the i-th barycentric coordinate of the point x, and f_i is the value of the function (the density in our case) at the i-th vertex. Once the barycentric coordinates of the point with respect to the simplex vertices are found, the linear interpolation of Eq. (8) can be performed. In other words, the linear interpolation is obtained through a weighted average of the value of the function at the simplex vertices, with the weights being the barycentric coordinates of the point with respect to the d-dimensional simplex.

B. Alpha shapes
A drawback of the Delaunay triangulation is that it is based on the convex hull of a set of points. Therefore, if the reconstructed shape is concave, additional unwanted triangles will be generated. For example, Fig. 1a shows a concave set of test points, which also contains a hole. It can be observed from Fig. 1b that the Delaunay triangulation generates simplices for the entire convex hull and fills the hole inside the set of points with additional simplices, which can compromise the accuracy of the interpolation.

Therefore, we introduce the concept of α-shape [38]. The α-shape, C_α(V), of a set of points V ⊂ ℝ^d is a subset of the Delaunay triangulation, DT(V). The objective of the α-shape is to eliminate from the Delaunay triangulation all the excess simplices that are formed when a shape is not convex. As these simplices tend to be elongated, it is possible to use a test based on the radius of the circum-hyper-sphere (the equivalent of the circumradius in d dimensions), σ_T, to prune triangles that are deemed too elongated by setting a threshold on this radius. Specifically, a simplex (Δ_T) belonging to the Delaunay triangulation also belongs to the α-shape if
i. σ_T < α and the hyper-sphere of radius σ_T is empty, or
ii. Δ_T is a face of another simplex in C_α(V),

[Fig. 1: Example of a concave set of points (a) and of its Delaunay triangulation (b).]

where α is a hyper-parameter, which has to be selected by the user. We present a possible way to select α in Section III.B.1. The first of the presented conditions is referred to as the alpha test. Following this definition and exploiting the properties of the Delaunay triangulation, it is possible to build the α-shape through the following steps, as described by Edelsbrunner [38]:
1. Compute the Delaunay triangulation of V, knowing that the boundary of the α-shape is contained in it;
2.
Determine C_α(V) by inspecting all the simplices Δ_T ∈ DT(V): if the circum-hyper-sphere of radius σ_T is empty and σ_T < α, we accept Δ_T as a member of C_α(V), together with all its faces;
3. All d-simplices of C_α(V) make up the interior of the α-shape.

Given the aforementioned procedure, the α-shape can be constructed starting from the Delaunay triangulation and removing all the simplices that do not pass the alpha test. As for a d-simplex belonging to a Delaunay triangulation the circum-hyper-sphere is empty by definition [44], the alpha test indicates that the simplex belongs to C_α(V) if σ_T < α. Therefore, the test only requires the computation of the radius of the circum-hyper-sphere, which for a generic d-dimensional simplex is [46]

$$\sigma_T = R = \sqrt{-\frac{\det \mathbf{D}}{2 \det \mathbf{CM}}}, \qquad (9)$$

where D is the matrix of the squared distances between the simplex vertices and CM is its bordered Cayley–Menger matrix. Applying the α-shape algorithm to the set of points of Fig. 1a with an α value of 0.25, we get the α-shape of Fig. 2, thus improving the shape reconstruction with respect to the Delaunay triangulation of Fig. 1b.

The interpolation using the Delaunay triangulation of Eq. (8) can be directly extended to the α-shape, the only difference being that the simplices used are the ones belonging to the α-shape and not to the Delaunay triangulation. To give an example of the linear interpolation using the α-shape, it is possible to associate a weight to each point of Fig. 1a. For example, we can assume that

$$f(\mathbf{x}_i) = y_i^2. \qquad (10)$$

[Fig. 2: Example of α-shape for the set of test points.]

To test the interpolation technique, we randomly select ten points from the set, remove them, and generate the α-shape; we then interpolate the function at the coordinates of the ten test points (Fig. 3a). Finally, we check the root mean square (RMS) error between the interpolated values and the actual value of the function at these test points. Fig. 3b shows the results of this test for the ten test points, highlighting the RMS percent error. The average error is 2.17%, while the maximum is 11.89%.
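The alpha test can be sketched numerically as follows. Only the pruning of the d-simplices (condition i) is shown, with the circumradius computed from the Cayley–Menger determinant as in Eq. (9); the point set and the value of α are illustrative assumptions.

```python
# Sketch of the alpha test: keep only Delaunay simplices whose circumradius
# sigma_T is below alpha. The circumradius follows from the Cayley-Menger
# determinant (Eq. (9)); the point set and alpha value are illustrative.
import numpy as np
from scipy.spatial import Delaunay

def circumradius(verts):
    """Circumradius of a d-simplex given its (d+1) x d vertex array."""
    d2 = np.sum((verts[:, None, :] - verts[None, :, :]) ** 2, axis=-1)
    m = d2.shape[0]
    cm = np.ones((m + 1, m + 1))      # bordered Cayley-Menger matrix
    cm[0, 0] = 0.0
    cm[1:, 1:] = d2
    # R^2 = -det(D) / (2 det(CM)), D being the squared-distance matrix
    return np.sqrt(-np.linalg.det(d2) / (2.0 * np.linalg.det(cm)))

def alpha_complex(points, alpha):
    """All Delaunay simplices passing the alpha test sigma_T < alpha."""
    tri = Delaunay(points)
    kept = [s for s in tri.simplices if circumradius(points[s]) < alpha]
    return tri, np.array(kept)

rng = np.random.default_rng(1)
pts = rng.uniform(size=(200, 2))
tri, kept = alpha_complex(pts, alpha=0.15)
print(len(tri.simplices), len(kept))   # elongated boundary simplices are pruned
```

As a sanity check, for a unit-side equilateral triangle the formula returns the known circumradius 1/√3; in higher dimensions the same code applies unchanged since the Cayley–Menger construction is dimension-agnostic.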
[Fig. 3: Interpolated function and selected interpolation points (a) and RMS percent error at the interpolation points (b).]

1. Alpha value estimation

The value of α in the construction of the α-shape is a hyper-parameter, which determines the simplices that will be removed from the initial Delaunay triangulation and, therefore, how closely the α-shape will resemble the shape of the actual state space (Section III.B). To determine the α value, it is possible to perform a k-fold cross-validation, combined with a stochastic search algorithm. Specifically, we use a Differential Evolution (DE) search method [47], as it proved effective in the optimization of tuning parameters and hyper-parameters [48]. In the k-fold cross-validation, the available set of points is subdivided into k folds. One of the k folds is then held out and used as validation set, while the other k − 1 folds are used as training set. The training set is used to construct the α-shape; then the interpolation at the points of the remaining fold is carried out. As we are interested in providing the best shape approximation for our data-set, we consider two conditions:
1. if any of the test points lies outside the α-shape constructed with the training data, the value of α is rejected;
2. if all the test points are within the α-shape, we compute the overall volume as the sum of the volumes of the simplices.
We repeat this procedure holding out a different fold each time. The output is then the average of the scores obtained for each fold. Throughout this study, the selected number of folds has been K =
5. The result of the k-fold cross-validation depends on the value of α; we apply a DE algorithm to find the value of α that minimizes the volume of the α-shape computed following the previous points. Specifically, we use the differential_evolution function of the Python scipy.optimize library [45]. Differential evolution is a stochastic direct search method in which a vector of parameters of size N_p, the population size, is evolved for a specified number of generations N_g to find a global optimal solution. The initial population is randomly chosen inside the provided bounds and a uniform distribution for the parameters in the search space is assumed. In our case, this translates into selecting a value of α_i inside the interval (α_min, α_max). The new parameter vectors are generated using the best1bin strategy [49]. For this study, the mutation rate η_m has been set to a typical value and the recombination rate to η_r = 0.
7, which are both typical values. The population size has been set to N_p =
40 and the number of generations to N_g =
60. The presented strategy, however, can become computationally cumbersome when the dimension of the problem and/or the number of points increases. For example, obtaining the alpha values for the test case of Section IV.A takes around 60 seconds on the laptop used for the simulations in the study. This computational time can be too long with respect to the other operations performed. Possible mitigation strategies are the use of the same α value for different snapshots as long as the state space does not deform considerably, the use of the α value of the previous snapshot as the starting point for the current one to improve the convergence, a larger parallelization, and a reduction of the number of folds in the k-fold cross-validation. For the cases in which this procedure is deemed impractical, alternative procedures can be used to select the α value. For example, the distances between each point and its nearest neighbor can be evaluated. Then, the value of α can be selected as the average, maximum, minimum, or median value among these distances [50]. For example, the minimum value will tend to discard more elongated triangles, but can also exclude parts of the tails of the distribution. The selection among these values can be performed by testing them on selected points, similarly to the cross-validation previously described. In addition, they can be used as starting values for the differential evolution algorithm.

C. Gradient enhanced interpolation: the reduced dual Taylor expansion
As shown in Fig. 3b, a direct application of the linear interpolation methodology can result in relative errors larger than 10% at selected points. We can expect that, with an increase in the number of dimensions of the problem and in the complexity of the state space, the accuracy of the interpolation may also degrade. An error in the interpolation directly maps into an error in the computation of the integral of the probability density and therefore worsens the approximation of the marginals, especially if large errors concentrate in high-density areas. Therefore, we propose a novel methodology that integrates supplementary derivative data into the presented interpolation scheme (Eq. (8)) to improve the accuracy of the approximation. The method we propose replaces the value of the function at the simplex vertices, f_i, with its reduced dual Taylor expansion [39]. The procedure applies to all schemes that are based on function values at discrete nodes, provided the derivative data is available at the same nodes as the function values. This characteristic perfectly fits the interpolation procedure we propose, allowing for its extension and enhancement. The n-th order reduced dual Taylor expansion of the m-th kind, 𝒟^{mn}_x, is defined as [39]:

$$\mathcal{D}^{mn}_{x}[f] := \sum_{|\kappa| \le n} \frac{1}{\kappa!}\, C^{mn}_{|\kappa|}\, (x - \cdot)^{\kappa} f^{(\kappa)}(\cdot), \qquad (11)$$

with

$$C^{mn}_{|\kappa|} := \binom{m+n}{m}^{-1} \binom{m+n-|\kappa|}{m}, \qquad (12)$$

where C^{mn}_{|κ|} are the reduction coefficients. The only difference between a standard dual Taylor expansion and Eq. (11) are exactly these coefficients. The modified expression can be used to raise the order of generic multivariate approximation schemes, provided they use a polynomial approximation of data at specified nodes and that the derivative information is available at these nodes. The implementation of Eq.
(11) in our linear approximation scheme is straightforward: we justneed to replace the input samples of 𝑓 with the corresponding samples of D 𝑚𝑛𝑥 [ 𝑓 ] , which can be obtained using thesupplementary derivative data. The interpolation scheme of Eq. (8) is then replaced by: (cid:101) L ( x ) = 𝑁 ∑︁ 𝑖 = 𝜆 𝑖 ( x )D 𝑚𝑛𝑥 [ 𝑓 ] ( x 𝑘 ) . (13)The approximation order will then be raised from 𝑚 to 𝑚 + 𝑛 [39]. The presented procedure perfectly adapts to the12hosen methodology for the propagation of uncertainties (Section II). In fact, as we already compute the Jacobian of thedynamics, it is of little effort and computational cost to include the expressions for the derivative of the density withrespect to the integration variables and return it as an additional output of the density propagation. Eq. (14) provides anexample for the dynamics of Eq. (6). 𝜕𝑛𝜕𝑟 = − (cid:18) 𝑣 sin 𝛾𝑟 − 𝜕𝑔 ( 𝑟 ) 𝜕𝑟 cos 𝛾𝑣 − 𝜕𝜌 ( 𝑟, 𝜉 ) 𝜕𝑟 𝑣𝛽 (cid:19) 𝑛 𝜕𝑛𝜕𝑣 = − (cid:18) 𝑔 ( 𝑟 ) cos 𝛾𝑣 − sin 𝛾𝑟 − 𝜌 ( 𝑟, 𝜉 ) 𝛽 (cid:19) 𝑛 𝜕𝑛𝜕𝛾 = − (cid:18) 𝑔 ( 𝑟 ) sin 𝛾𝑣 − 𝑣 cos 𝛾𝑟 (cid:19) 𝑛 𝜕𝑛𝜕𝛽 = − 𝑣𝜌 ( 𝑟, 𝜉 ) 𝛽 𝑛 𝜕𝑛𝜕𝜉 = − 𝑣𝛽 𝜕𝜌 ( 𝑟, 𝜉 ) 𝜕𝜉 𝑛 (14)With this procedure we obtain at each node the required information for a direct application of Eq. (13). Todemonstrate the improvement introduced by the gradient-enhanced interpolation, the test case of Fig. 3a is performedagain, this time using the reduced dual Taylor expansion . For the function of Eq. (10), the derivatives with respect to 𝑥 and 𝑦 are: 𝜕 𝑓𝜕𝑥 = 𝜕 𝑓𝜕𝑦 = 𝑦. (15)Including this information, we can perform the interpolation for the same test points. Fig. 4 shows the resultingRMS percent error, which is close to zero. The introduction of the derivative information has thus strongly improved theresults of the linear interpolation. It has to be noted that this is a particular case: the interpolated function is quadratic(Eq. 
(15)) and with the reduced dual Taylor expansion we introduce a second order approximation, which matches theorder of the function to be interpolated. D. Marginalization
The density reconstruction using alpha shapes and the reduced dual Taylor expansion can be used to obtain the marginal densities of the relevant variables. In our application, this allows for a better understanding of the re-entry under uncertainties, predicting how they influence the evolution in time of the re-entry velocity, altitude, flight-path angle, and derived quantities such as the dynamic pressure and heat rate. Given the generic d-dimensional state space $P_s = \{x_1, x_2, \ldots, x_d\}$, the one-dimensional and two-dimensional marginals can be computed as follows:
Fig. 4 RMS percent error for the interpolation enhanced by the reduced dual Taylor expansion.

$$ m_{x_m} = \int_{P_s \setminus \{x_m\}} n(x_1, \ldots, x_d) \, dx_1 \ldots dx_{m-1}\, dx_{m+1} \ldots dx_d \qquad (16) $$

$$ m_{x_m x_n} = \iint_{P_s \setminus \{x_m, x_n\}} n(x_1, \ldots, x_d)\, dx_1 \ldots dx_{m-1}\, dx_{m+1} \ldots dx_{n-1}\, dx_{n+1} \ldots dx_d, \qquad (17) $$

which is an integration of the probability density over all the dimensions of the state space except the ones of the marginal axes. These expressions for the computation of the marginals apply to continuous functions. However, for the case at hand, only a discrete representation of the probability density is available through the interpolation procedure described in the previous sections. The computation of the marginals is here applied to the snapshots obtained through integration of Eq. (3), i.e., the scattered data in the state space at each time step separately. As the procedure is similar for both one-dimensional and two-dimensional marginals, for a clearer explanation we focus here on the computation of one-dimensional marginals. To do so, it is necessary to adapt Eq. (16) to scattered data. In addition, it is necessary to consider the nature of the data of the problem and, specifically, the evolution of the state space geometry with time. As shown for example by Fig. 6, during the re-entry process the state space tends to suffer a substantial deformation. We can thus obtain elongated state spaces, which are difficult to interpolate as a whole, even when α-shapes are employed. Consequently, it was decided to overcome this issue by using a binning strategy to compute the marginals; this strategy can also reduce the memory usage issues that can arise when generating the α-shape in high dimensions. First, we select the axis along which the marginal must be computed, $x_m$. Then, this axis is subdivided into $N_b$ equally distributed bins, whose width is given by:

$$ s_b = \frac{\max(x_m) - \min(x_m)}{N_b}. \qquad (18) $$

In this way, slices of the original set of points are constructed to obtain a set B containing the $N_b$ subsets of the scattered data into which $x_m$ is subdivided:

$$ B = \left\{ \mathbf{x} \in V \;\middle|\; x_{b,i} \le x_m \le x_{b,i+1} \quad \forall\, i \in 1, \ldots, N_b \right\}, \qquad (19) $$

where $x_m$ is the value of the m-th coordinate of the point x and $x_{b,i}$ is the coordinate of the i-th bin edge. For each subset of points contained in B, an α-shape is constructed to perform the interpolation and compute the integral of the density relative to the considered bin. We then compute the marginal probability for each bin by dividing the integral by the width of the bin, so that the marginal is given by:

$$ m_{x_m} = \left\{ \frac{W(B_i)}{s_b} \quad \forall\, B_i \in B \right\}, \qquad (20) $$

where $W(B_i)$ is the integral of the probability density relative to the i-th set of points $B_i$ obtained as expressed by Eq. (19). At this point, it is important to outline the computational procedure for this integral. For the sake of clarity, let us consider a two-dimensional example, for which Fig. 5a shows a sample distribution of scattered data with an elongated state space. Suppose we are interested in the marginal distribution relative to $x_1$. For each bin subdivision, as highlighted in Fig. 5a, an alpha-shape is created using the points inside the bin. However, if we only use the points strictly contained inside the bin, the triangulation will exclude a portion of the volume of the state space, with a consequent underestimation of the integral. Therefore, each bin is expanded using buffer widths on both sides (Fig. 5b). In this way, the expanded bin, $B'_i$, allows for a triangulation which fully includes the original bin.
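As a concrete illustration, the binning procedure above can be sketched in Python. This is a simplified stand-in, not the paper's implementation: the α-shape of each (buffered) slice is replaced by a plain Delaunay triangulation, the gradient-enhanced interpolant of Eq. (13) by a simple average of the vertex densities, and the volume-ratio rescaling of Eq. (22) by a width ratio. All function and variable names are illustrative.

```python
import math
import numpy as np
from scipy.spatial import Delaunay

def bin_marginal(points, n_vals, axis=0, n_bins=8, buffer_frac=0.25):
    """One-dimensional marginal of a scattered density via binning.

    For each bin along `axis`, the (buffered) slice of points is
    triangulated, the density is evaluated at the simplex barycenters
    (here: mean of the vertex values, a stand-in for Eq. (13)), and the
    contributions density * simplex volume are summed as in Eq. (21).
    """
    x = points[:, axis]
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    width = edges[1] - edges[0]
    marginal = np.zeros(n_bins)
    d = points.shape[1]
    for i in range(n_bins):
        # expand the bin with buffers so the triangulation covers it fully
        lo = edges[i] - buffer_frac * width
        hi = edges[i + 1] + buffer_frac * width
        mask = (x >= lo) & (x <= hi)
        if mask.sum() < d + 2:
            continue  # not enough points to triangulate this slice
        pts, dens = points[mask], n_vals[mask]
        try:
            tri = Delaunay(pts)
        except Exception:
            continue  # degenerate slice; skip it
        simplices = tri.simplices
        n_bary = dens[simplices].mean(axis=1)      # density at barycenters
        vecs = pts[simplices[:, 1:]] - pts[simplices[:, :1]]
        vols = np.abs(np.linalg.det(vecs)) / math.factorial(d)
        w_ext = np.sum(n_bary * vols)              # integral over buffered slice
        # crude rescaling from the buffered slice back to the bin
        # (the paper instead uses the alpha-shape volume ratio, Eq. (22))
        w_bin = w_ext * width / (hi - lo)
        marginal[i] = w_bin / width                # Eq. (20)
    return edges, marginal
```

On a two-dimensional Gaussian cloud with its analytic density attached to each point, the recovered marginal integrates to approximately one and peaks near the center, up to the hull truncation and the crude rescaling.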
Fig. 5 Examples of α-shape construction for a bin (a) and for an extended bin (b).

Once this extended α-shape has been created, the interpolation is performed. The interpolating points are the barycenters of the simplices of the α-shape, so that the integral of the density inside the extended bin is

$$ W(B'_i) = \sum_{k=1}^{N_{s_i}} n(\mathbf{x}_{G_k}) \, v_k, \qquad (21) $$

where $n(\mathbf{x}_{G_k})$ is the interpolated value of the probability density at the barycenter of the k-th simplex $\mathbf{x}_{G_k}$, as obtained using Eq. (13) of the gradient-enhanced interpolation scheme described in Section III.C, $N_{s_i}$ is the number of simplices contained inside the extended bin $B'_i$, and $v_k$ is the volume of the k-th simplex. The integral computed using Eq. (21) refers to the extended bin; it is thus necessary to re-scale this value. We do not know with precision the volume of the state space contained inside the bin. However, it is bounded between the volume of the α-shape constructed with the initial bin, $C_\alpha(B_i)$ (Fig. 5a), and that of the α-shape constructed with the extended bin, $C_\alpha(B'_i)$ (Fig. 5b). Therefore, the integral is scaled with the average of these two volumes as follows:

$$ W(B_i) = W(B'_i)\, \frac{\mathrm{Mean}\!\left(v_{C_\alpha(B_i)},\, v_{C_\alpha(B'_i)}\right)}{v_{C_\alpha(B'_i)}}. \qquad (22) $$

Repeating this procedure for each bin in the set B allows for the computation of the one-dimensional marginal. For the computation of the two-dimensional marginals the procedure is analogous, with the binning performed over the two dimensions of the marginal ($x_m$, $x_n$).

IV. Test cases
This section presents a series of test cases relevant to the problem at hand, applying the propagation methodology described in Section II and the reconstruction methodology outlined in Section III to planetary entry cases with different dynamical, gravitational, and atmospheric models. In this way, we show the applicability of the continuum propagation with models of different complexity. The results in terms of marginal distributions are compared with Monte Carlo simulations in terms of accuracy and execution time. In addition, for selected cases, the marginals of derived quantities of interest, such as the dynamic pressure and the heat rate, are computed, exploiting the already derived marginal distributions of the propagated variables. Finally, we use the derived marginals to assess the compliance of the mission with relevant constraints.
A. Three-state steep Earth re-entry
This section features a so-called strategic re-entry [51], which represents a vehicle with a high ballistic coefficient on a steep, high-energy trajectory. The aim of this type of re-entry is to pinpoint the impact location on Earth more precisely. The test case is performed using the three-state dynamics of Eq. (6). For the modeling of the atmosphere, a simple exponential model is adopted (Eq. (23)), while for the gravitational acceleration an inverse-square model is considered (Eq. (24)). The expressions for the two models are as follows:

$$ \rho(r) = \rho_0 \exp\left\{ \frac{H_0 - (r - R_p)}{H} \right\} \qquad (23) $$

$$ g(r) = \frac{\mu_p}{r^2} \qquad (24) $$

where $\rho_0$ is a reference atmospheric density, $H_0$ and H are constants related to the atmosphere of the planet, and $\mu_p$ is the gravitational parameter of the planet. The initial conditions of the re-entry, together with their uncertainties and the parameters used for the models, are summarized in Table 1. The uncertainties have been modeled with a Gaussian distribution, so that the values of the initial conditions of Table 1 (h, v, γ, and β) represent the mean of a multivariate Gaussian.

Table 1 Initial conditions and relevant parameters for the three-state strategic re-entry on Earth.
State                          Symbol    Unit      μ           σ
Initial velocity               v         km/s      7.2         0.5
Initial flight-path angle      γ         °         -30.0       0.1
Initial altitude               h         km        125         2
Ballistic coefficient          β         kg/m²
Earth radius                   R_p       km        6378.1
Earth gravitational parameter  μ_p       m³/s²     3.986×10¹⁴
Reference atmospheric density  ρ₀        kg/m³
Atmospheric scale height       H         km        8.3
Reference altitude             H₀        km        0
Lift-to-drag ratio             C_L/C_D

For this test case, uncertainties are considered in the initial states (r, v, γ) and in the ballistic coefficient (β). No uncertainty in the atmosphere is considered, so the ξ coefficient is neglected. We decided to consider the ballistic coefficient as it can influence the shape and evolution of the re-entry trajectory and the prediction of the landing location. In addition, the exact quantification of the ballistic coefficient at re-entry can be difficult, as it depends on the mass, cross-section, and drag coefficient of the spacecraft. The mass of the spacecraft may not be completely known, as the exact amount of residual propellant can be difficult to assess; the cross-sectional area of the satellite and the exact value of the drag coefficient may also be uncertain. Following the procedure outlined in Section II, once the initial uncertainty distribution has been defined, it is sampled and each sample is propagated using Eq. (6). In this case, we draw 750 samples. The results of the propagation using the continuity equation for selected snapshots (time instants, counted in seconds after the starting epoch of the simulation) are summarized in Fig. 6. The pairplot of Fig. 6 shows the projection of the sampled points for the snapshot at time t =
30 s.

Fig. 6 Pairplot of the probability density propagation for the strategic re-entry (snapshot at t = 30 s).

Along the main diagonal, the value of the density (n) associated with each sample is presented for each variable: the diagonal plot in the first column shows the distribution of the samples in r and the associated density values, the one in the second column refers to the velocity v, the one in the third column refers to the flight-path angle γ, and the one in the last column gives the density distribution as a function of the ballistic coefficient β. The off-diagonal two-dimensional plots show the relation between different couples of variables; their color map is associated with the magnitude of the density of each point. It is possible to observe how the state space, which started from a Gaussian structure, suffers a substantial deformation as time passes. The deformation represents one of the challenges in reconstructing the probability density for the entire state-space volume.
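The continuum propagation producing these snapshots can be sketched as follows. The three-state dynamics, the exponential atmosphere of Eq. (23), and the inverse-square gravity of Eq. (24) are augmented with the continuity equation along trajectories, dn/dt = −n ∇·f. The sign conventions of the planar dynamics, the ballistic-coefficient value, and the unit initial density are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU_P = 3.986e14      # Earth gravitational parameter [m^3/s^2]
R_P = 6378.1e3       # Earth radius [m]
RHO0, H0, H = 1.225, 0.0, 8.3e3   # exponential atmosphere (rho0 assumed)

def rho(r):
    return RHO0 * np.exp((H0 - (r - R_P)) / H)

def g(r):
    return MU_P / r ** 2

def rhs(t, y, beta):
    """Three-state entry dynamics augmented with the density n.

    Along a trajectory the continuity equation reduces to
    dn/dt = -n * div(f), where div(f) collects d(vdot)/dv and
    d(gammadot)/dgamma for this planar formulation.
    """
    r, v, gamma, n = y
    rdot = v * np.sin(gamma)
    vdot = -rho(r) * v ** 2 / (2.0 * beta) - g(r) * np.sin(gamma)
    gammadot = (v / r - g(r) / v) * np.cos(gamma)
    div_f = -rho(r) * v / beta - (v / r - g(r) / v) * np.sin(gamma)
    return [rdot, vdot, gammadot, -n * div_f]

# draw samples from the initial (assumed) Gaussian and propagate each one
rng = np.random.default_rng(0)
mean = [R_P + 125e3, 7.2e3, np.radians(-30.0)]
std = [2e3, 0.5e3, np.radians(0.1)]
samples = rng.normal(mean, std, size=(200, 3))
beta = 300.0  # assumed ballistic coefficient [kg/m^2]

snapshots = []
for r0, v0, g0 in samples:
    # n0 = 1 is a stand-in for the joint PDF evaluated at the sample
    sol = solve_ivp(rhs, (0.0, 24.0), [r0, v0, g0, 1.0], args=(beta,),
                    rtol=1e-8, atol=1e-10)
    snapshots.append(sol.y[:, -1])  # (r, v, gamma, n) at the t = 24 s snapshot
snapshots = np.array(snapshots)
```

Because the divergence of the flow is negative along these steep trajectories, the propagated density value grows in time, which is the volume contraction visible in the pairplots.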
1. Comparison
Figs. 7a to 7c show the one-dimensional marginals obtained with the density-based approach (DB) and the Monte Carlo sampling (MC) for the snapshot at time t = 24 s. In this comparison we have used 750 samples for the DB method and 750 (shaded histogram) and 5000 (dashed histogram) samples for the MC method.

Fig. 7 DB vs. MC one-dimensional marginals comparison at t = 24 s. DB: marginals with 750 samples. MC: marginals with 750 and 5000 samples.

To better compare the obtained marginals, we measure the difference between the distributions obtained with the DB and MC methods. To do so, we introduce two metrics commonly used in statistics to compare probability distributions: the Hellinger distance ($\Delta_H$) and the first Wasserstein distance ($\Delta_W$). The Hellinger distance and the first Wasserstein distance between two PDFs $\mathcal{P}(x)$ and $\mathcal{Q}(x)$ are defined as follows:

$$ \Delta_H(\mathcal{P}, \mathcal{Q}) = \sqrt{\frac{1}{2} \int_{-\infty}^{+\infty} \left( \sqrt{\mathcal{P}} - \sqrt{\mathcal{Q}} \right)^2 dx} \qquad (25) $$

$$ \Delta_W(\mathcal{P}, \mathcal{Q}) = \inf_{\pi \in \Gamma(\mathcal{P}, \mathcal{Q})} \int \lVert x - y \rVert \, d\pi(x, y) \qquad (26) $$

where $\Gamma(\mathcal{P}, \mathcal{Q})$ is the collection of measures with marginals $\mathcal{P}$ and $\mathcal{Q}$. The Hellinger distance is a form of divergence measuring the difference between two distributions and always lies between 0 and 1. The Wasserstein distance can be interpreted as the amount of work needed to transform one distribution into the other. Figs. 8a and 8b show the value of the Hellinger and Wasserstein distance, respectively, as a function of time (i.e., for the different snapshot fittings) for the variable v, comparing the DB and MC distributions obtained with 750 samples to a reference distribution, namely the MC marginal obtained with 5000 samples.
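The two metrics can be evaluated from sampled marginals as sketched below; scipy provides the first Wasserstein distance directly, while the Hellinger distance of Eq. (25) is approximated here on a common histogram grid. The sample sets are illustrative.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def hellinger(p, q, dx):
    """Discrete approximation of Eq. (25) for two binned PDFs p, q
    sharing the same bin width dx."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2) * dx)

# illustrative comparison between two sampled one-dimensional marginals
rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, 5000)   # e.g. samples behind one marginal
b = rng.normal(0.1, 1.0, 5000)   # e.g. the reference MC samples

edges = np.linspace(-5.0, 5.0, 41)
p, _ = np.histogram(a, bins=edges, density=True)
q, _ = np.histogram(b, bins=edges, density=True)

d_h = hellinger(p, q, edges[1] - edges[0])
d_w = wasserstein_distance(a, b)
```

For two unit-variance Gaussians shifted by 0.1, the Wasserstein distance estimate is close to the mean shift itself, which matches its "work to move mass" interpretation.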
Fig. 8 Variation of the Hellinger distance (a) and Wasserstein distance (b) in time for the velocity. Both distances compare the 750-sample marginals of the DB and MC methods to the 5000-sample MC marginal.
It is possible to observe that the DB method shows a better performance in both metrics from the initial states until about 28 seconds into the re-entry. From that point on, the MC method performs slightly better. This is probably due to the characteristics of the simulations: after 30 seconds, some of the samples reach the ground and are thus no longer used in the interpolation, so a loss of performance can be expected (as shown by the Hellinger distance). On the other hand, it is surprising that the MC method considerably improves in both metrics when fewer points are available. The comparison after this instant might be less accurate, given that the reference distribution obtained with 5000 samples also suffers from a proportional reduction in the number of points available. Nonetheless, we show in Table 2 the comparison between the average value and the standard deviation, over all the snapshots, of the Hellinger and Wasserstein distances for r, v, and γ for the DB and MC methods.

Table 2 Comparison of mean and standard deviation of the Hellinger and Wasserstein distances for the considered states.

          r                   v                   γ
          μ_ΔH      σ_ΔH      μ_ΔH      σ_ΔH      μ_ΔH      σ_ΔH
DB        0.1354    0.0327    0.0986    0.0313    0.0967    0.0064
MC        0.1679    0.0623    0.1889    0.0840    0.1195    0.0095
          μ_ΔW      σ_ΔW      μ_ΔW      σ_ΔW      μ_ΔW      σ_ΔW
DB
MC

It can be observed that both metrics are, on average, lower for the DB method than for the MC method with the same number of samples, and the standard deviation is also smaller in all instances, showing that the accuracy of the proposed method is competitive with MC at an equivalent sample size. A comparison of the computational effort for the strategic test case is presented in Table 3 for the DB case with 750 samples and the MC cases with 750 and 5000 samples, respectively.
The computational times have been subdivided into the propagation time, i.e., the time needed to propagate the trajectories, and the marginalisation time, i.e., the time needed to reconstruct the marginal distributions from the scattered samples. The simulations were run on a Windows laptop equipped with an Intel Core i7-1065G7 @ 1.3 GHz processor and 16 GB of RAM.
Table 3 Runtime comparison between the DB and MC methods for the strategic test case.
Case             Propagation time (s)    Marginalisation time (s)    Total time (s)
DB - Np = 750    8.2                     1.2                         9.4
MC - Np = 750    8.2                     0.01                        8.21
MC - Np = 5000   74.0                    0.06                        74.06

Fig. 9 shows a visual comparison between the two-dimensional marginals in altitude and velocity for two distinct snapshots (the later at t = 32 s), obtained with the DB approach using 750 samples (Figs. 9a and 9d) and the MC approach using 750 and 5000 samples, respectively (Figs. 9b, 9c, 9e and 9f). A feature that can be observed in the obtained marginals (Figs. 7 and 9) is that the DB approach tends to produce more localized distributions. On the one hand, the DB methodology uses the actual value of the probability density to obtain
Fig. 9 DB vs. MC two-dimensional marginals in h-v. Top row: earlier snapshot; bottom row: snapshot at t = 32 s. Panels (a) and (d): DB method with 750 samples; (b) and (e): MC method with 750 samples; (c) and (f): MC method with 5000 samples.

the marginals, while Monte Carlo estimates them and may under-predict or over-predict the distributions. However, the reconstructed marginals with the DB method are only as good as the fitting procedure used and the integration performed. The marginalization procedure proposed in Section III.D, together with the interpolation presented in Section III.C, introduces some simplifications and is limited to the envelope of the state space defined by the sampled points, which are then used to construct the α-shape. This feature must be monitored, as it may result in a lower accuracy of the DB method at the boundaries of the distribution, where a lower density of samples is available; an issue that might be mitigated by a more spatially efficient sampling technique. Fig. 10 shows the relative difference between the mean value predicted by the DB method with 750 samples and the MC method with 5000 samples at each snapshot for the three variables of interest (r, v, γ). It can be observed from Fig. 10 that the difference between the mean values predicted by the MC and DB methods remains below 1% for both the radius and the flight-path angle, while it increases up to 8% for the velocity in the last part of the trajectory. This behavior confirms that in the last part of the trajectory, when fewer samples are available (at the last instant only 110 samples out of 750 remain), the performance deteriorates as expected. Fig. 11 shows instead the difference between the standard deviation predicted by the DB method with 750 samples and the MC method with 5000 samples for each snapshot. The results show that the standard deviation predicted by the DB method follows the one estimated with the MC method. It also confirms what could be observed in Fig. 7 and Fig. 9
Fig. 10 Evolution of the relative difference between the expected value predicted by the DB method (750 samples) and the MC method (5000 samples) for r, v, and γ.

with the DB method's tendency to underpredict the tails of the distributions.
Fig. 11 Evolution of the standard deviation predicted by the DB method (750 samples) and the MC method (5000 samples) for r (a), v (b), and γ (c).
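The mean and standard deviation compared in Figs. 10 and 11 can be recovered directly from a reconstructed one-dimensional marginal; a minimal sketch, with illustrative names and a known Gaussian marginal as the check, is:

```python
import numpy as np

def moments_from_marginal(centers, marginal, width):
    """Mean and standard deviation of a variable from its binned marginal
    density (bin centers, density values, bin width)."""
    p = marginal * width
    p = p / p.sum()                      # renormalize the bin probabilities
    mean = float(np.sum(centers * p))
    std = float(np.sqrt(np.sum((centers - mean) ** 2 * p)))
    return mean, std

# check against a known Gaussian marginal (mu = 7.2, sigma = 0.5, assumed)
centers = np.linspace(7.2 - 4.0, 7.2 + 4.0, 161)
width = centers[1] - centers[0]
pdf = np.exp(-0.5 * ((centers - 7.2) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))
mu_db, sigma_db = moments_from_marginal(centers, pdf, width)

# relative difference, as plotted in Fig. 10 (reference value assumed known)
rel_err = abs(mu_db - 7.2) / 7.2
```

On a fine grid covering the distribution, the recovered moments match the analytic mean and standard deviation closely; the residual error reflects only the grid truncation and discretization.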
2. Mechanical and thermal loads compliance
During re-entry assessments it is of interest not only to predict the uncertainties in the position and velocity of the spacecraft, but also to understand how they transfer to other relevant quantities, such as the dynamic pressure and the heat rate. We compute these quantities using the following expressions:

$$ \bar{q} = \frac{1}{2} \rho(h) v^2 \qquad (27) $$

$$ \dot{Q} = \bar{F}_q K \sqrt{\frac{1}{r_n}} \sqrt{\frac{\rho(h)}{\rho_{SL}}} \left( \frac{v}{v_c} \right)^{3.15}, \qquad (28) $$

where K is a dimensional constant, $r_n$ is the curvature radius at the stagnation point in meters, $\rho_{SL}$ is the air density at sea level in kg/m³, $v_c$ is a reference velocity, and $\bar{F}_q$ is an averaging factor that depends on the shape of the object, its motion, and the re-entry regime. The expression for the heat rate is a simplified version of the Detra-Kemp-Riddell (DKR) correlation [52], where we have omitted the term related to the stagnation-point enthalpy. The DKR correlation is commonly used to predict the heat rate for hypersonic entries in the Earth atmosphere [13, 53, 54]. It is applicable to continuum flows, so it is less accurate at higher altitudes; however, for this demonstrative example it has been used to compute the heat rate for the entire trajectory. If we select the characteristics of the spacecraft, i.e., the curvature radius at the stagnation point and the averaging factor, both Eqs. (27) and (28) are only a function of the altitude and the velocity. We can thus exploit the previously computed marginal distributions in h and v to directly obtain information on the dynamic pressure and the heat rate during re-entry. In fact, we can apply a transformation and scale the density accordingly. For example, in the case of the heat rate we have:

$$ \varphi(h, v) = v, \qquad \psi(h, v) = \bar{F}_q K \sqrt{\frac{1}{r_n}} \sqrt{\frac{\rho(h)}{\rho_{SL}}} \left( \frac{v}{v_c} \right)^{3.15}. \qquad (29) $$

When we apply this transformation, we need to scale the density accordingly, which can be accomplished by dividing the density in the original variables by the determinant of the Jacobian of the transformation as follows:

$$ n(v, \dot{Q}) = n(h, v) \begin{vmatrix} \dfrac{\partial \varphi}{\partial h} & \dfrac{\partial \varphi}{\partial v} \\[4pt] \dfrac{\partial \psi}{\partial h} & \dfrac{\partial \psi}{\partial v} \end{vmatrix}^{-1}. \qquad (30) $$

With an equivalent procedure, we can also obtain the marginals for the dynamic pressure. Figs. 12a and 12b show examples for both the dynamic pressure and the heat rate for the test case at hand. The heat rate has been obtained considering a spherical shape with a one-meter radius; the corresponding averaging factor is $\bar{F}_q = 0.234$ [14]. These two-dimensional marginals can then be integrated along v to obtain the one-dimensional marginals for the dynamic pressure and heat rate. We can combine them and obtain the evolution in time of the marginal probability for both the dynamic pressure and the heat rate. Figs. 13a and 13b show this time evolution. Each plot also features a threshold (in red); these values can be set by the user to check different scenarios, for example, the limits in dynamic pressure and heat rate that a re-entry capsule can withstand. For this test case, thresholds of 68 kPa and 77 W/cm² [55] have been adopted for the dynamic pressure and the heat rate, respectively. The top part of each plot represents the probability at each time step of crossing the limit. Initially, no part of the dynamic pressure and heat rate distributions crosses the thresholds. As time passes, the distributions shift towards higher dynamic pressures and heat rates, so that parts of the distribution cross the threshold, increasing the probability. In the final states, the probability density starts to reduce again. This happens because some of the initial samples reach the ground before others and, when they do, their contribution to the total probability weight is removed.

Fig. 12 Two-dimensional DB marginals of velocity vs. dynamic pressure (a) and velocity vs. heat rate (b) at time t = 32 s.
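For an exponential atmosphere, the Jacobian determinant of the transformation of Eq. (29) has a simple closed form: since ∂φ/∂h = 0 and ∂φ/∂v = 1, the determinant reduces to |∂ψ/∂h| = Q̇/(2H). A sketch, with assumed constants that are not the paper's exact values:

```python
import numpy as np

H = 8.3e3                 # atmospheric scale height [m]
RHO0, RHO_SL = 1.225, 1.225
F_Q, K, R_N, V_C = 0.234, 1.1e8, 1.0, 7924.8   # assumed DKR-type constants

def rho(h):
    return RHO0 * np.exp(-h / H)

def qdot(h, v):
    """Simplified DKR-type heat rate of Eq. (28) (constants assumed)."""
    return F_Q * K * np.sqrt(1.0 / R_N) * np.sqrt(rho(h) / RHO_SL) * (v / V_C) ** 3.15

def density_in_v_qdot(n_hv, h, v):
    """Eq. (30): map n(h, v) to n(v, qdot) by dividing by |det J|.

    With (phi, psi) = (v, qdot(h, v)) and an exponential atmosphere,
    d(qdot)/dh = -qdot / (2 H), hence |det J| = qdot / (2 H).
    """
    return n_hv / (qdot(h, v) / (2.0 * H))
```

A finite-difference check of ∂ψ/∂h confirms the closed-form determinant used in the mapping.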
Fig. 13 Dynamic pressure (a) and heat rate (b) marginal distribution evolution in time. In red, the specific thresholds; on top, the probability in time of crossing the threshold.

B. Six-state steep Earth re-entry
The example presented in this section is an extension of the test case of Section IV.A, in which a six-state propagation is used instead (Eq. (7)) and the gravity and atmosphere models have an increased complexity. The gravitational model takes into account the J2 effect [56], and the atmospheric model is the 1976 US Standard Atmosphere [57]. In order to use the US Standard Atmosphere model in the continuity equation, a spline interpolation of the density has been performed using the UnivariateSpline class of the scipy Python package. The UnivariateSpline class has the advantage of providing the derivative information that can be used directly in the equations of motion when needed. As previously mentioned, the uncertainties in the atmosphere can be included in the continuum approach. The procedure adopted here is to introduce a correction coefficient to scale the value of the density provided by the selected model as follows:

$$ \rho(r, \xi) = \xi\, \hat{\rho}_{\mathrm{std76}}(r), \qquad (31) $$

where ξ is the atmospheric correction coefficient and $\hat{\rho}_{\mathrm{std76}}(r)$ is the spline representation of the atmospheric density as provided by the 1976 US Standard Atmosphere model. More complex models with several uncertain coefficients can be included, as long as they can be expressed with differentiable functions and the coefficients are added to the augmented state space during the continuum propagation. For the case at hand, uncertainties have been considered in all the initial states (λ, φ, v, γ, χ) and in the atmospheric correction coefficient (ξ), while no uncertainty in the ballistic coefficient has been included. Table 4 summarizes the mean value and standard deviation for the considered uncertain states and the values of the remaining parameters necessary for the re-entry simulation.

Table 4 Mean and standard deviation for the initial conditions of the augmented state space and values of the relevant parameters.
State                        Symbol   Unit      μ       σ
Initial longitude            λ        °
Initial latitude             φ        °
Initial velocity             v        km/s      7.2     0.036
Initial flight-path angle    γ        °         -30     0.15
Initial heading angle        χ        °         45      0.225
Atmospheric correction       ξ                  1       0.05
Entry interface altitude     h        km        125
Ballistic coefficient        β        kg/m²
                             α        m²/kg     0

As can be observed, in this case the uncertainties in the initial conditions have a smaller and more realistic standard deviation with respect to the ones presented in Table 1. The largest uncertainty is represented by the atmospheric correction coefficient, which introduces a 5% standard deviation in the value of the atmospheric density. The uncertainty in the radial position has been removed, as the radius is now the independent variable of the integration. This allows setting a radius (altitude) value for the re-entry interface and excluding the radius from the uncertainties. The presented test case is intended to show the applicability of the continuum propagation to a realistic re-entry scenario, which considers the full dynamical representation, includes more realistic and accurate gravitational and atmospheric models, and takes into account the uncertainty in the atmospheric properties. The six-dimensional distribution of the initial uncertainty is then sampled, and each sample is propagated until the surface of the Earth is reached. Fig. 14 shows the projections of the distribution of 1000 sampled points at the end of the propagation (Earth impact). For the sake of readability, the distribution in the atmospheric correction coefficient ξ is omitted; however, as ξ is a constant coefficient, its derivative is zero (Eq. (7)) and its distribution at the final instant is the same as the initial one. On the main diagonal we can observe the magnitude of the probability density associated with each sampled point.

Fig. 14 Landing snapshot for the six-state steep entry test case.
It is interesting to observe how the distribution for this six-state variation of the strategic test case is considerably more regular and maintains a more Gaussian-like behavior throughout the re-entry process when compared to its three-state counterpart (Fig. 6). The difference is due to the different uncertainties in the initial conditions, which are much more pronounced for the test case in Section IV.A. The different independent variable (from t to r) may also have contributed to the different shape of the state space.
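The spline-based density with the ξ correction of Eq. (31) can be sketched with scipy. Here an exponential profile stands in for the tabulated 1976 US Standard Atmosphere values, and the names are illustrative.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# tabulated density vs. altitude (an exponential profile stands in for the
# 1976 US Standard Atmosphere table in this sketch)
alt = np.linspace(0.0, 125e3, 200)
rho_tab = 1.225 * np.exp(-alt / 8.3e3)

# s=0 forces interpolation through the table; fitting the log-density keeps
# the spline well behaved over many orders of magnitude
log_rho_spline = UnivariateSpline(alt, np.log(rho_tab), s=0)

def rho(h, xi=1.0):
    """Density with the multiplicative atmospheric correction xi (Eq. (31))."""
    return xi * np.exp(log_rho_spline(h))

def drho_dh(h, xi=1.0):
    """Density derivative needed by the continuity equation, obtained from
    the spline's analytic derivative (chain rule on the log-density fit)."""
    return rho(h, xi) * log_rho_spline.derivative()(h)
```

Since the derivative comes from the spline object itself, the same representation serves both the equations of motion and the density-derivative terms without any finite differencing.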
1. Comparison
Fig. 15 shows the comparison between the one-dimensional marginals obtained with the DB and the MC methods for the relevant state variables. The comparison is between the DB marginals obtained with 1000 samples and the corresponding MC marginals obtained with 1000 (green shaded histogram) and 50000 samples (black dashed histogram) for the final propagation instant, which corresponds to landing.

Fig. 15 DB vs. MC one-dimensional marginals comparison at the landing instant for 1000 samples and 50000 samples.

The marginals show a similar behavior with respect to their three-state counterparts (Fig. 7): the distributions obtained with 1000 Monte Carlo samples show a more erratic behavior for all the states, especially in the areas around the peak of the distribution, while the corresponding DB marginals show a more regular behavior. However, even in this case, it is possible to observe the characteristic feature of the DB marginals when compared to the 50000-sample Monte Carlo: the presence of peaks and smaller tails, which in turn results in a smaller standard deviation in the prediction of the states at the landing instant. To better compare the quality of the distribution reconstruction with both the DB and the MC methods, a comparison using the metrics introduced in Section IV.A (Hellinger distance and Wasserstein distance) is performed. Fig. 16 shows this comparison for three selected states (latitude, velocity, and heading angle). In the comparison, the distances have been computed taking as reference the Monte Carlo distribution with 50000 samples.
In the case of the DB methodology, the results for 50000 samples are not presented, as the memory requirements were too high for the laptop used in this study. The results of the comparison show that the marginals obtained with the DB method consistently perform better than their MC counterparts with the same number of samples, according to both the proposed metrics. However, some notable features can be identified: the Hellinger distance shows, in all three comparisons, a worsening in the performance of the DB method when passing from 5000 to 10000 samples. This feature is confirmed also by the Wasserstein distance for the latitude marginal. The reason behind this behavior probably resides in the methodology used to obtain the marginal distribution, as outlined in Section III.D, when combined with the increasing requirements for
1e 1 MCDB (a) Hellinger distance for 𝝋 W a ss e r s t e i n d i s t a n c e , W MCDB
Number of samples (b) Wasserstein distance for 𝝋 H e lli n g e r d i s t a n c e , v H
1e 1 MCDB
Number of samples (c) Hellinger distance for 𝒗 W a ss e r s t e i n d i s t a n c e , v W
1e 4 MCDB
Number of samples (d) Wasserstein distance for 𝒗 H e lli n g e r d i s t a n c e , H
1e 1 MCDB
Number of samples (e) Hellinger distance for 𝝌 W a ss e r s t e i n d i s t a n c e , W Number of samples (f) Wasserstein distance for 𝝌 Fig. 16 Variation of the Hellinger and Wasserstein distances between the DB and MC methods as a functionof the number of samples for the landing snapshot. computational resources due to the growing number of points: in fact, to limit the memory usage a higher numberof bins has been used, leading to a lesser performance in the estimation of the marginals. On the other hand, theWasserstein distance (Figs. 16b, 16d and 16f) shows that the DB marginals obtained with 1000 samples have a betteror comparable result with respect to the 10000 MC simulation. A similar trend can be observed in two out of threecomparisons of the Hellinger distance, where the heading angle case has a more pronounced difference between the1000 DB and the 10000 MC simulations. Another interesting aspect, when comparing the results of Fig. 16 with theresults of the previous test case Fig. 8 is that in the last instant of the six-state case the DB method still performs betterthen the MC, while for the three-state case it has a worse performance. This feature is probably related to the fact the in28his case, by introducing the radius as independent variable, all the samples are maintained in the snapshot until the end,while in the case of Section IV.A they are removed when they reach the ground, thus reducing the number of samplesavailable for the fitting. A final comparison is provided for the run-time of the code as a function of the number ofpoints as shown in Fig. 17. The run-time has been normalized with respect to the run-time of the 50000 MC simulationsso that all the other figures are a fraction of this reference time. Similarly to Section IV.A.1, the run-times include thepropagation time of the trajectory and the fitting time. 
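For reference, the two metrics used throughout these comparisons can be computed directly from the propagated samples. The sketch below is illustrative only: the Gaussian sample sets stand in for the DB and MC marginals, the binning choices are arbitrary, and SciPy's one-dimensional Wasserstein distance is used in place of whatever implementation the study employed.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def hellinger_from_samples(x_ref, x_test, n_bins=50):
    """Discrete Hellinger distance between two binned 1-D marginals.

    Both sample sets are binned on a common support; the distance is
    H = sqrt(1 - sum(sqrt(p_i * q_i))) and lies in [0, 1].
    """
    lo = min(x_ref.min(), x_test.min())
    hi = max(x_ref.max(), x_test.max())
    edges = np.linspace(lo, hi, n_bins + 1)
    p, _ = np.histogram(x_ref, bins=edges)
    q, _ = np.histogram(x_test, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    return np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(p * q))))

rng = np.random.default_rng(42)
ref = rng.normal(0.0, 1.0, 50_000)    # stands in for the 50k-sample MC reference
test = rng.normal(0.05, 1.1, 1_000)   # stands in for a 1k-sample marginal

H = hellinger_from_samples(ref, test)
W = wasserstein_distance(ref, test)   # 1-D Wasserstein straight from the samples
print(f"H = {H:.3f}, W = {W:.3f}")
```

Because the Hellinger distance is bounded in [0, 1] it can be averaged over snapshots directly, whereas the Wasserstein distance carries the units of the state variable, which is why the paper normalizes it before time-averaging.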
As expected, for the same number of points, the DB method is always more time consuming, as the fitting procedure requires more time than the histogram generation. However, it is also possible to observe that the DB method with 1000 samples requires about one-fourth and one-seventh of the time required by the 5000- and 10000-sample MC method, respectively, whose performance is comparable to or worse than that of the DB method, as shown in Fig. 16. In addition, Fig. 17 shows the ratio between the marginalization time and the propagation time for the DB method, which oscillates between 40% and 60%.

Fig. 17 Run-time comparison between the DB and MC methodologies. In orange, the ratio between the interpolation and the propagation times for the DB method.

C. Six-state Mars re-entry
The third test case features a six-state Mars re-entry, whose propagation is again performed using Eq. (7). The initial conditions and the other relevant re-entry parameters are based on the Mars MSL mission [26, 58] and are summarized in Table 5. The modified lift coefficient, α = 0.001 m^2/kg, corresponds to a C_L = 0.18.

Table 5 Mean and standard deviation for the initial conditions of the augmented state space and values of the relevant parameters.

State                       Symbol  Unit    μ        σ
Initial longitude           λ       deg     -90.07   0.5
Initial latitude            φ       deg     -43.9    0.5
Initial velocity            v       km/s    5.505    0.004
Initial flight-path angle   γ       deg     -14.15   0.023
Initial heading angle       χ       deg
Ballistic coefficient       β       kg/m^2
Atmospheric correction      ξ       -

Parameter                     Symbol  Unit     Value
Initial altitude              h       km       126
Modified lift coefficient     α       m^2/kg   0.001
Mars equatorial radius        R_p     km       3397
Mars gravitational parameter  μ_p     m^3/s^2
Mars rotational rate          ω_p     rad/s    7.095 × 10^-5

The atmospheric density profile is modeled as

ρ(h) = exp{c_0 + c_1 h + c_2 h^2 + c_3 h^3 + c_4 h^4},   (32)

where c_0 through c_4 are constant coefficients. Again, the uncertainty in the atmospheric density is implemented following the procedure outlined by Eq. (31), introducing the atmospheric correction coefficient, ξ. Fig. 18 shows the projections of the snapshot at the landing instant for the Mars test case, where the ballistic coefficient has been excluded for better readability. We can observe that, differently from the previous test case (Fig. 14), the effect of the nonlinear dynamics is more pronounced, and this translates into a more deformed state space, particularly in the coupling between the velocity, the flight-path angle, and the atmospheric correction coefficient (the ballistic coefficient has a similar dependency).
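As a sketch of how the density model of Eq. (32) can be evaluated, the function below implements the exponential-of-polynomial form. The coefficients used here are illustrative placeholders (chosen to mimic a Mars-like surface density of about 0.02 kg/m^3 and an 11 km scale height), not the values fitted to MSL data in the paper.

```python
import numpy as np

# Placeholder coefficients: only c0 and c1 are non-zero here, giving a simple
# exponential atmosphere; the paper instead fits all of c0..c4.
C = np.array([np.log(0.02), -1.0 / 11_000.0, 0.0, 0.0, 0.0])

def density(h, c=C):
    """Eq. (32): rho(h) = exp(c0 + c1*h + c2*h^2 + c3*h^3 + c4*h^4), h in metres."""
    h = np.asarray(h, dtype=float)
    exponent = sum(ck * h**k for k, ck in enumerate(c))
    return np.exp(exponent)

print(density(0.0), density(11_000.0))  # surface density and one scale height up
```

With the placeholder coefficients the model collapses to ρ(h) = ρ_0 exp(-h/H); the higher-order terms only matter once fitted to an actual atmospheric profile.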
1. Comparison
Fig. 19 shows the one-dimensional marginals obtained with the DB and MC methods for the velocity and the flight-path angle at the landing instant. The DB marginal has been obtained with 1000 samples, while the MC method shows the results obtained with 1000 samples (green shaded histogram) and 50000 samples (black dashed histogram). The results show a good agreement of the DB method with the 50000-sample MC simulation taken as a reference. However, the discrepancy around the tails of the distributions is still visible, with a more erratic behavior and a tendency to reproduce lower values of the marginal probability, as already seen in Figs. 7 and 15. Fig. 20 instead shows the comparison between the two-dimensional marginals obtained with the DB and MC methods for the landing location coordinates expressed in longitude and latitude.

Fig. 18 Pairplot of the landing snapshot for the six-state unguided Mars entry test case.

Fig. 19 DB vs. MC one-dimensional marginals comparison at the landing instant for the Mars re-entry test case: (a) velocity, (b) flight-path angle. DB method with 1000 samples and MC method with 1000 and 50000 samples.

Fig. 20 DB vs. MC two-dimensional λ-φ marginals comparison at the landing instant for the Mars re-entry test case. DB method with 2000 samples (a) and MC method with 2000 (b) and 50000 samples (c).

A quantitative comparison between the obtained marginals is again performed exploiting the metrics introduced in Section IV.A, i.e., the Hellinger and Wasserstein distances. The comparison is performed by computing the average and standard deviation of the Hellinger and Wasserstein distances over time for different numbers of sample points. The distances are computed taking as reference the results obtained with the MC simulation with 50000 samples. Fig. 21 shows this comparison for the velocity and the flight-path angle as a function of the number of points. The solid lines represent the mean value and the shaded areas the standard deviation. The Hellinger distance (Figs. 21a and 21c) shows a regular behavior in both the considered variables for the DB method, with a constant improvement in performance when going from 500 to 5000 points. A worsening of the approximation happens instead for 10000 points, as was the case for the Earth re-entry test case (Fig. 16). In addition, it is possible to observe that, on average, the DB method shows a better performance with respect to the MC method, except for the 10000-sample case, and a smaller standard deviation. The Wasserstein distance (Figs. 21b and 21d) shows a similar behavior up to 5000 samples, where the DB method performs better than the MC. However, after 5000 samples, a worsening can be observed, especially in the case of the flight-path angle. A considerably higher standard deviation is also noticeable for both the DB and MC methods.
Contrary to the Hellinger distance, the Wasserstein distance is not a value ranging between zero and one, and its value changes through time alongside the change in magnitude of the probability density. For this reason, in this comparison the value of ΔW at each snapshot has been normalized so that the average over time could be performed.

Fig. 21 Comparison of the average Hellinger and Wasserstein distances as a function of the number of samples for the variables v and γ: (a) Hellinger distance for v, (b) Wasserstein distance for v, (c) Hellinger distance for γ, (d) Wasserstein distance for γ.

The results obtained for the two-dimensional marginals can also be compared using the Hellinger distance and are shown in Table 6. The results show that the Hellinger distance for the DB method with 1000 samples is comparable with that of the MC method with 10000 samples, confirming the trend observed in the one-dimensional marginals and in the previous test cases. A similar trend is also observed in the performance of the DB method, with an improvement passing from 1000 to 5000 samples, and then a worsening at 10000 samples.

Table 6 Comparison of the Hellinger distance for the λ-φ marginal at landing between the DB and MC methods.

           Number of points
Method     1000      2000      5000      10000
DB         0.12518   0.10061   0.08712   0.0983
MC         0.27565   0.25616   0.16781   0.11713

Again, Fig. 22 provides a final comparison between the simulation times for the DB and MC methods. Also in this case, the computational times have been normalized with respect to the 50000-sample MC simulation. As expected, the behavior closely matches that of Section IV.B, while the fraction of computational time devoted to the marginalization compared to the propagation shows a slightly less stable trend: a higher fraction of the computational time is occupied by the marginalization procedure for smaller numbers of samples.
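The per-snapshot normalization of ΔW mentioned above can be sketched as follows. The paper does not spell out the exact normalization; dividing each snapshot's distance by a reference scale at that snapshot (for example, the spread of the 50000-sample MC marginal) is one plausible choice, and the numbers below are hypothetical.

```python
import numpy as np

def averaged_normalised_distance(dist_per_snapshot, scale_per_snapshot):
    """Average a per-snapshot distance after normalising each snapshot.

    Each Wasserstein value is divided by a reference scale for that snapshot
    before the time average and standard deviation are taken, so that
    snapshots with very different density magnitudes contribute comparably.
    """
    d = np.asarray(dist_per_snapshot, dtype=float)
    s = np.asarray(scale_per_snapshot, dtype=float)
    norm = d / s
    return norm.mean(), norm.std()

# Hypothetical distances and reference scales over five snapshots.
mean_W, std_W = averaged_normalised_distance(
    [0.8, 0.5, 0.3, 0.2, 0.1], [4.0, 2.5, 1.5, 1.0, 0.5])
print(mean_W, std_W)
```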
2. Parachute deployment probability
Similarly to the test case of Section IV.A, where we analyzed the probability of crossing the thresholds of dynamic pressure and heat rate, in this case we consider a relevant problem in planetary entry (particularly on Mars), that is, the deployment of a parachute to decelerate the probe in the final stage of the re-entry. For the parachute deployment, the main variables taken into account are the dynamic pressure and the Mach number. In particular, the dynamic pressure is limited to 220 ≤ q̄ ≤ 880 N/m^2, and the Mach number is bounded between a minimum and a maximum deployment value above Mach 1.
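The compliance computation of this section, i.e., turning the dynamic-pressure and Mach windows into velocity limits and integrating the velocity marginal between them, can be sketched as below. All numbers are illustrative placeholders: the density model, the constant speed of sound, the Mach window, and the Gaussian velocity marginal are assumptions, not the paper's fitted values.

```python
import numpy as np

# Illustrative placeholders, not the paper's fitted models or exact limits.
RHO0, H_SCALE = 0.02, 11_000.0   # Mars-like exponential density [kg/m^3], [m]
V_SOUND = 240.0                  # constant speed of sound [m/s] (assumed)
Q_MIN, Q_MAX = 220.0, 880.0      # dynamic-pressure window [N/m^2]
M_MIN, M_MAX = 1.1, 2.1          # assumed Mach window

def density(h):
    return RHO0 * np.exp(-h / H_SCALE)

def velocity_window(h):
    """Velocity band where q = 0.5*rho*v^2 and M = v/v_s both comply."""
    v_q_lo = np.sqrt(2.0 * Q_MIN / density(h))
    v_q_hi = np.sqrt(2.0 * Q_MAX / density(h))
    v_m_lo, v_m_hi = M_MIN * V_SOUND, M_MAX * V_SOUND
    return max(v_q_lo, v_m_lo), min(v_q_hi, v_m_hi)

def compliance_probability(h, v_grid, pdf_v):
    """Integrate the velocity marginal over the admissible window."""
    lo, hi = velocity_window(h)
    if hi <= lo:
        return 0.0
    mask = (v_grid >= lo) & (v_grid <= hi)
    dv = v_grid[1] - v_grid[0]   # uniform grid assumed
    return float(pdf_v[mask].sum() * dv)

# Hypothetical Gaussian velocity marginal for a snapshot at 5 km altitude.
v = np.linspace(200.0, 700.0, 2001)
pdf = np.exp(-0.5 * ((v - 350.0) / 25.0) ** 2) / (25.0 * np.sqrt(2.0 * np.pi))
p_comply = compliance_probability(5_000.0, v, pdf)
print(p_comply)
```

Repeating compliance_probability over successive snapshots, with the velocity marginal reconstructed at each altitude, yields a compliance-versus-altitude curve of the kind shown in Fig. 23.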
Fig. 22 Run-time comparison between the DB and MC methodologies. In orange, the ratio between the interpolation and the propagation times for the DB method.

The dynamic pressure is computed again using Eq. (27), while the Mach number has the following expression:

M = v / v_s,   (33)

where v_s is the speed of sound in the Martian atmosphere. Similarly to the atmospheric density profile, the speed of sound is considered a function of the altitude only and is obtained from a fit of MSL data. The expression of the speed of sound as a function of the altitude is the following [26]:

v_s(h) = c_0 + c_1 h + c_2 h^2 + c_3 h^3,   (34)

where c_0 through c_3 are constant coefficients. Eqs. (27) and (33) and the deployment boundaries in Mach number and dynamic pressure previously introduced can be combined to obtain the velocity limits within which both limitations are satisfied. Using these boundaries with the one-dimensional marginals in velocity, we can find the probability of complying with the parachute deployment limitations by integrating the velocity marginal within the boundaries. The procedure can be repeated for different snapshots to obtain the evolution of the compliance probability as a function of the altitude. Fig. 23 shows the evolution of the compliance probability for the final portion of the trajectory. Full compliance can be noticed within the altitude range between 3 km and 7 km. Mission planning and design for planetary entries can benefit from such analyses, as they give a robust procedure for selecting the parachute deployment time given the state of the spacecraft under the effect of initial uncertainties.

V. Conclusion and discussion
This work presents a novel methodology to propagate uncertainties using a continuum approach and a procedure to reconstruct these uncertainties and obtain the marginal probabilities of interest.

Fig. 23 Parachute deployment compliance probability as a function of the altitude. Only the final part of the trajectory is shown.

The propagation methodology has been applied to relevant re-entry dynamics, including both three-state and six-state dynamics, and proved capable of including uncertainties in the state variables and in relevant parameters such as the ballistic coefficient and the atmospheric density. The procedure to include these uncertainties, however, requires an extension of the state space of the ODE, so that the inclusion of a large number of uncertainties may prove difficult at the current stage of development. The reconstruction methodology has been applied to representative test cases to derive the probability marginals for the quantities of interest. The test cases were representative of four-dimensional and six-dimensional state spaces and of dynamics with different levels of nonlinearity. The DB method showed the capability to obtain the marginal distributions up to a six-dimensional state space that also exhibited a substantially nonlinear behavior (Figs. 6 and 18). The results were compared with the corresponding Monte Carlo simulations for different numbers of samples, both in terms of the difference between the obtained marginal distributions and of the simulation times. For the test case of Section IV.A, the DB method with 750 samples shows better performance in terms of the Hellinger and Wasserstein distances with respect to the MC method for the first part of the re-entry trajectory, until about 28 seconds into the descent (Fig. 8). Instead, in the last part, the performance of the MC method slightly surpasses that of the DB method, reaching a comparable performance at the final instant. For the test case of Section IV.B, the DB method results in a better performance with respect to the MC simulations with an equivalent number of samples, except for the case with 10000 samples for the latitude variable (Figs. 16a and 16b). The DB method also shows a similar or better Wasserstein distance with 1000 samples when compared with the MC method with 5000 and 10000 samples, while it has a worse performance when 750 samples are considered. The same trend can also be observed for the Hellinger distance for the latitude and velocity variables (Figs. 16a and 16c), but not for the heading angle (Fig. 16e). For the final test case of Section IV.C, the DB method showed a better or comparable performance than the MC method in approximating the two-dimensional marginals, as shown in Table 6. For the one-dimensional marginals, the DB method showed lower Hellinger distances when compared to the MC method with the same number of samples, except for the case with 10000 samples (Fig. 21). In addition, the DB method with as few as 750 samples shows, on average, a lower Hellinger distance than the MC method with up to 5000 samples (Figs. 21a and 21c). For the same comparison over a one-dimensional marginal, the Wasserstein distance shows a similar behavior, with the DB method performing, on average, better than the MC method for numbers of samples ranging from 750 to 5000 and instead losing the comparison at 10000 samples (Figs. 21b and 21d). The computational time of the DB method is always greater than that of the MC when the same number of samples is used, because the marginalization procedure requires more computational time (Figs. 17 and 22). However, this increase in time can, in some cases, be offset by using a lower number of samples to obtain a comparable level of approximation. The visual inspection of the obtained marginal probabilities showed a tendency of the DB method to underpredict the tails of the distributions with respect to the MC method, as confirmed by the results of Fig. 11. This trend may be due to the lower number of samples available in these regions of the distributions, thus producing a less accurate reconstruction. In addition, another trend resulting from the presented analyses is the loss of performance of the DB method when a larger number of samples is used (i.e., 10000 samples in this work). This is a limitation of the introduced marginalization technique, which requires the definition of bins; if these bins are too small, the volume approximated by the triangulation may be inaccurate. However, a finer binning is usually required when the number of sample points increases (especially in higher dimensions) to manage the memory usage of the triangulation algorithm. This can limit the applicability of the reconstruction methodology to high-dimensional spaces. Still, the methodology has good performance when lower numbers of samples are used, and this is also where it is most applicable and appealing.
Funding Sources
References

[1] Hoelscher, B., "Orion Entry, Descent and Landing Simulation," AIAA Guidance, Navigation and Control Conference and Exhibit, 2007, p. 6428.
[2] Rea, J., and Putnam, Z., "A comparison of two Orion skip entry guidance algorithms," AIAA Guidance, Navigation and Control Conference and Exhibit, 2007, p. 6424.
[3] Putnam, Z., Bairstow, S., Braun, R., and Barton, G., "Improving lunar return entry range capability using enhanced skip trajectory guidance," Journal of Spacecraft and Rockets, Vol. 45, No. 2, 2008, pp. 309–315. https://doi.org/10.2514/1.27616.
[4] Brunner, C. W., and Lu, P., "Skip entry trajectory planning and guidance," Journal of Guidance, Control, and Dynamics, Vol. 31, No. 5, 2008, pp. 1210–1219.
[5] Putnam, Z. R., Grant, M. J., Kelly, J. R., Braun, R. D., and Krevor, Z. C., "Feasibility of guided entry for a crewed lifting body without angle-of-attack control," Journal of Guidance, Control, and Dynamics, Vol. 37, No. 3, 2014, pp. 729–740. https://doi.org/10.2514/1.62214.
[6] Spencer, D. A., and Braun, R. D., "Mars Pathfinder atmospheric entry: Trajectory design and dispersion analysis," Journal of Spacecraft and Rockets, Vol. 33, No. 5, 1996, pp. 670–676. https://doi.org/10.2514/3.26819.
[7] Striepe, S. A., Way, D., Dwyer, A., and Balaram, J., "Mars Science Laboratory simulations for entry, descent, and landing," Journal of Spacecraft and Rockets, Vol. 43, No. 2, 2006, pp. 311–323. https://doi.org/10.2514/1.19649.
[8] Nelessen, A., Sackier, C., Clark, I., Brugarolas, P., Villar, G., Chen, A., Stehura, A., Otero, R., Stilley, E., Way, D., et al., "Mars 2020 Entry, Descent, and Landing System Overview," IEEE, 2019, pp. 1–20. https://doi.org/10.1109/AERO.2019.8742167.
[9] Pardini, C., and Anselmo, L., "Re-entry predictions for uncontrolled satellites: results and challenges," Montreal, Canada, 2013.
[10] O'Connor, B., "Handbook for Limiting Orbital Debris, NASA Handbook 8719.14," National Aeronautics and Space Administration, Washington, DC, 2008.
[11] Dobarco-Otero, J., Smith, R., Bledsoe, K., Delaune, R., Rochelle, W., and Johnson, N., "The Object Reentry Survival Analysis Tool (ORSAT), version 6.0, and its application to spacecraft entry," Proceedings of the 56th Congress of the International Astronautical Federation, the International Academy of Astronautics, and International Institute of Space Law, IAC-05-B6, Vol. 3, 2005, pp. 17–21.
[12] Koppenwallner, G., Fritsche, B., Lips, T., and Klinkrad, H., "Scarab: a Multi-Disciplinary Code for Destruction Analysis of Spacecraft during Re-Entry," Fifth European Symposium on Aerothermodynamics for Space Vehicles, Vol. 563, 2005, p. 281.
[13] Gelhaus, J., Kebschull, C., Braun, V., Sanchez-Ortiz, N., Parilla Endrino, E., de Oliveira, J. C., and Dominguez-Gonzalez, R., "Upgrade of ESA's Space Debris Mitigation Analysis Tool Suite," Tech. Rep. ESA contract 4000104977/11/D/SR, European Space Agency, 2014.
[14] Trisolini, M., Lewis, H. G., and Colombo, C., "Demisability and survivability sensitivity to design-for-demise techniques," Acta Astronautica, Vol. 145, 2018, pp. 357–384. https://doi.org/10.1016/j.actaastro.2018.01.050.
[15] Niederreiter, H., Random Number Generation and Quasi-Monte Carlo Methods, Vol. 63, SIAM, 1992.
[16] Luo, Y.-z., and Yang, Z., "A review of uncertainty propagation in orbital mechanics," Progress in Aerospace Sciences, Vol. 89, 2017, pp. 23–39. https://doi.org/10.1016/j.paerosci.2016.12.002.
[17] Cameron, J. M., Jain, A., Burkhart, P. D., Bailey, E. S., Balaram, B., Bonfiglio, E., Ivanov, M., Benito, J., Sklyanskiy, E., and Strauss, W., "DSENDS: multi-mission flight dynamics simulator for NASA missions," AIAA SPACE 2016, 2016, p. 5421. https://doi.org/10.2514/6.2016-5421.
[18] Hanson, J. M., and Jones, R. E., "Test results for entry guidance methods for space vehicles," Journal of Guidance, Control, and Dynamics, Vol. 27, No. 6, 2004, pp. 960–966. https://doi.org/10.2514/1.10886.
[19] Lu, P., "Predictor-corrector entry guidance for low-lifting vehicles," Journal of Guidance, Control, and Dynamics, Vol. 31, No. 4, 2008, pp. 1067–1075. https://doi.org/10.2514/1.32055.
[20] Lu, P., Brunner, C. W., Stachowiak, S. J., Mendeck, G. F., Tigges, M. A., and Cerimele, C. J., "Verification of a fully numerical entry guidance algorithm," Journal of Guidance, Control, and Dynamics, Vol. 40, No. 2, 2017, pp. 230–247. https://doi.org/10.2514/1.G000327.
[21] Mehta, P. M., Kubicek, M., Minisci, E., and Vasile, M., "Sensitivity analysis and probabilistic re-entry modeling for debris using high dimensional model representation based uncertainty treatment," Advances in Space Research, Vol. 59, No. 1, 2017, pp. 193–211. https://doi.org/10.1016/j.asr.2016.08.032.
[22] Sanson, F. J., Bertorello, C., Bouilly, J.-M., and Congedo, P., "Space debris reentry prediction and ground risk estimation using a probabilistic breakup model," AIAA Scitech 2019 Forum, 2019, p. 2234. https://doi.org/10.2514/6.2019-2234.
[23] Dell'Elce, L., and Kerschen, G., "Probabilistic assessment of lifetime of low-earth-orbit spacecraft: uncertainty propagation and sensitivity analysis," Journal of Guidance, Control, and Dynamics, Vol. 38, No. 5, 2015, pp. 886–899. https://doi.org/10.2514/1.G000149.
[24] Prabhakar, A., Fisher, J., and Bhattacharya, R., "Polynomial chaos-based analysis of probabilistic uncertainty in hypersonic flight dynamics," Journal of Guidance, Control, and Dynamics, Vol. 33, No. 1, 2010, pp. 222–234. https://doi.org/10.2514/1.41551.
[25] Jones, B. A., and Doostan, A., "Satellite collision probability estimation using polynomial chaos expansions," Advances in Space Research, Vol. 52, No. 11, 2013, pp. 1860–1875. https://doi.org/10.1016/j.asr.2013.08.027.
[26] Lunghi, P., Armellin, R., Di Lizia, P., Mease, K. D., and Lavagna, M. R., "Atmospheric entry guidance based on Differential Algebra for high elevation Mars landing," 2018, p. 1458. https://doi.org/10.2514/6.2018-1458.
[27] Chandrasekhar, S. S., Principles of Stellar Dynamics, Dover Publications, 1943.
[28] Gor'kavyi, N. N., Ozernoy, L. M., and Mather, J. C., "A New Approach to Dynamical Evolution of Interplanetary Dust," The Astrophysical Journal, Vol. 474, No. 1, 1997, pp. 496–502. https://doi.org/10.1086/303440.
[29] McInnes, C. R., "Autonomous ring formation for a planar constellation of satellites," Journal of Guidance, Control, and Dynamics, Vol. 18, No. 5, 1995, pp. 1215–1217. https://doi.org/10.2514/3.21531.
[30] Letizia, F., "Extension of the Density Approach for Debris Cloud Propagation," Journal of Guidance, Control, and Dynamics, Vol. 41, No. 12, 2018, pp. 2651–2657. https://doi.org/10.2514/1.G003675.
[31] Frey, S., Colombo, C., and Lemmens, S., "Interpolation and integration of phase space density for estimation of fragmentation cloud distribution," Proceedings of the 29th AAS/AIAA Space Flight Mechanics Meeting, 2019.
[32] Halder, A., and Bhattacharya, R., "Dispersion Analysis in Hypersonic Flight During Planetary Entry Using Stochastic Liouville Equation," Journal of Guidance, Control, and Dynamics, Vol. 34, No. 2, 2011, pp. 459–474. https://doi.org/10.2514/1.51196.
[33] Trisolini, M., and Colombo, C., "A density-based approach to the propagation of re-entry uncertainties," Ka'anapali, Hawaii, USA, 2019, pp. 1–13.
[34] Trisolini, M., and Colombo, C., "Modeling re-entry break-up uncertainties with continuity equation and Gaussian mixture models interpolation," Proceedings of the 2020 AAS/AIAA Astrodynamics Specialist Conference, 2020, AAS 20-636.
[35] Limonta, S., Trisolini, M., Frey, S., and Colombo, C., "Modelling the break-up and re-entry propagation of meteorites through a continuum approach," Proceedings of the 71st International Astronautical Congress, 2020, pp. 12–14.
[36] Hoogendoorn, R., Mooij, E., and Geul, J., "Uncertainty propagation for statistical impact prediction of space debris," Advances in Space Research, Vol. 61, No. 1, 2018, pp. 167–181. https://doi.org/10.1016/j.asr.2017.10.009.
[37] Stoer, J., and Bulirsch, R., "Interpolation," Introduction to Numerical Analysis, Springer New York, New York, NY, 2002, pp. 37–144. https://doi.org/10.1007/978-0-387-21738-3_2.
[38] Edelsbrunner, H., and Mücke, E. P., "Three-Dimensional Alpha Shapes," ACM Transactions on Graphics, Vol. 13, No. 1, 1994, pp. 43–72. https://doi.org/10.1145/174462.156635.
[39] Kraaijpoel, D., and Van Leeuwen, T., "Raising the order of multivariate approximation schemes using supplementary derivative data," Procedia Computer Science, Vol. 1, Elsevier B.V., 2010, pp. 307–316. https://doi.org/10.1016/j.procs.2010.04.034.
[40] Awanou, G., Lai, M.-J., and Wenston, P., "The multivariate spline method for scattered data fitting and numerical solutions of partial differential equations," Wavelets and Splines: Athens, 2005, pp. 24–74.
[41] Alfeld, P., "Scattered data interpolation in three or more variables," Mathematical Methods in Computer Aided Geometric Design, Elsevier, 1989, pp. 1–33. https://doi.org/10.1016/B978-0-12-460515-2.50005-6.
[42] Barthelmann, V., Novak, E., and Ritter, K., "High dimensional polynomial interpolation on sparse grids," Advances in Computational Mathematics, Vol. 12, No. 4, 2000, pp. 273–288. https://doi.org/10.1023/A:1018977404843.
[43] Buhmann, M. D., Radial Basis Functions: Theory and Implementations, Vol. 12, Cambridge University Press, 2003.
[44] Preparata, F. P., and Shamos, M. I., Computational Geometry, Springer New York, New York, NY, 1985. https://doi.org/10.1007/978-1-4612-1098-6.
[45] Virtanen, P., Gommers, R., Oliphant, T. E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., van der Walt, S. J., Brett, M., Wilson, J., Millman, K. J., Mayorov, N., Nelson, A. R. J., Jones, E., Kern, R., Larson, E., Carey, C. J., Polat, İ., Feng, Y., Moore, E. W., VanderPlas, J., Laxalde, D., Perktold, J., Cimrman, R., Henriksen, I., Quintero, E. A., Harris, C. R., Archibald, A. M., Ribeiro, A. H., Pedregosa, F., van Mulbregt, P., and SciPy 1.0 Contributors, "SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python," Nature Methods, Vol. 17, 2020, pp. 261–272. https://doi.org/10.1038/s41592-019-0686-2.
[46] Coxeter, H. S. M., Regular Polytopes, Courier Dover Publications, 1973.
[47] Storn, R., and Price, K., "Differential Evolution: A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces," Journal of Global Optimization, Vol. 11, No. 4, 1997, pp. 341–359. https://doi.org/10.1023/A:1008202821328.
[48] Schmidt, M., Safarani, S., Gastinger, J., Jacobs, T., Nicolas, S., and Schülke, A., "On the Performance of Differential Evolution for Hyperparameter Tuning," IEEE, 2019, pp. 1–8. https://doi.org/10.1109/IJCNN.2019.8851978.
[49] Mezura-Montes, E., Reyes-Sierra, M., and Coello, C. A. C., "Multi-objective optimization using differential evolution: a survey of the state-of-the-art," Advances in Differential Evolution, Springer, 2008, pp. 173–196. https://doi.org/10.1007/978-3-540-68830-3_7.
[50] Teichmann, M., and Capps, M., "Surface reconstruction with anisotropic density-scaled alpha shapes," Proceedings Visualization '98 (Cat. No. 98CB36276), IEEE, 1998, pp. 67–72. https://doi.org/10.1109/VISUAL.1998.745286.
[51] Putnam, Z. R., and Braun, R. D., "Extension and enhancement of the Allen–Eggers analytical ballistic entry trajectory solution," Journal of Guidance, Control, and Dynamics, Vol. 38, 2015, pp. 414–430. https://doi.org/10.2514/1.G000846.
[52] Kemp, N. H., and Riddell, F., "Heat transfer to satellite vehicles re-entering the atmosphere," Journal of Jet Propulsion, Vol. 27, No. 2, 1957, pp. 132–137. https://doi.org/10.2514/8.12603.
[53] Beck, J., Merrifield, J., Holbrough, I., Markelov, G., and Molina, R., "Application of the SAM Destructive Re-Entry Code to the Spacecraft Demise Integration Test Cases," Lisbon, 2015.
[54] Trisolini, M., Lewis, H. G., and Colombo, C., "Spacecraft design optimisation for demise and survivability," Aerospace Science and Technology, Vol. 77, 2018, pp. 638–657. https://doi.org/10.1016/j.ast.2018.04.006.
[55] Lu, P., "Entry guidance: a unified method," Journal of Guidance, Control, and Dynamics, Vol. 37, No. 3, 2014, pp. 713–728. https://doi.org/10.2514/1.62605.
[56] Tewari, A., Atmospheric and Space Flight Dynamics: Modeling and Simulation with MATLAB and Simulink, Birkhäuser, 2007.
[57] National Oceanic and Atmospheric Administration, "U.S. Standard Atmosphere 1976," 1976.
[58] Kluever, C., "Entry guidance performance for Mars precision landing," Journal of Guidance, Control, and Dynamics, Vol. 31, No. 6, 2008, pp. 1537–1544. https://doi.org/10.2514/1.36950.