Improving impact monitoring through Line Of Variations densification
arXiv preprint [astro-ph.IM], July 2020
A. Del Vigna (a,b), F. Guerra (a), G. B. Valsecchi (c,d)

(a) Space Dynamics Services s.r.l., via Mario Giuntini, Navacchio di Cascina, Pisa, Italy
(b) Dipartimento di Matematica, Università di Pisa, Largo Bruno Pontecorvo 5, Pisa, Italy
(c) IAPS-INAF, via Fosso del Cavaliere 100, 00133 Roma, Italy
(d) IFAC-CNR, via Madonna del Piano 10, 50019 Sesto Fiorentino, Italy
Abstract
We propose a densification algorithm to improve the Line Of Variations (LOV) method for impact monitoring, which can fail when the available information is too scarce, as may happen in difficult cases. The LOV method uses a 1-dimensional sampling to explore the uncertainty region of an asteroid. The close approaches of the sample orbits are grouped by time and LOV index to form the so-called returns, and each return is analysed to search for local minima of the distance from the Earth along the LOV. The strong non-linearity of the problem causes the occurrence of returns with so few points that a successful analysis can be prevented. Our densification algorithm tries to convert returns with at most 3 points into returns with 5 points, properly adding new points to the original return. Due to the complex evolution of the LOV, this operation is not necessarily achieved all at once: in this case the information about the LOV geometry derived from the first attempt is exploited for a further attempt. Finally, we present some examples showing that the application of our method can have remarkable consequences on impact monitoring results, in particular on the completeness of the virtual impactor search.
Keywords: Impact Monitoring, Line Of Variations, Densification, Generic completeness
1. Introduction
The main goal of impact monitoring is to establish whether an Earth-crossing asteroid could possibly impact our planet. This activity has to be performed as soon as new asteroids are discovered or as new observations are added to prior discoveries, and the resulting information has to be immediately disseminated to solicit follow-up. Currently, there are two independent impact monitoring systems, namely clomon-2 and Sentry, respectively operating at SpaceDyS (http://newton.spacedys.com/neodys2/index.php?pc=4.1) and at JPL (http://cneos.jpl.nasa.gov/sentry/), and providing the list of asteroids with a non-zero probability of collision with the Earth within a century.

Both clomon-2 and Sentry are based on the LOV method explained in Milani et al. (2005a). The basic idea is to represent the uncertainty region of the asteroid through the LOV, a curve in the initial orbital elements space, and to study its dynamical evolution in the future. Since an analytical treatment of the problem is not possible, the LOV is sampled by means of a suitable number of points (see Section 2.1), which are then propagated for 100 years in order to keep track of all the close approaches with the Earth. A classical tool to study a close approach is the associated Target Plane (TP) (Valsecchi et al., 2003). One of the advantages of using the TP is that it translates the possibility of an impact into a very simple geometric condition: the intersection of the trajectory with the TP has to be inside a disk centred on the Earth, with a suitable radius accounting for gravitational focusing. Therefore, the strategy of the LOV method is to assess the possibility of a collision in a given close approach by inferring the LOV geometry on the corresponding TP from the sampling nodes only. The reliability of this study thus depends on the number of LOV orbits that intercept the TP: in particular, if they are few, the information is sometimes too scarce to draw any conclusion. In practice, even if there exists an impacting portion of the LOV, in such cases its detection can be missed due to the lack of information. The worst case is represented by singletons, which appear when the strong non-linearity leaves only a lone point of the original sampling on the TP (see Section 3). Although there exist LOV sampling techniques that in principle guarantee a complete detection of the impact possibilities with probability down to a certain level, the previously outlined issues imply that this search cannot be fully complete in practice. This is a well-known problem, as pointed out in Del Vigna et al. (2019a), where the authors suggested a densification technique as a possible solution.

We propose a densification method consisting in the addition of new sample orbits in the LOV portions whose image on the TP is composed of no more than three points. This threshold is based on the completeness analysis presented in Del Vigna et al. (2019a), where it is shown that the loss of completeness actually occurs when the number of points on the TP is 1, 2, or 3. Another peculiarity of our method lies in the selection of the points to add, which is not a simple task, since the densification is meaningful exactly in the most difficult cases. Indeed, as we explain in Section 4, the algorithm is divided into two parts to account for the possible complexity of the LOV geometry on the TP. We tested our procedure on real cases, showing the actual improvements it brings to the impact monitoring problem (see Section 5).

Email address: [email protected] (A. Del Vigna)
Preprint submitted to Elsevier, July 7, 2020
2. The impact monitoring problem
In this section we briefly recall the main ideas of the LOV method for impact monitoring, introducing the notation and the definitions needed in what follows. The starting point is the solution x* ∈ R^N of a non-linear least squares fit, along with its covariance matrix Γ = Γ(x*), belonging to the space of the N × N real matrices M(N; R) (Milani and Gronchi, 2010). The standard case is N = 6, when the fit parameters are the six orbital elements, but N can also be larger if, additionally, some dynamical parameter is determined, e.g., when accounting for the Yarkovsky effect (Vokrouhlický et al., 2000). Non-gravitational forces can be relevant for a reliable impact risk assessment when the time horizon for the potential impact search is longer: currently, there are four cases that required the inclusion of the Yarkovsky effect in terms of hazard assessment, namely (101955) Bennu (Milani et al., 2009; Chesley et al., 2014), (99942) Apophis (Chesley, 2006; Giorgini et al., 2008; Vokrouhlický et al., 2015; Farnocchia et al., 2013), (29075) 1950 DA (Giorgini et al., 2002; Farnocchia and Chesley, 2014), and (410777) 2009 FD (Spoto et al., 2014; Del Vigna et al., 2019b).

According to the probabilistic interpretation of the least squares method, the nominal solution is surrounded by a set of orbits that are still compatible with the observational data set, the so-called confidence region. The prediction of possible impacts with the Earth has to consider all these orbits, up to a fixed confidence level σ > 0. When the non-linearity is mild, the confidence region can be approximated by the confidence ellipsoid

Z_lin^X(σ) := { x ∈ R^N : (x − x*)^⊤ C(x*) (x − x*) ≤ σ² },

where C(x*) = Γ(x*)^{−1} is the normal matrix.

The purpose of impact monitoring is to scan the confidence region looking for Virtual Impactors (VIs), which are connected subsets of initial conditions leading to a collision with the Earth. To this end, the confidence region is sampled by a finite set of orbits, called Virtual Asteroids (VAs). Currently, the algorithm shared by clomon-2 and Sentry uses a 1-dimensional sampling method based on the LOV, which is a smooth line in the orbital elements space (Milani et al., 2005b). The main advantage of this approach is that the set of VAs has a geometric structure, that is, they belong to a differentiable curve along which interpolation is possible. The LOV sampling computation provides a set of orbits {x(σ_i)}, i = −M, …, M, where the σ_i are the LOV parameters. The next step for impact monitoring consists in the propagation of each VA in the future (Milani et al., 1999), commonly for a time span of 100 years. As anticipated in the introduction, to detect the close approaches of a VA we first consider the TP, which is the plane passing through the Earth centre and orthogonal to the incoming asymptote of the hyperbola defining the two-body approximation of the VA trajectory at the time of closest approach. To avoid geometric complications, we call "close" only those approaches with a distance from the Earth centre of mass not exceeding some fixed value R_TP. Lastly, to keep track of a close approach we define a function f : R^N → R² that maps an orbit x experiencing a close encounter with the Earth to a point y = (ξ, ζ) ∈ R² on the TP.
This function is the composition of the propagation from the initial epoch to the closest approach time with the conversion to the TP coordinates. Actually, inside a given close approach there can be several local minima of the geocentric distance: the definition of f can be extended to each of these minima and consequently, in general, there is more than one TP trace corresponding to a single orbit x.

According to Milani et al. (2005a), the list of the close encounters of all the VAs is decomposed into showers and returns. In particular, first the close approaches are clustered by date to obtain the showers, and each shower is further divided into LOV segments with consecutive indices, the returns. It can happen that there is not a clear clustering in time among the encounters, causing the presence of very long showers and possibly multiple occurrences of some VA in the same return. In such cases, a further decomposition scheme is applied, as described in Del Vigna et al. (2019a), in order to produce returns free of duplications: this makes the interpolation along the LOV possible inside a return. Moreover, the algorithm guarantees the completeness of the decomposition procedure, as proved in that paper.

Each return corresponds to a set of TP traces, that is, a sampling of the projection of the LOV segment associated to the return itself. Then, each pair of consecutive TP traces y(σ_j) and y(σ_{j+1}) of each return is analysed to understand the geometry of the LOV in between, as explained in Milani et al. (2005a). More precisely, if r(σ) := ξ²(σ) + ζ²(σ) is the squared distance of the LOV point x(σ) from the Earth centre, the aim is to find the local minima of r in [σ_j, σ_{j+1}]. This information is provided by the sign and the zeroes of

f(σ) := dr/dσ (σ).
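As a toy illustration of this node-wise analysis (not the operational clomon-2 code), one can bracket the minima of r by looking for a sign change of a finite-difference estimate of f at consecutive nodes, and then refine each bracket with a regula falsi; the sample curve ξ(σ) = σ, ζ(σ) = σ² − 1 is invented for the example:

```python
import numpy as np

def minima_brackets(sigma, xi, zeta):
    """Intervals [sigma_j, sigma_{j+1}] that contain a local minimum of
    r(sigma) = xi^2 + zeta^2, detected from the sign of a
    finite-difference estimate of f = dr/dsigma at the nodes."""
    r = xi ** 2 + zeta ** 2
    f = np.gradient(r, sigma)            # crude stand-in for dr/dsigma
    return [(sigma[j], sigma[j + 1])
            for j in range(len(sigma) - 1)
            if f[j] < 0.0 < f[j + 1]]    # r decreasing, then increasing

def regula_falsi(f, a, b, tol=1e-10, itmax=60):
    """Plain regula falsi for a zero of f bracketed in [a, b]."""
    fa, fb = f(a), f(b)
    for _ in range(itmax):
        c = (a * fb - b * fa) / (fb - fa)   # secant intersection
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0.0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

# Invented LOV projection: xi = sigma, zeta = sigma^2 - 1, so that
# r = sigma^4 - sigma^2 + 1 has local minima at sigma = +/- 1/sqrt(2).
sig = np.linspace(-2.0, 2.0, 81)
brk = minima_brackets(sig, sig, sig ** 2 - 1.0)
f_exact = lambda s: 4.0 * s ** 3 - 2.0 * s     # analytic dr/dsigma
mins = [regula_falsi(f_exact, a, b) for a, b in brk]
```

On this example the two brackets straddle σ = ±1/√2 and the refinement recovers both minima; in the real problem each evaluation of f requires a full propagation to the TP, which is why the choice of the iterative method matters (Section 4.2).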
Inside a return, only some intervals between consecutive VAs contain a minimum of r, and they are identified by a geometric classification of the TP segment between y(σ_j) and y(σ_{j+1}) (see Milani et al. (2005a), Table 1). If the minimum approach distance can be small, by applying suitable iterative methods (regula falsi and Newton method with bounded steps) it is possible to determine the minimum distance and the corresponding LOV orbit x(σ*), with σ* ∈ (σ_j, σ_{j+1}). If the corresponding TP trace y(σ*) is inside the Earth impact cross section, then x(σ*) is an impactor and, by continuity of f, there exists a suitable neighbourhood of x(σ*) made up of impacting orbits: in this case we can claim to have found a VI.

The main assumption for the local TP analysis is the principle of simplest geometry, stating that the geometry of the LOV in each interval [σ_j, σ_{j+1}] is as simple as possible. This translates into two assumptions:

(SG1) the function f is defined over the whole interval [σ_j, σ_{j+1}], i.e., the LOV projection between the two traces does not exit the TP disk;

(SG2) the LOV geometry on the TP is the simplest one compatible with the known information at the nodes, such as the tangent vectors to the LOV projection and the sign of f at σ_j and σ_{j+1}.

If the return contains a large number of points, the previous hypotheses are a good approximation, making the TP analysis easier; problems can arise when the return is made up of a few points. In fact, the LOV geometry on the TP can be very wild and difficult to guess having the information at the nodes only. Since every close encounter introduces non-linearity, the LOV behaviour on the TP becomes progressively more complex as the number of close approaches increases. Indeed, each close approach typically stretches the LOV on the TP of the subsequent encounter: the more this effect accumulates, the smaller the number of points found on the TP.
The densification of the LOV sampling allows exactly the treatment of these cases, as explained in Section 4.

Two important local quantities measuring non-linear effects are the stretching and the width, defined to be the square roots of the eigenvalues of the propagated covariance matrix on the TP. From a geometric point of view, since the differential of f maps the confidence ellipsoid Z_lin^X(σ) onto the confidence ellipse Z_lin^Y(σ) on the TP, the stretching and the width can be seen as the lengths of the semimajor and semiminor axes of Z_lin^Y(1). Another quantity which is meaningful for our problem is the so-called stretching along the LOV, given by |dy/dσ|, since it represents the displacement of two TP points as a function of the difference between the corresponding LOV parameters. Note that the stretching and the stretching along the LOV coincide in the linear approximation.

A key concept in impact monitoring is the completeness of the VI search. The completeness limit can be formally defined as the highest impact probability of a VI that can escape detection. However, since this quantity cannot be explicitly computed, it is replaced with the generic completeness limit, further assuming the full linearity of f and that a single point on the TP is enough to identify a VI (Milani et al., 2005a). Note that the first hypothesis implies in particular that the trace of the LOV on the TP is simply a straight line passing through the Earth centre.

The completeness of the VI search is intimately related to the LOV sampling method. Indeed, each pair of consecutive LOV points is mapped to a given TP, and the distance between the corresponding traces depends on the differential of f. In Del Vigna et al. (2019a) it is proved that there exists an optimal step-size choice to achieve a fixed generic completeness level IP*. In particular, the step-size turns out to be inversely proportional to the probability density along the LOV, resulting in a sampling that is denser around the nominal solution and sparser towards the LOV tips. A maximum value for the step-size is thus used to avoid low resolution in the tails of the distribution.
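Under the simplifying assumption that the step is exactly Δσ(σ) = IP*/p(σ), capped at Δσ_max (a sketch of the rule, not the exact operational formula of Del Vigna et al. (2019a); the numerical values below are illustrative only), the sampling can be written as:

```python
import math

def gaussian_pdf(sigma):
    """One-dimensional Gaussian density along the LOV (mean 0, unit variance)."""
    return math.exp(-sigma ** 2 / 2.0) / math.sqrt(2.0 * math.pi)

def lov_sampling(ip_star, sigma_max, dsigma_max):
    """LOV nodes with a step inversely proportional to the local density,
    capped at dsigma_max: a sketch of the step-size rule, not the exact
    operational formula."""
    sigmas = [0.0]
    while sigmas[-1] < sigma_max:
        step = min(ip_star / gaussian_pdf(sigmas[-1]), dsigma_max)
        sigmas.append(sigmas[-1] + step)
    # The density is even in sigma, so mirror to the negative side.
    return sorted(set(-s for s in sigmas)) + sigmas[1:]

# Illustrative values only (not the paper's operational setup).
nodes = lov_sampling(ip_star=1e-3, sigma_max=5.0, dsigma_max=0.2)
```

The resulting grid is densest at σ = 0 and reaches the cap Δσ_max in the tails, reproducing the qualitative behaviour described above.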
3. Motivations for LOV densification
Since the generic completeness limit is a quantity defined under some simplifying assumptions, there is no guarantee that all the VIs with probability IP > IP* are actually found, thus the completeness level actually achieved has to be measured a posteriori. One possibility is explored in Del Vigna et al. (2019a), based on an empirical law to model the total number of VIs N as a function of the impact probability. In particular, the study shows that N follows a power law in IP for IP > IP*, and this result is obtained by fitting the contour of the histogram of N as a function of IP. The difference between the fitted power-law curve and the histogram implies that there is a loss of efficiency in the VI search for impact probabilities slightly above the generic completeness level IP*, that is, the number of actually detected VIs is less than the expected one. Indeed, the definition of generic completeness assumes that a single point on the TP is enough to find a VI if it exists, but in some difficult cases this hypothesis is not satisfied and, even worse, the detection of a VI can be missed also when there are a few points on the TP. A densification of the LOV sampling helps in revealing the actual geometry of the LOV on the TP, which in turn should fill the gap between the actual completeness level and the theoretical value IP*. The scope of our densification technique is to convert returns with very few points (≤ 3) into returns with 5 points, to make the VI search more effective even in these cases.

The strong non-linearity of the map f can make the principle of simplest geometry unreliable. For instance, the function f may not be defined over the whole interval [σ_j, σ_{j+1}]: this means that the TP is missed for some value of σ, indicating that the two TP points under consideration do not actually belong to the same return. If at some stage during the application of an iterative method the TP is missed for the current value of σ, the algorithm cannot proceed. Another possibility is the violation of assumption (SG2) due to the occurrence of multiple local minima of r in the same interval between consecutive LOV points, so that the iterative method applied converges to at most one of these local minima. The densification method described in this paper is in principle able to solve both these problems: indeed, it allows one to properly cut away from the study a LOV portion that possibly misses the TP (failure of (SG1)) and to separate multiple local minima into different LOV intervals (failure of (SG2)), as described in Section 4.

Singletons are the most demanding cases: clomon-2 treats them using the Newton method with bounded steps (Milani et al., 2005a), but this is not sufficient, since it often leads to an unsuccessful or incomplete VI detection. Our densification technique, converting in particular singletons into returns with more points, eases the TP analysis in such demanding cases and in turn increases the completeness of the VI scan.

We recall that the probability density function along the LOV is the one-dimensional Gaussian p(σ) = (1/√(2π)) exp(−σ²/2), where the mean σ = 0 corresponds to the nominal orbit x*.
4. Densification procedure
As anticipated in Section 3, our algorithm consists in densifying returns with 1, 2 or 3 points. The basic idea is to properly select intermediate LOV indices for the new points to add to the return, and then to apply the standard analysis (Section 2.1) to the densified return.

Formally, let R be a return of length n_R ≤ 3, let r_1 < … < r_{n_R} be the (integer) indices of its points, and let t_{S,i} and t_{S,f} be the initial and final times of the shower S containing R. If r is one of the real indices selected for the densification (see Section 4.1), the further steps of our method are the following:

(D1) computation of the LOV parameter σ_r corresponding to r;
(D2) computation of the LOV orbit x(σ_r);
(D3) computation of the TP trace y(σ_r), if it exists;
(D4) if step (D3) is successful, addition of y(σ_r) to the set of TP traces related to R.

We denote by D_r the ensemble of steps (D1)-(D4) above. To accomplish steps (D1) and (D2), first we compute the integer r̄ nearest to r, which corresponds to the VA x(σ_r̄) of the initial sampling. Then the LOV parameter σ_r and the LOV orbit x(σ_r) are determined by applying one step of the standard LOV sampling algorithm (Milani et al., 2005b). Regarding step (D3), in order to compute the trace y(σ_r), the corresponding orbit x(σ_r) needs to have a close approach in the time interval [t_{S,i}, t_{S,f}]. If this happens, we attribute the trace y(σ_r) to the return R; otherwise, step (D3) fails.

To densify R, we compute d := 5 − n_R real indices, corresponding to the points we would like to add in order to convert R into a return with 5 points. In this way we obtain a set of 5 real indices s_1 < … < s_5, containing the original return indices r_1, …, r_{n_R} and the new indices t_1, …, t_d. The details of the index selection are provided in Section 4.1. Once the indices are chosen, we apply the procedure D_{t_ℓ} to add the point with real index t_ℓ to the return, for all ℓ = 1, …, d. Since D_{t_ℓ} may fail, we associate to every index t_ℓ a Boolean value b(t_ℓ), which is 1 or 0 according to whether the corresponding point has been successfully added to the return or not. Moreover, for every i = 1, …, n_R, we set b(r_i) = 1, as the point with index r_i already belonged to R.

At the end of this first phase, some of the b(t_ℓ) can be 0: in that case, a second attempt at adding new points is performed. More precisely, we analyse each pair of consecutive indices s_ℓ and s_{ℓ+1}, with three possible cases according to their values of b.

• If b(s_ℓ) = b(s_{ℓ+1}) = 1, we assume that the LOV trace between these two indices is entirely contained in the TP disk. Thus we do not add a new point in between.

• If b(s_ℓ) = b(s_{ℓ+1}) = 0, we assume that the LOV trace is entirely outside the TP disk. Also in this case, no further point is considered.

• If b(s_ℓ) and b(s_{ℓ+1}) are different, the LOV is partially contained in the TP disk. Without loss of generality, suppose that b(s_ℓ) = 1 and b(s_{ℓ+1}) = 0. Then we try to add the point with index

s_ℓ + (s_{ℓ+1} − s_ℓ)/2^k,   (1)

where k is the lowest value in {1, …, k_max} for which the densification procedure succeeds. For the results of this paper we assume k_max = 3.

The above assumptions rely on the principle of simplest geometry, which is more and more reasonable as the intervals become smaller, as is the case when densifying.

Summarising, the aim of this second attempt is to add a new point between each pair of consecutive points with discordant b resulting from the first attempt. Nevertheless, even this further attempt may fail in particularly difficult cases, so the final densified return may not have the maximum possible number of points. Note that this maximum number is not necessarily 5: if the first attempt produces a configuration such that b(s_{ℓ−1}) = 1, b(s_ℓ) = 0, and b(s_{ℓ+1}) = 1, then a full success of the second attempt adds two points, one with index between s_{ℓ−1} and s_ℓ and the other between s_ℓ and s_{ℓ+1}.

4.1. Selection of the indices

The indices for the first attempt of the densification are selected according to the number of points n_R of the starting return.
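Before detailing the index selection, the two-phase bookkeeping just described can be sketched in Python. Here `on_tp` is a stand-in predicate for the whole procedure D_t (steps (D1)-(D4)), Eq. (1) with k_max = 3 drives the second attempt, and the function name and the toy `on_tp` model are ours, not the operational clomon-2 code:

```python
def densify_indices(r_indices, first_attempt, on_tp, k_max=3):
    """Two-phase densification sketch.

    r_indices: (integer) indices of the original return;
    first_attempt: the d = 5 - n_R new real indices of Section 4.1;
    on_tp(t): stand-in for the procedure D_t -- True when the orbit
    with real index t has a TP trace attributable to the return.
    Returns the sorted list of successfully added indices."""
    # First attempt: try to add every selected index.
    b = {t: on_tp(t) for t in first_attempt}
    b.update({r: True for r in r_indices})   # original points always count
    added = [t for t in first_attempt if b[t]]
    # Second attempt: for each pair of consecutive indices with
    # discordant b, move from the index on the TP towards the one off
    # it, halving the offset as in Eq. (1), up to k = k_max.
    s = sorted(b)
    for s0, s1 in zip(s, s[1:]):
        if b[s0] == b[s1]:
            continue                               # both in, or both out
        lo, hi = (s0, s1) if b[s0] else (s1, s0)   # lo is on the TP
        for k in range(1, k_max + 1):
            t = lo + (hi - lo) / 2 ** k
            if on_tp(t):
                added.append(t)
                break
    return sorted(added)

# Toy model: the TP is missed for real indices in [100.55, 100.95].
on_tp = lambda t: not (100.55 <= t <= 100.95)
new_pts = densify_indices([100, 101], [100.25, 100.5, 100.75], on_tp)
```

In the toy model the index 100.75 fails the first attempt, so the second attempt adds one point on each side of the interval where the LOV leaves and re-enters the TP disk.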
As anticipated, the method is applied to returns with n_R = 1 (singletons), n_R = 2 (doubletons) or n_R = 3 (tripletons). The densification of a singleton is achieved by adding some other LOV points around it, whereas for doubletons and tripletons no points are placed outside the return, since the analysis before the head and after the tail is already performed with the Newton method with bounded steps, when necessary.

Singletons.
The idea behind the densification of a return with only one point is to exploit the local quantities at that point. In particular, let σ_{r_1} be the LOV parameter of the singleton, let y_1 := y(σ_{r_1}) = (ξ_1, ζ_1) be the corresponding TP trace, let S_1 := dy/dσ(σ_{r_1}) be the derivative vector, and let α_1 ∈ (−π, π] be the angle from S_1 to the ζ-axis, so that Ŝ_1 = (sin α_1, cos α_1) is the unit vector of S_1. We consider the chord of the TP disk parallel to Ŝ_1 and passing through y_1: this chord is divided into two segments, one after and one before y_1 according to the direction of Ŝ_1 (see Figure 1). Their lengths are respectively

λ_± = √( R_TP² − (ξ_1 cos α_1 − ζ_1 sin α_1)² ) ∓ (ξ_1 sin α_1 + ζ_1 cos α_1).

The lengths λ_± can be converted into lengths of intervals in the LOV parameter by dividing them by the local stretching along the LOV, S_1 := |S_1|. This is just an estimate, since a full computation would require the knowledge of the stretching as a function of σ, whereas we only have its value at σ_{r_1}. Furthermore, the conversion has to take into account that the resulting length in σ cannot exceed the local step-size value Δσ_{r_1}: indeed, by definition of singleton, the neighbouring VAs miss the TP to which y_1 belongs. Hence, we compute the lengths in σ as

Δσ_± := min{ λ_±/S_1 , δ · Δσ_{r_1} }.

For the results of this paper we assume δ = 0.95 as a safety factor.

Suppose that f(σ_{r_1}) = dr/dσ(σ_{r_1}) > 0, that is, moving along the LOV in the positive direction, the distance between the corresponding TP trace and the Earth centre increases. In this case we place three points before r_1 and one point after. More precisely, according to the notation introduced in Section 4, we have d = 4 and we select as indices

t_k = r_1 − (k/3) · Δσ_−/Δσ_{r_1}, for k = 1, 2, 3, and t_4 = r_1 + (1/2) · Δσ_+/Δσ_{r_1}.

The other case, f(σ_{r_1}) = dr/dσ(σ_{r_1}) < 0, is treated analogously.
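A sketch of the singleton index selection follows. The chord lengths λ_±, the capped conversions Δσ_±, and the 1/3 and 1/2 spacing factors follow our reading of the (partly garbled) formulas above; the function name and the numerical example are ours:

```python
import math

def singleton_densification_indices(r1, xi, zeta, alpha, S, dsigma_r1,
                                    f_sign, R_TP, delta=0.95):
    """First-attempt indices for a singleton (sketch).

    (xi, zeta): TP trace of the lone point with real index r1;
    alpha: angle from the derivative vector S_1 to the zeta-axis;
    S: stretching along the LOV |S_1|; dsigma_r1: local step-size;
    f_sign: sign of f = dr/dsigma at the singleton; R_TP: TP disk radius."""
    # Half-chord and signed position of y_1 along the chord direction.
    half_chord = math.sqrt(R_TP ** 2
                           - (xi * math.cos(alpha) - zeta * math.sin(alpha)) ** 2)
    proj = xi * math.sin(alpha) + zeta * math.cos(alpha)
    lam_plus, lam_minus = half_chord - proj, half_chord + proj
    # Convert chord lengths to LOV-parameter lengths, capped by the
    # local step-size times the safety factor delta.
    dsig_plus = min(lam_plus / S, delta * dsigma_r1)
    dsig_minus = min(lam_minus / S, delta * dsigma_r1)
    fwd = dsig_plus / dsigma_r1    # forward interval, in index units
    bwd = dsig_minus / dsigma_r1   # backward interval, in index units
    if f_sign > 0:  # distance increasing forward: 3 points before, 1 after
        ts = [r1 - k / 3.0 * bwd for k in (1, 2, 3)] + [r1 + 0.5 * fwd]
    else:           # mirror case: 1 point before, 3 after
        ts = [r1 + k / 3.0 * fwd for k in (1, 2, 3)] + [r1 - 0.5 * bwd]
    return sorted(ts)

# Example: singleton at index 200 whose trace sits at the TP centre.
ts = singleton_densification_indices(200.0, 0.0, 0.0, 0.0,
                                     S=10.0, dsigma_r1=0.1, f_sign=+1, R_TP=1.0)
```

With the trace at the disk centre both chord segments equal R_TP, so the cap δ·Δσ_{r_1} is the binding constraint and all four new indices stay within one step of the singleton.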
Doubletons.
The return is composed of two points with indices r_1 < r_2, so that n_R = 2 and d = 3. The three points to add for the densification are placed between the two points of the return: the selected indices are

t_k = r_1 + k (r_2 − r_1)/4, for k = 1, 2, 3.

Figure 1: Geometrical construction for the densification of a singleton.
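In code the doubleton rule is a one-liner (hypothetical helper name; the division of [r_1, r_2] into quarters is our reading of the formula, as it is the only equal spacing of three interior points):

```python
def doubleton_densification_indices(r1, r2):
    """Three equally spaced indices strictly between r1 and r2,
    i.e. t_k = r1 + k*(r2 - r1)/4 for k = 1, 2, 3."""
    return [r1 + k * (r2 - r1) / 4.0 for k in (1, 2, 3)]
```

For consecutive integer indices, e.g. `doubleton_densification_indices(100, 101)`, this yields `[100.25, 100.5, 100.75]`.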
Tripletons.
The return is composed of three points with indices r_1 < r_2 < r_3, so that n_R = 3 and d = 2. We place one point between each pair of consecutive points, that is, we select as indices

t_1 = (r_1 + r_2)/2 and t_2 = (r_2 + r_3)/2.

4.2. The analysis of densified returns

During the return analysis, whenever possible, the search for the zeroes of f is performed on intervals [σ_{h_1}, σ_{h_2}] such that f(σ_{h_1}) f(σ_{h_2}) < 0, where h_1 < h_2 are two real indices. In case the return is not densified, we have h_2 − h_1 = 1, since the corresponding LOV points are just consecutive VAs. On the other hand, when the densification procedure is successfully applied to the return, we have h_2 − h_1 < 1. In the first case we adopt an accelerated version of the modified regula falsi, following Milani et al. (2005a). In principle that method could also be applied in the second case, but numerical investigations showed that convergence is not always guaranteed. Indeed, the computation of the function f is not numerically stable, since it involves the propagation to the TP and the computation of the eigenvalues of the propagated covariance matrix on the TP, and this effect is increasingly amplified as the interval becomes smaller. For this reason, when h_2 − h_1 < 1 we resort to the bisection method; failures can still occur when f is not defined over the whole interval [σ_{h_1}, σ_{h_2}] (see Section 3) or when the maximum number of iterations is exceeded. Although the use of the densification procedure lowers the chance of failures, there is no guarantee that they do not occur at all. Anyway, by numerical evidence, the bisection method over intervals with length less than 1 turns out to be the most effective one.

5. Results

The results presented in this section were obtained with the software AstOD, developed in the framework of the ESA SSA-NEO program. The software covers both the orbit determination (OD) and the impact monitoring (IM) functionalities: the OD component is operational at the NEO Coordination Centre (NEOCC, http://neo.ssa.esa.int/) since 2017, whereas the IM component was delivered in Spring 2019.

We introduced the densification procedure in our software AstOD and we performed the impact monitoring on a list of objects currently present in the NEODyS Risk List, as of June 2019. For each selected asteroid we computed two impactor tables, respectively with and without the densification. Here we report the results for two sample objects, namely 2017 WT and 2008 JL. For both cases we adopted a non-linear LOV sampling with a generic completeness level IP*, σ_max = 5, and a capped step-size Δσ_max.
Moreover, we add a separate section for the special case of (29075) 1950 DA: its impact monitoring is remarkably demanding, since it requires taking into account non-gravitational perturbations in the long-term orbit propagation, in the differential corrections, in the LOV sampling and in the TP analysis, as well as a careful computation of the impact probability.

The results of the densification procedure are described in detail for some significant returns. We provide each example with a diagram to give a quick view of the application of the algorithm. The diagrams share the following basic graphical conventions:

• the indices s_1, …, s_5 corresponding to the first attempt of densification are marked with a cross;
• the indices corresponding to the second attempt of the procedure are marked with a star;
• each point successfully added is surrounded by a grey circle;
• the indices r_1, …, r_{n_R} of the original LOV sampling are surrounded by a double grey circle;
• the location of each VI representative is indicated by a blue arrow.

Asteroid 2017 WT. This asteroid is a small (H = 28.
1) NEA of the Aten group, with a non-negligible chance of impacting the Earth in the next century. The currently available astrometry is quite limited, consisting of 24 optical observations spanning from November to December 2017. Indeed, the orbit is not very well-constrained, so that the LOV can extend very far from the nominal orbit. The chaoticity introduced by subsequent close approaches causes a complex behaviour of the LOV on the corresponding TPs. This results in a very large number of returns with few points, and hence in a large number of applications of our densification procedure. Furthermore, the number of detected VIs is 201 with the densification and 181 without it. The densification improved the VI search not only because it increased the number of VIs found, but also because some of the newly discovered VIs have impact probability above the completeness limit IP*. This is particularly remarkable in light of the discussion of Section 3.

The first example, represented in Figure 2, corresponds to a return of the 2112 shower of 2017 WT. The original return is a singleton, that is n_R = 1, thus the first phase of the densification tried to add four points. More precisely, since the distance r is decreasing at the singleton (i.e., at σ_{r_1}), one of the four points has index smaller and the other three larger than r_1, according to Section 4.1. The attempt succeeded for all the points but the rightmost one, so that in particular b(s_4) = 1 and b(s_5) = 0. This means that the LOV projection exits the TP at an index between s_4 and s_5, due to the rapidly varying stretching, which is the common situation around a singleton. Therefore, the second phase of the densification tries to add a further point with index in between, this time with success, thus yielding a final densified return with 5 points.

Figure 3 shows the original and densified returns on the 2112 TP. From the shape of the densified return it is clear that a minimum of the distance along the LOV exists and that it is located between its third and fourth point. Indeed, the distance r is decreasing at the third point and increasing at the fourth, that is, its derivative f(σ) is negative at the first endpoint and positive at the second one. Moreover, since the angles between the tangent vectors to the LOV projection and the ζ-axis indicate a large curvature of the LOV, we have an interrupted return (Milani et al., 2005a). This is also confirmed by the plot of Figure 3, showing a further refined sampling of the return. Additionally, the figure makes clear the existence of a VI, since the refined curve intersects the Earth disk.

Figure 2: Graphical representation of the densification procedure applied to the 2112 return of asteroid 2017 WT. The original return was a singleton: the first densification attempt resulted in three new points (encircled crosses) and the second attempt added a further point (encircled star). Clearly, the index values depend on the method used for the LOV sampling: they are reported here for the sake of completeness, but what matters are the relative distances.

Figure 3: Plot of the LOV projection on the 2112 TP for asteroid 2017 WT. The five black crosses joined by lines represent the densified return resulting from the densification procedure. In particular, the orange encircled cross marks the original singleton. The actual geometry of the return is revealed by the blue circles, which are obtained through a refinement of the previous densification. The Earth impact cross section is centred at the origin and has radius b_C.
To find the minimum of r over [σ_s, σ_s], the bisection method was applied, as always happens for the intervals of a densified return (see Section 4.2), yielding the above-mentioned VI, which in particular falls on 2112-11-23.98 and has impact probability IP = 1.…× 10^−…. It is worth mentioning that the VI would have been missed without the densification procedure, even though its impact probability is above the generic completeness level.
The second example is a doubleton in the 2114 shower of the same asteroid. The application of our procedure converted the doubleton into a return with 5 points by adding the three uniformly-spaced points foreseen in the first attempt (see Section 4.1), as represented in the diagram of Figure 4. Figure 5 shows the original and densified returns on the 2114 TP: the LOV geometry unveiled by the densification procedure was not predictable from the doubleton alone, and it is far from simple, since the LOV projection contains at least two reversals in the portion closer to the Earth.
Figure 4: Graphical representation of the densification procedure applied to the 2114 return of asteroid 2017 WT. The original return was a doubleton: the densification succeeded in adding three new points (encircled crosses) all at once in the first attempt. Clearly, the index values depend on the method used for the LOV sampling: they are reported here for the sake of completeness, but what matters are the relative distances.
Figure 5: Plot of the 2114 TP for asteroid 2017 WT. The orange crosses mark the original doubleton. The five black crosses joined by lines are the traces of the densified return. The Earth impact cross section is centred in the origin and has radius b_C = 2.… R⊕.
In particular, this allows the existence of at least two minima of the distance r along the LOV, one between the first and the second point and one between the subsequent pair.
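As an illustration, the bisection step used to locate such a minimum can be sketched as follows. This is a minimal sketch with a toy parabolic distance function and hypothetical names, not the operational clomon-2 code: since the distance r is decreasing at one endpoint and increasing at the other, the minimum is the zero of the derivative f on that interval.

```python
import math

def bisect_minimum(f, a, b, tol=1e-10, max_iter=200):
    """Locate a zero of the derivative f on [a, b], assuming f(a) < 0 < f(b),
    i.e. the distance r is decreasing at a and increasing at b."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f must change sign on [a, b]")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if abs(fm) < tol or (b - a) < tol:
            return m
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

# Toy model of the closest-approach distance along the LOV: a parabola in sigma
# with minimum at sigma = 1.3 (purely illustrative, not real TP data).
r = lambda s: (s - 1.3) ** 2 + 0.4
h = 1e-6
f = lambda s: (r(s + h) - r(s - h)) / (2 * h)  # numerical derivative of r

sigma_min = bisect_minimum(f, 0.0, 3.0)
```

In practice the derivative sign test at the interval endpoints is exactly the criterion used in the text to decide whether a minimum exists between two consecutive densified points.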
The actual LOV geometry can be seen in Figure 5 (blue circles). In the upper part the LOV passes three times close to the Earth, as suggested by the position of the points corresponding to s and s; then it leaves and re-enters the TP on the right side of the plot. Thus, in this particular case, the resolution foreseen for the doubleton densification turns out to be insufficient to reveal the splitting of the LOV into the two components. Nevertheless, the densification improves the knowledge of the LOV geometry for the portions contained in the TP, which are the relevant ones, since the interval containing the split does not contain any significant VI.
Although the configuration of the upper LOV portion also corresponds to two interrupted returns, unlike the previous example the derivative f of r assumes the same sign at the endpoints of both the intervals [σ_s, σ_s] and [σ_s, σ_s]. Such cases deserve a special analysis, according to Milani et al. (2005a) (interrupted failed configuration): in our example this analysis ends up with the detection of one VI in each interval, with impact probabilities IP = 8.…× 10^−… and IP = 1.…× 10^−…, respectively, the first on 2114-11-24.63 and the other on 2114-11-24.75. Note that the second VI is above the completeness level, and neither VI would have been found without applying the densification procedure, as already stressed for the previous example.
Asteroid 2008 JL. This asteroid is a NEA of the Apollo group, currently contained in the upper part of the risk list sorted by Palermo Scale, having
PS = −.95. Its nominal orbit is quite uncertain, due to the short observational arc, which spans only four days. The application of our method to the 2109 return is a particularly interesting example for several reasons. The original return is a doubleton, which is converted into a densified return with three additional points at the end of the overall procedure, as shown in Figure 6. Note that the first phase succeeded just for the last point, so that b(s) = 1, b(s) = b(s) = 0, and b(s) = b(s) = 1. This means that the LOV is not entirely contained in the TP between s and s, and in the second phase of the procedure the LOV shape on the TP is better understood by adding a point before s and a point after s. The strong non-linearity around the first point of the original return, which is confirmed by the failure at s, makes the addition of the point between s and s more difficult: indeed, the goal is reached at the maximum number of iterations k_max = 3 in equation (1). The subsequent return analysis established the existence of a minimum of the distance r, located between s and s, and corresponding to an impacting orbit on 2109-04-27.96. The related VI has IP = 2.…× 10^−… and would have been missed without densifying. As in the previous cases, we plot in Figure 7 a refinement of the densified return sampling to further validate our method: indeed, the behaviour of the corresponding blue curve is well-represented by the five points resulting from our procedure, which also accounts for the exit of the LOV from the TP. Moreover, by looking at the two impact monitoring systems, a remarkable fact comes out: Sentry detects the VI with an impact probability comparable to ours, whereas NEODyS does not find it, although the impact probability is above the completeness level. This again shows that the densification can actually improve the overall efficiency of the system in finding VIs.
Figure 6: Graphical representation of the densification procedure applied to the 2109 return of asteroid 2008 JL. The original return was a doubleton: the first densification attempt resulted in just one new point out of three (encircled cross), whereas the second attempt provided two further points (encircled stars). Note that the two stars without a circle indicate that the leftmost point added in the second attempt was obtained for k = 3 in (1). Clearly, the index values depend on the method used for the LOV sampling: they are reported here for the sake of completeness, but what matters are the relative distances.
Figure 7: Plot of the LOV projection on the 2109 TP for asteroid 2008 JL. The five black crosses joined by lines represent the densified return resulting from the densification procedure. Note that there is an interruption between the second and the third point of the densified return, due to the LOV exit from the TP. The orange encircled crosses mark the original doubleton. The actual geometry of the return is revealed by the blue circles, which are obtained through a refinement of the previous densification. The Earth impact cross section is centred in the origin and has radius b_C = 1.… R⊕.
5.2. The special case of (29075) 1950 DA
Asteroid (29075) 1950 DA was discovered in 1950 and then lost until December 2000, when it was recognised to be the object 2000 YK. The current optical observation data set covers a long arc, from 1950 to 2018. Furthermore, 12 radar observations were added during the two apparitions of 2001 and 2012. The large extent of the arc and the availability of radar measurements allow a very precise orbit determination, with the possibility to fit also the Yarkovsky parameter A. The inclusion of the Yarkovsky effect in the dynamical model is needed to make a reliable hazard assessment for the 2880 possible impact (Giorgini et al., 2002; Farnocchia and Chesley, 2014).
In particular, the value of A computed by AstOD is (−… ± …) × 10^−… au/d², which corresponds to a semimajor axis drift of da/dt = (−… ± …) × 10^−… au/My. This is well-consistent with the A estimate (−… ± …) × 10^−… au/d² of Del Vigna et al. (2018), obtained with the OrbFit software.
Concerning the impact monitoring of (29075), we considered the 7-dimensional space of the orbital elements and the Yarkovsky parameter, and we adopted the linear approximation of the LOV, since the initial confidence region is small. In particular, we used a LOV sampling with 1200 points per side over the interval |σ| ≤ …; the sampling contains an impacting orbit y, and we let y′ and y″ be the previous and subsequent traces, respectively. As is clear from Figure 8, the minimum of the distance from the Earth centre exists and it is located between y′ and y″. Indeed, the function f has opposite sign at the interval endpoints and the LOV curvature is negligible, so the regula falsi method easily achieved convergence, yielding a VI representative.
In general, given a point on the LOV leading to an impact, the impact probability of the associated VI is usually computed by integrating a 2-dimensional linearised probability density function over the Earth impact cross section on the TP. When the width is small (a few kilometres) and the impact probability is comparatively high, a more accurate estimate can be achieved by integrating the 1-dimensional probability density function p(σ) over the preimage under f ∘ x of the chord resulting from the intersection between the LOV trace and the Earth disk. This preimage is an interval [σ_i, σ_f] in the LOV parameter space, and its endpoints can be computed once the endpoints of the chord are known. When σ_i and σ_f have been
Figure 8: LOV projection on the 2880 TP for asteroid (29075), corresponding to the linear LOV with 1200 points per side over the interval |σ| ≤ ….
determined, the impact probability is
IP = (1/√(2π)) ∫_{σ_i}^{σ_f} e^{−σ²/2} dσ.
A way to obtain a good approximation of the chord endpoints is to perform a local densification of the return between y′ and y″, so as to place a suitable number of points along the chord. This strategy also allows the numerical computation of the above integral as
IP ≃ (1/√(2π)) Σ_{j=1}^{n−1} e^{−σ_j²/2} (σ_{j+1} − σ_j),
where {σ_j}_{j=1,…,n} are the values of the LOV parameter corresponding to the impacting LOV orbits. This case represents another example in which the densification procedure can be useful: here we do not start from a return with few points, but the aim is to obtain a reliable computation of the impact probability. In particular, to estimate the impact probability of (29075) in 2880, we convert {y′, y, y″} into a return with 31 points, 20 of which fall in the Earth impact cross section. The resulting endpoints are σ_i = 1.… and σ_f = 1.…, and the corresponding impact probability is IP = 1.…× 10^−…, which is consistent with the value 1.…× 10^−… computed by Sentry. This agreement is particularly remarkable, since the JPL team performed the impact monitoring with a completely different strategy, based on a Monte Carlo method (Farnocchia and Chesley, 2014).
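The 1-dimensional estimate above can be sketched in a few lines of code. In this sketch the endpoint values and the function names are illustrative placeholders, not the actual σ_i, σ_f computed for (29075); the Riemann sum is checked against the closed form obtained from the error function.

```python
import math

def impact_probability(sigma_i, sigma_f, n=31):
    """Approximate IP = (1/sqrt(2*pi)) * integral of exp(-sigma^2/2) over
    [sigma_i, sigma_f] with a left Riemann sum over n points along the chord,
    mirroring the discrete formula in the text."""
    sigmas = [sigma_i + (sigma_f - sigma_i) * j / (n - 1) for j in range(n)]
    total = sum(math.exp(-sigmas[j] ** 2 / 2) * (sigmas[j + 1] - sigmas[j])
                for j in range(n - 1))
    return total / math.sqrt(2 * math.pi)

def impact_probability_exact(sigma_i, sigma_f):
    """Closed form of the same Gaussian integral, via the error function."""
    return 0.5 * (math.erf(sigma_f / math.sqrt(2))
                  - math.erf(sigma_i / math.sqrt(2)))

# Illustrative endpoints (NOT the actual values for (29075) 1950 DA):
ip_num = impact_probability(1.40, 1.46)
ip_ref = impact_probability_exact(1.40, 1.46)
```

For a chord as narrow as in the 2880 case, the sum over a few tens of densified points already agrees with the closed form to several digits, which is why 31 points suffice in practice.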
6. Conclusions
In this paper we presented a densification algorithm to improve the completeness of the VI search when applying the LOV method. Indeed, although the LOV sampling proposed in Del Vigna et al. (2019a) should guarantee the achievement of a pre-fixed generic completeness level, in practice VIs with impact probability close to the completeness level can escape detection. Typically, this is the case for returns with very few points, which indicate a strong non-linearity introduced by previous close approaches.
The idea of our algorithm is to densify returns with length at most 3 with a procedure consisting of two steps. The first attempt tries to obtain returns with 5 points, where the indices of the new points are computed according to the structure of the original return. The addition of a new point requires the selection of its real index, the interpolation of the LOV at that index, and the propagation to the TP corresponding to the original return. In particular, this last operation may fail, since the LOV can exit the TP around the selected index, breaking the assumption of the simplest geometry. In this case, a second attempt is performed to add a new point between a successful and an unsuccessful point of the first attempt. The resulting, possibly densified, return is then analysed in the standard way (see Section 2.1). The whole densification process increases the global computational load of the impact monitoring run: indeed, on the one hand, every time the algorithm tries to add a new point a propagation is performed, and on the other hand, longer returns may increase the use of the iterative methods that search for minimum distance points. Nevertheless, this is a minor issue thanks to the currently available computational resources.
The results reported in this paper show that our method has two main implications in impact monitoring.
As the example of 2017 WT suggests, the densification procedure not only increases the number of computed VIs, but also allows the detection of VIs with impact probability above the generic completeness level. This result is particularly meaningful, since it indicates that our densification method represents a way to fill the gap between the actual completeness level and the theoretical generic completeness, as discussed in Section 3. In particular, a run of the entire risk list including densification is required to obtain histograms analogous to those presented in Del Vigna et al. (2019a). With these new histograms it will be possible to measure the effective improvement due to the densification on the completeness of the impact monitoring problem. This will be the subject of future research.
Another example in which the densification technique turns out to be useful is represented by asteroid (29075) 1950 DA. This asteroid is one of the most remarkable cases currently present in the risk lists of both NEODyS and JPL, since it has a comparatively high probability of impacting the Earth in 2880 and its hazard assessment also involves the Yarkovsky effect. As Figure 8 shows, the original LOV sampling already contains an impacting orbit, so the detection of the VI is straightforward. Nevertheless, a local densification around this orbit allows the addition of a suitable number of points in the Earth impact cross section, so that we can resort to a 1-dimensional estimate of the impact probability, which is known to be more accurate when the width is small and the probability is high.
Lastly, it is worth mentioning that the densification algorithm can be applied for a more general purpose. If a return containing impacting orbits is suitably densified, at least locally in a neighbourhood of the VI, the determination of the VI representative could be improved in such a way that the selected orbit is as close as possible to the VI centre.
This turns out to be important when the VI representative is used as a starting point for further predictions, as happens for the semilinear method presented in Dimare et al. (2020) to compute the impact corridor of an Earth-impacting asteroid. More general algorithms for a local and possibly adaptive densification of the LOV sampling will be the subject of future research.

Acknowledgements
This work is devoted to the memory of Prof. Andrea Milani Comparetti. By writing this paper we are keeping a promise: Andrea gave us the hint for the densification method, which he really cared about. We would like to thank him for everything he taught us over the years.
A. Del Vigna and F. Guerra acknowledge support by the company SpaceDyS. This research was conducted under European Space Agency contract no. 4000123583/18/D/MRP “P3-NEO-XIII NEODyS Migration Part 2”.
References
Chesley, S. R., 2006. Potential impact detection for Near-Earth asteroids: the case of 99942 Apophis (2004 MN4). In: Daniela, L., Sylvio Ferraz, M., Angel, F. J. (Eds.), Asteroids, Comets, Meteors. Vol. 229 of IAU Symposium. pp. 215–228.
Chesley, S. R., Farnocchia, D., Nolan, M. C., Vokrouhlický, D., Chodas, P. W., Milani, A., Spoto, F., Rozitis, B., Benner, L. A. M., Bottke, W. F., Busch, M. W., Emery, J. P., Howell, E. S., Lauretta, D. S., Margot, J.-L., Taylor, P. A., Jun. 2014. Orbit and bulk density of the OSIRIS-REx target Asteroid (101955) Bennu. Icarus 235, 5–22.
Conte, S. D., De Boor, C. W., 1980. Elementary Numerical Analysis: An Algorithmic Approach, 3rd Edition. McGraw-Hill Higher Education.
Del Vigna, A., Faggioli, L., Spoto, F., Milani, A., Farnocchia, D., Carry, B., Sep. 2018. Detecting the Yarkovsky effect among near-Earth asteroids from astrometric data. Astronomy & Astrophysics 617, A61.
Del Vigna, A., Milani, A., Spoto, F., Chessa, A., Valsecchi, G. B., Mar. 2019a. Completeness of Impact Monitoring. Icarus 321, 647–660.
Del Vigna, A., Roa, J., Farnocchia, D., Micheli, M., Tholen, D., Guerra, F., Spoto, F., Valsecchi, G. B., Jul. 2019b. Yarkovsky effect detection and updated impact hazard assessment for near-Earth asteroid (410777) 2009 FD. Astronomy & Astrophysics 627, A1.
Dimare, L., Del Vigna, A., Bracali Cioci, D., Bernardi, F., 2020. Use of the semilinear method to predict the impact corridor on ground. Celestial Mechanics and Dynamical Astronomy 132 (3), 20.
Farnocchia, D., Chesley, S. R., Feb. 2014. Assessment of the 2880 impact threat from Asteroid (29075) 1950 DA. Icarus 229, 321–327.
Farnocchia, D., Chesley, S. R., Chodas, P. W., Micheli, M., Tholen, D. J., Milani, A., Elliott, G. T., Bernardi, F., May 2013. Yarkovsky-driven impact risk analysis for asteroid (99942) Apophis. Icarus 224, 192–200.
Giorgini, J. D., Benner, L. A.
M., Ostro, S. J., Nolan, M. C., Busch, M. W., Jan. 2008. Predicting the Earth encounters of (99942) Apophis. Icarus 193, 1–19.
Giorgini, J. D., Ostro, S. J., Benner, L. A. M., Chodas, P. W., Chesley, S. R., Hudson, R. S., Nolan, M. C., Klemola, A. R., Standish, E. M., Jurgens, R. F., Rose, R., Chamberlin, A. B., Yeomans, D. K., Margot, J.-L., Sep. 2002. Asteroid 1950 DA's Encounter with Earth in 2880: Physical Limits of Collision Probability Prediction. In: AAS/Division of Dynamical Astronomy Meeting 36