A deterministic-statistical approach to reconstruct moving sources using sparse partial data
Yanfang Liu∗, Yukun Guo† and Jiguang Sun‡

Abstract
We consider the reconstruction of moving sources using partial measured data. A two-step deterministic-statistical approach is proposed. In the first step, an approximate direct sampling method is developed to obtain the locations of the sources at different times. Such information is coded in the priors, which is critical for the success of the Bayesian method in the second step. The well-posedness of the posterior measure is analyzed in the sense of the Hellinger distance. Both steps are based on the same physical model and use the same set of measured data. The combined approach inherits the merits of the deterministic method and Bayesian inversion, as demonstrated by the numerical examples.

Keywords: inverse source problem, moving point source, direct sampling method, Bayesian inversion, sparse data
Introduction

The detection and identification of moving targets using waves have many important applications such as radar, underwater acoustics, and through-the-wall imaging [1, 2, 4, 15]. In this paper, we consider the reconstruction of moving acoustic point sources, which can be mathematically formulated as a time domain inverse source problem. Several methods were proposed recently by the inverse problems community for time-domain inverse problems of wave equations, e.g., the algebraic method, time-reversal techniques, sampling-type methods, and the method of fundamental solutions [3, 6, 8, 20, 22]. See also [10] for a uniqueness result for the inverse moving source problem. In practice, the measured data are usually sparse, partial, and compromised by noise. Classical methods such as the linear sampling method and the direct sampling method might not provide satisfactory results. In general, the performance deteriorates significantly as the data become fewer [9, 19].

In recent years, the Bayesian method has become popular for inverse problems [12, 21]. For statistical inference, the parameters in the physical model are viewed as random variables. Using Bayes' formula, a posterior distribution of the unknown parameter can be formally derived. The main task is then to explore the posterior density function numerically. Statistical estimates such as the MAP (maximum a posteriori) estimate or CM (conditional mean) are used to characterize the unknown parameter. For some recent applications of the Bayesian method to inverse problems, we refer readers to [13, 18, 26] and the references therein.

∗ Michigan Technological University, Houghton, MI, USA. [email protected]
† Harbin Institute of Technology, Harbin, P. R. China. [email protected]
‡ Michigan Technological University, Houghton, MI, USA. [email protected]
Mathematical Model

We begin with the mathematical model governing the wave source radiation. Denote by Ω ⊂ R³ a simply-connected bounded domain. Consider the wave propagation due to a point excitation in a homogeneous and isotropic medium. The speed of the wave in the background medium is denoted by the constant c. Let d_Ω := sup_{x,y ∈ Ω} ‖x − y‖ and let T > d_Ω/c be the terminal time for measuring the radiated data.

For a single point source, the radiated wave field u satisfies the wave equation

    c^{−2} ∂_{tt} u(x, t) − Δu(x, t) = λ(t) δ(x − z(t))   in R³ × (0, T],   (2.1a)

with initial conditions

    u(x, 0) = ∂_t u(x, 0) = 0   in R³,   (2.1b)

where δ denotes the Dirac delta distribution, λ : R → R is the temporal pulse (magnitude of the impulsive load) with λ(t) = 0 for t < 0, and z : R⁺ → Ω is the trajectory of the moving point source. We refer to [22] for a regularity result on the solution of problem (2.1) in suitable spatial-temporal Sobolev spaces.

Throughout the paper, we assume that z ∈ C¹([0, T]). Furthermore, we assume that the point source moves slowly compared with the speed of the wave c, that is, its velocity v := dz/dt satisfies

    ‖v(t)‖ ≪ c,   ∀ t ∈ [0, T].   (2.2)

This is the case for many radar applications such as through-wall imaging radars, where c is the speed of electromagnetic waves and v(t) is the speed of the visually obscured target, e.g., a human being inside a building [15, 23].

The direct problem can be described as follows: given z(t) and λ(t), find the radiating field u satisfying (2.1). It is well known that the solution to this problem is given explicitly by the Liénard–Wiechert retarded potential (see, e.g., [11]):

    u_z(x, t) = λ(τ) / [ 4π ‖x − z(τ)‖ ( 1 − v(τ)·(x − z(τ)) / (c ‖x − z(τ)‖) ) ],   t ∈ (0, T],   (2.3)

where the retarded time τ is the solution of the equation

    t = τ + ‖x − z(τ)‖ / c.

When z(t) ≡ z is a static point source, the solution to (2.1) is simply

    u_z(x, t) = λ(t − ‖x − z‖/c) / (4π ‖x − z‖).   (2.4)
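As a small numerical illustration of (2.3) and (2.4): the retarded time can be found by a fixed-point iteration, and for a slowly moving source the Liénard–Wiechert field stays close to the static-source field evaluated at the current position. The wave speed, trajectory, and pulse below are made-up illustrative choices, not the paper's configuration.

```python
import numpy as np

c = 10.0                                     # wave speed (assumed)

def z(t):                                    # slow trajectory: |v| = 0.1 << c
    return np.array([0.1 * t, 0.0, 0.0])

def v(t):
    return np.array([0.1, 0.0, 0.0])

def lam(t):                                  # causal smooth pulse (assumed)
    return np.sin(2 * np.pi * t) ** 2 if t > 0 else 0.0

def retarded_time(x, t, iters=60):
    """Solve t = tau + |x - z(tau)| / c by fixed-point iteration;
    the map is a contraction since |v|/c << 1."""
    tau = t
    for _ in range(iters):
        tau = t - np.linalg.norm(x - z(tau)) / c
    return tau

def u_moving(x, t):                          # Lienard-Wiechert potential (2.3)
    tau = retarded_time(x, t)
    r = x - z(tau)
    R = np.linalg.norm(r)
    return lam(tau) / (4 * np.pi * R * (1.0 - v(tau) @ r / (c * R)))

def u_static(x, t, z0):                      # static-source field (2.4)
    R = np.linalg.norm(x - z0)
    return lam(t - R / c) / (4 * np.pi * R)

x, t = np.array([3.0, 0.0, 0.0]), 1.0
rel = abs(u_moving(x, t) - u_static(x, t, z(t))) / abs(u_moving(x, t))
```

For these parameters the relative discrepancy `rel` is below a few percent, which is the mechanism quantified by (2.5) below.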
For a fixed small enough T, since ‖v(t)‖ ≪ c for t ∈ [0, T], it holds that

    |t − τ| ≪ 1,   | v(τ)·(x − z(τ)) / (c ‖x − z(τ)‖) | ≪ 1.

Let U_T := Γ × [0, T] and define ‖u‖²_{U_T} := ∫₀ᵀ ∫_Γ |u(x, t)|² dx dt. Writing u_z for the field (2.3) of the moving source and ũ_z for the static field (2.4), we have that

    ‖u_z − ũ_z‖²_{U_T} = ∫₀ᵀ ∫_Γ |u_z(x, t) − ũ_z(x, t)|² dx dt ≪ 1.   (2.5)

In this paper, we consider the time domain inverse source problem (TISP) of recovering the trajectory z(t) of a moving point source for (2.1) from the measurement data u(x, t)|_{Γ × (0, T]}, where Γ ⊂ R³ \ Ω̄ is a measurement surface (see Fig 1 for a geometric illustration of the problem). In particular, we are interested in the case of partial data, e.g., when Γ is a fraction of a sphere. The temporal pulse λ(t) ∈ C¹([0, T]) is assumed to be a periodic function with period p ∈ R⁺.

The Approximate Direct Sampling Method

In this section we develop an approximate direct sampling method (ADSM) for the inverse problem TISP to recover the path z(t) from the measurement data u(x, t)|_{Γ × (0, T]}. The sampling-type methods share the basic idea of identifying the unknown target by constructing indicator functions over the probing/sampling region [5]. Once the indicators are evaluated, the geometrical information (e.g., location and shape) of the target can be recovered using a point-wise criterion to determine whether a sampling point lies inside or outside the target.

Fig 1. An illustration of the time-dependent inverse source problem.

As one of the sampling-type methods, the direct sampling method became popular recently due to several compelling features: it is fast, easy to implement, and in general robust to noise-contaminated measurements. However, direct sampling methods usually require a large amount of measurements and are inherently qualitative.
Nonetheless, the direct sampling methods can provide useful information even for partial measurement data.

Let D be a bounded domain in R³ such that Ω ⊂ D. The measurement sensors are located on a surface Γ ⊂ R³ \ D̄ (see Fig. 1). Define

    φ(x, t; y) = λ(t − ‖x − y‖/c) / (4π ‖x − y‖),   (x, t; y) ∈ Γ × (0, T] × D.   (3.1)

Let {T₁, ..., T_J} be a partition of (0, T] such that T_j = ((j − 1)p, jp], j = 1, 2, ..., J. Since the temporal function λ(t) is periodic with period p and ‖v‖ ≪ c, the displacement of the point source is small on each T_j. Thus z(t) can be approximated by a stationary point y ∈ R³ on T_j.

Given the measurement data u(x, t) with x ∈ Γ and t ∈ T_j, j = 1, 2, ..., J, we propose the following indicator function

    I(y; T_j) := ∫_{T_j} |⟨u(x, t), φ(x, t; y)⟩_{L²(Γ)}| / ( ‖u(x, t)‖_{L²(Γ)} ‖φ(x, t; y)‖_{L²(Γ)} ) dt,   y ∈ D,   (3.2)

where ⟨·,·⟩_{L²(Γ)} denotes the usual L² inner product. Note that, in (3.2), instead of the exact radiating field for a moving point source, we use the approximate field φ in (3.1) for a stationary point source y ∈ D. This is why we call the following algorithm the approximate direct sampling method (ADSM). It is clear that the use of φ simplifies the computation of the indicators. The numerical results in Section 5 show that this indicator function approaches its maximum when the sampling point y tends to the exact instantaneous location of the point source at each time step.

Now we are ready to present the approximate direct sampling method for the TISP.

ADSM: Approximate Direct Sampling Method

Step 1: Collect the data u(x, t) at the measurement points {x_l} ⊂ Γ and at a sequence of discrete times {t_n^j} ⊂ (0, T) with, for each j, {t_n^j} ⊂ T_j;

Step 2:
Generate the sampling points T_h for D;

Step 3:
Compute I(y; T_j) for each sub-interval T_j at every sampling point y ∈ T_h;

Step 4:
Locate the maximum of I(y; T_j) for each j and take the corresponding maximizer y_j as an approximation of z(t) on T_j. The sequence of locations {y_j}_{j=1}^J is the reconstructed moving path.

The performance of the ADSM depends on how much data u(x, t) are available. If the measured data are complete or nearly complete, for example, if Γ is a sphere with D inside and u(x, t) is available for x ∈ Γ and t ∈ (0, T], the algorithm can produce a good reconstruction of the trajectory z(t) of the moving point source. However, in practice, the measured data u(x, t) are usually partial and the reconstructions can be unsatisfactory. Nonetheless, the ADSM provides useful information about the moving source. Such information can be coded in the priors for the Bayesian inversion to obtain improved reconstructions.

Bayesian Inversion

In this section, we employ Bayesian inversion to refine the results obtained by the ADSM. The TISP is reformulated as a statistical inference problem for the source location. The approximate location obtained by the ADSM is coded in the priors for Bayesian inversion. The well-posedness of the posterior density function is analyzed using the Bayesian approximation error approach [12] and the framework in [21]. An MCMC algorithm is then used to explore the posterior density function of the source location.
The inverse problem can be viewed as seeking information about the unknown q given the measurement m under the model function F, which might be inaccurate and contain noise ξ, i.e., given m, reconstruct q such that

    m = F(q, ξ).   (4.1)

For statistical inverse problems, the parameters in (4.1) are treated as random variables. Denote the probability density function and the probability measure of a random variable γ by π_γ and μ_γ, respectively. If the arguments coincide with the probability density function, we drop the subscripts; for example, we write π_q(q) = π(q) and π_{m|q}(m|q) = π(m|q), but retain the subscript in π_e(m − F(q) − ζ). Note that the information about q obtained by the ADSM will be coded into the prior density π(q). The probability of m given q, called the likelihood function, is denoted by π(m|q).

Bayesian inversion seeks the posterior distribution π(q|m). Using Bayes' theorem, the joint distribution of the associated variables can be decomposed as

    π(q, m, ξ) = π(m, ξ | q) π(q) = π(q, ξ | m) π(m).

The posterior density of q is then given by

    π(q|m) = ∫ π(q, ξ | m) dξ.

Moreover, π(q|m) ∝ π(m|q) π(q), where ∝ means "proportional to". Point estimates of π(q|m) are often used as the solution of the Bayesian method. Two popular statistical point estimates are the maximum a posteriori estimate (MAP)

    q_MAP = arg max_q π(q|m)

and the conditional mean (CM)

    q_CM = E(q | m).

In (4.1), m = {u(x, t)} ∈ Y = R^{N_x × N_p} is the noisy measurement data collected on Γ × T_j, j = 1, 2, ..., J, where N_x and N_p are the numbers of observation positions and times, respectively. The unknown parameter q ∈ X = R³, i.e., the position of the point source on the time period T_j, is assumed to be static due to (2.2). The space Y is equipped with the Frobenius norm

    ‖A‖_Y = ( ∑_{i=1}^{N_p} ∑_{j=1}^{N_x} |a_{ij}|² )^{1/2},   A ∈ Y.
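The two point estimates above can be illustrated with a scalar toy problem: for the linear model m = q + e with Gaussian noise e ~ N(0, σ²) and Gaussian prior q ~ N(q*, τ²), the posterior is Gaussian, so MAP and CM coincide and have a closed form. All numbers below are made up for illustration.

```python
import numpy as np

sigma, tau = 0.5, 1.0                 # noise and prior standard deviations
q_star, m = 0.0, 1.2                  # prior mean and observed datum

# closed-form posterior N(mu_post, s2_post) from completing the square
s2_post = 1.0 / (1.0 / sigma**2 + 1.0 / tau**2)
mu_post = s2_post * (m / sigma**2 + q_star / tau**2)

# cross-check MAP and CM by quadrature of the unnormalized posterior
q = np.linspace(-10, 10, 200_001)
post = np.exp(-(m - q)**2 / (2 * sigma**2) - (q - q_star)**2 / (2 * tau**2))
post /= post.sum() * (q[1] - q[0])    # normalize on the grid
q_cm = np.sum(q * post) * (q[1] - q[0])   # conditional mean
q_map = q[np.argmax(post)]                # maximum a posteriori
```

In the nonlinear, non-Gaussian setting of the TISP no such closed form exists, which is why the posterior must be explored numerically.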
In this paper, a Markov chain Monte Carlo (MCMC) method is employed to explore the posterior distribution π(q|m) using the Bayesian approximation error approach, and the CM is adopted as the point estimate for q.

The main idea of the Bayesian approximation error (BAE) approach is to replace the accurate forward model by a less accurate but computationally feasible one [12–14]. Let q̄ = z(t) and let F̄ be the accurate forward model given by (2.3). We consider the following statistical model for the TISP:

    m = F̄(q̄) + e,   (4.2)

where e is the additive error. Furthermore, we assume the noise e follows a normal distribution with mean e* and covariance Σ_ee, i.e., π_e(e) = N(e*, Σ_ee). Note that evaluating the accurate model F̄(q̄) requires the exact path z(t), t ∈ (0, T], which is inaccessible. Fortunately, the exact path is not necessary for the BAE. One proceeds as follows. Let P be a projection operator and q = P q̄. Instead of the accurate forward model F̄(q̄), one uses the approximate forward operator

    F(q) = λ(t − ‖x − q‖/c) / (4π ‖x − q‖),   (4.3)

which is computationally cheaper and more approachable. Then the statistical model can be written as

    m = F̄(q̄) + e = F(q) + ζ + e,

where ζ = F̄(q̄) − F(q) is the approximation error. Due to the fact that ‖v(t)‖ ≪ c and (2.5), it is sufficient to study the well-posedness for the approximate forward operator F(q).

Assume that e is mutually independent of q and ζ. According to [13, 14], the posterior distribution satisfies

    π(q|m) ∝ π(m|q) π(q) = ( ∫_Y π_e(m − F(q) − ζ) π(ζ|q) dζ ) π(q).
In the approximation error approach, π(ζ|q) is approximated by a normal distribution. Assume that the normal approximation of the joint distribution π(ζ, q) is given by

    π(ζ, q) ∝ exp{ −(1/2) ( ζ − ζ*, q − q* )ᵀ ( Σ_ζζ  Σ_ζq ; Σ_qζ  Σ_qq )^{−1} ( ζ − ζ*, q − q* ) }.

Then we can write ζ | q ∼ N(ζ*_{|q}, Σ_{ζ|q}), where

    ζ*_{|q} = ζ* + Σ_ζq Σ_qq^{−1} (q − q*),   Σ_{ζ|q} = Σ_ζζ − Σ_ζq Σ_qq^{−1} Σ_qζ.

Define the normal random variable w = e + ζ|q. It holds that w | q ∼ N(w*_{|q}, Σ_{w|q}), where

    w*_{|q} = e* + ζ* + Σ_ζq Σ_qq^{−1} (q − q*),   Σ_{w|q} = Σ_ee + Σ_ζζ − Σ_ζq Σ_qq^{−1} Σ_qζ.

The approximate posterior distribution can then be written as

    π(q|m) ∝ exp( −(1/2) ( m − F(q) − w*_{|q} )ᵀ Σ_{w|q}^{−1} ( m − F(q) − w*_{|q} ) ) π(q).

Write G(q; m) := (1/2) ( m − F(q) − w*_{|q} )ᵀ Σ_{w|q}^{−1} ( m − F(q) − w*_{|q} ). Assume that μ^m is absolutely continuous with respect to μ_q, i.e., μ^m ≪ μ_q. Using Bayes' formula, we have that

    dμ^m/dμ_q (q) = (1/L(m)) exp( −G(q; m) ),

where

    L(m) := ∫_X exp( −G(q; m) ) dμ_q(q).

In this paper, the prior is chosen to be a normal distribution, i.e., q ∼ N(q*, Σ_qq). We note that in practice other prior distributions can be used as well.
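The conditional-Gaussian identities above (the Schur-complement formulas for ζ | q) can be verified numerically against the equivalent precision-matrix form. The block sizes and the joint covariance below are arbitrary illustrative choices.

```python
import numpy as np

# For a joint Gaussian vector (zeta, q), zeta | q is Gaussian with
#   mean = zeta* + S_zq S_qq^{-1} (q - q*),  cov = S_zz - S_zq S_qq^{-1} S_qz.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
S = A @ A.T + 5 * np.eye(5)            # SPD joint covariance; zeta: 2, q: 3
mu = np.array([1.0, -1.0, 0.5, 0.0, 2.0])
Szz, Szq = S[:2, :2], S[:2, 2:]
Sqz, Sqq = S[2:, :2], S[2:, 2:]

q_obs = np.array([0.2, -0.3, 1.0])
mean_cond = mu[:2] + Szq @ np.linalg.solve(Sqq, q_obs - mu[2:])
cov_cond = Szz - Szq @ np.linalg.solve(Sqq, Sqz)

# Cross-check via the precision matrix P = S^{-1}, partitioned the same way:
# cov_cond = P_zz^{-1} and mean_cond = mu_z - P_zz^{-1} P_zq (q - mu_q).
P = np.linalg.inv(S)
cov_check = np.linalg.inv(P[:2, :2])
mean_check = mu[:2] - np.linalg.solve(P[:2, :2], P[:2, 2:] @ (q_obs - mu[2:]))
```

The same algebra, with e added, gives the mean w*_{|q} and covariance Σ_{w|q} of the combined error w = e + ζ|q.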
We now study the properties of the operator F following [21].

Lemma 1. For every ε > 0, there exists C := C(ε) ∈ R such that, for all q ∈ X,

    ‖F(q)‖_Y ≤ exp( ε ‖q‖²_X + C ).

Proof.
Since λ(t) ∈ C¹([0, T]), we have |λ(t)| ≤ C₁. Define d₀ = inf{ ‖y₁ − y₂‖ : y₁ ∈ Γ, y₂ ∈ D } > 0. By (4.3), it holds that

    |F(q)| ≤ |λ(t − ‖x − q‖/c)| / (4π d₀) ≤ C₁ / (4π d₀).

Therefore, ‖F(q)‖_Y ≤ exp( ε ‖q‖²_X + C ) and the proof is complete.

Lemma 2. For every r > 0 there exists a C := C(r) > 0 such that, for all q₁, q₂ ∈ X with max{ ‖q₁‖_X, ‖q₂‖_X } < r,

    ‖F(q₁) − F(q₂)‖_Y ≤ C ‖q₁ − q₂‖_X.

Proof.
Define d̃ = sup{ ‖y₁ − y₂‖ : y₁ ∈ Γ, y₂ ∈ D }. Since λ(t) ∈ C¹([0, T]), we have

    |λ(t₁) − λ(t₂)| ≤ C₂ |t₁ − t₂|,   ∀ t₁, t₂ ∈ (0, T),

for some constant C₂. Write R_i = ‖x − q_i‖ and λ_i = λ(t − R_i/c) for i = 1, 2. Consequently,

    |F(q₁) − F(q₂)| = | λ₁ / (4π R₁) − λ₂ / (4π R₂) |
                    = | λ₁ R₂ − λ₂ R₁ | / (4π R₁ R₂)
                    ≤ | λ₁ R₂ − λ₁ R₁ | / (4π d₀²) + | λ₁ R₁ − λ₂ R₁ | / (4π d₀²)
                    ≤ C₁ | R₂ − R₁ | / (4π d₀²) + C₂ (d̃/c) | R₁ − R₂ | / (4π d₀²)
                    ≤ ( C₁ + C₂ d̃/c ) ‖q₁ − q₂‖_X / (4π d₀²),

where the last step uses the reverse triangle inequality |R₁ − R₂| ≤ ‖q₁ − q₂‖_X. It follows that ‖F(q₁) − F(q₂)‖_Y ≤ C ‖q₁ − q₂‖_X.
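Lemma 2 can be sanity-checked numerically: the observed difference quotients of F stay below the Lipschitz bound (C₁ + C₂ d̃/c) / (4π d₀²). The pulse, geometry, and constants below are toy assumptions (a single sensor and a smooth sin² pulse), not the paper's configuration.

```python
import numpy as np

c = 10.0
x = np.array([3.0, 0.0, 0.0])              # a sensor on Gamma (radius 3)
t = 1.0

def lam(s):                                # C^1 pulse: |lam| <= 1, |lam'| <= 2*pi
    return np.sin(2 * np.pi * s) ** 2

def F(q):                                  # operator (4.3) at a single (x, t)
    R = np.linalg.norm(x - q)
    return lam(t - R / c) / (4 * np.pi * R)

D = 0.5                                    # sampling points live in [-D, D]^3
d0 = 3.0 - np.sqrt(3) * D                  # lower bound on dist(Gamma, D)
dtil = 3.0 + np.sqrt(3) * D                # upper bound on sup distance
C1, C2 = 1.0, 2 * np.pi
bound = (C1 + C2 * dtil / c) / (4 * np.pi * d0**2)

rng = np.random.default_rng(3)
ratios = []
for _ in range(1000):
    q1, q2 = rng.uniform(-D, D, 3), rng.uniform(-D, D, 3)
    ratios.append(abs(F(q1) - F(q2)) / np.linalg.norm(q1 - q2))
max_ratio = max(ratios)                    # stays below `bound`
```

For this geometry the observed ratios are roughly two thirds of the theoretical bound, consistent with the (non-sharp) constants in the proof.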
Definition 3. The Hellinger distance between two probability measures μ₁ and μ₂ with a common reference measure ν is defined as

    d_Hell(μ₁, μ₂) = ( (1/2) ∫ ( √(dμ₁/dν) − √(dμ₂/dν) )² dν )^{1/2}.
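Definition 3 can be made concrete with a numerical check: for two one-dimensional Gaussians, with Lebesgue measure as the reference measure, quadrature of the defining integral reproduces the known closed form built from the Bhattacharyya coefficient. The means and standard deviations below are arbitrary.

```python
import numpy as np

# Closed form for N(m1, s1^2) vs N(m2, s2^2):
#   d^2 = 1 - sqrt(2 s1 s2 / (s1^2 + s2^2)) * exp(-(m1 - m2)^2 / (4 (s1^2 + s2^2)))

def gauss(x, m, s):
    return np.exp(-(x - m)**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

m1, s1, m2, s2 = 0.0, 1.0, 0.7, 1.5
x = np.linspace(-20.0, 20.0, 400_001)
dx = x[1] - x[0]
integrand = (np.sqrt(gauss(x, m1, s1)) - np.sqrt(gauss(x, m2, s2)))**2
d_num = np.sqrt(0.5 * np.sum(integrand) * dx)    # quadrature of Definition 3

bc = np.sqrt(2 * s1 * s2 / (s1**2 + s2**2)) \
     * np.exp(-(m1 - m2)**2 / (4 * (s1**2 + s2**2)))
d_exact = np.sqrt(1.0 - bc)
```

The Hellinger distance is always in [0, 1] and, unlike the total variation distance, controls the difference of expectations of square-integrable quantities, which is why it is the natural metric for Theorem 5 below.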
Theorem 4 (the Fernique theorem; [21, p. 530]). If μ is a Gaussian measure with mean 0 on a Banach space X such that μ(X) = 1, then there exists α > 0 such that

    ∫_X exp( α ‖x‖²_X ) μ(dx) < ∞.

The following theorem provides the well-posedness of the Bayesian approximation error approach.
Theorem 5.
Assume that μ_q is a Gaussian measure satisfying μ_q(X) = 1 and that μ^{m₁}, μ^{m₂} ≪ μ_q. For m₁ and m₂ with max{ ‖m₁‖_Y, ‖m₂‖_Y } ≤ r, there exists M = M(r) > 0 such that

    d_Hell(μ^{m₁}, μ^{m₂}) ≤ M ‖m₁ − m₂‖_Y.

Proof. Given

    L(m) = ∫_X exp( −G(q; m) ) dμ_q(q),

it is obvious that

    0 ≤ L(m) ≤ 1.   (4.4)

According to Lemma 1, we have that

    L(m) ≥ ∫_{‖q‖_X ≤ 1} exp( −|Σ_{w|q}^{−1}| ( ‖m − w*_{|q}‖²_Y + ‖F(q)‖²_Y ) ) dμ_q(q)
         ≥ ∫_{‖q‖_X ≤ 1} exp( −M₁ ) dμ_q(q)
         = exp( −M₁ ) μ_q{ ‖q‖_X ≤ 1 } > 0,   (4.5)

since μ_q is a Gaussian measure. Using the Fernique theorem (Theorem 4) and Lemma 1, it holds that

    |L(m₁) − L(m₂)| ≤ ∫_X | exp(−G(q; m₁)) − exp(−G(q; m₂)) | dμ_q(q)
                    ≤ ∫_X | G(q; m₁) − G(q; m₂) | dμ_q(q)
                    ≤ ∫_X |Σ_{w|q}^{−1}| | ‖m₁ − F(q) − w*_{|q}‖²_Y − ‖m₂ − F(q) − w*_{|q}‖²_Y | dμ_q(q)
                    ≤ ∫_X |Σ_{w|q}^{−1}| ( ‖m₁‖_Y + ‖m₂‖_Y + 2 ‖F(q) + w*_{|q}‖_Y ) dμ_q(q) ‖m₁ − m₂‖_Y
                    ≤ M₂ ‖m₁ − m₂‖_Y.   (4.6)

From the definition of the Hellinger distance, we obtain that

    2 d²_Hell(μ^{m₁}, μ^{m₂})
        = ∫_X [ ( exp(−G(q; m₁)) / L(m₁) )^{1/2} − ( exp(−G(q; m₂)) / L(m₂) )^{1/2} ]² dμ_q(q)
        ≤ 2 L(m₁)^{−1} ∫_X [ exp(−G(q; m₁)/2) − exp(−G(q; m₂)/2) ]² dμ_q(q)
          + 2 | L(m₁)^{−1/2} − L(m₂)^{−1/2} |² ∫_X exp(−G(q; m₂)) dμ_q(q).   (4.7)

With the Fernique theorem and Lemma 1, it holds that

    ∫_X [ exp(−G(q; m₁)/2) − exp(−G(q; m₂)/2) ]² dμ_q(q)
        ≤ (1/4) ∫_X | G(q; m₁) − G(q; m₂) |² dμ_q(q)
        ≤ M₃ ‖m₁ − m₂‖²_Y.
(4.8)

Using the bounds on L(m₁) and L(m₂), we have that

    | L(m₁)^{−1/2} − L(m₂)^{−1/2} | ≤ M₄ max( L(m₁)^{−3/2}, L(m₂)^{−3/2} ) | L(m₁) − L(m₂) |.   (4.9)

Combining (4.4)-(4.9), we conclude that d_Hell(μ^{m₁}, μ^{m₂}) ≤ M ‖m₁ − m₂‖_Y.

We are now ready to present the MCMC method to reconstruct the trajectory z(t) by incorporating into the priors the information obtained by the ADSM.

ADSM-MCMC:
1. Given p, J, K, and measured data u(x_l, t_n^j) on U_T.
2. For j = 1, ..., J, do
   a. Use the ADSM to find y_j for T_j and set q_j^(0) = y_j.
   b. For k = 0, ..., K − 1, do
      I. Generate q̃ = β y_j + (1 − β) q_{j−1} + σ N(0, 1) if j > 1 (β = 1 if j = 1);
      II. Compute α(q̃, q_j^(k)) = min( 1, π(q̃ | m) / π(q_j^(k) | m) );
      III. Draw α̃ ∼ U(0, 1). If α > α̃, then q_j^(k+1) = q̃; otherwise q_j^(k+1) = q_j^(k).
   c. Set q_j = (1/K) ∑_{k=1}^{K} q_j^(k).
3. Output the moving path {q_j}_{j=1}^J.

We remark that how to use the locations obtained by the ADSM is problem-dependent. For example, if the target moves rather slowly, one can set β = 0. If the target moves fast, one can choose β = 1.
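Step 2(b) can be sketched for a single window T_j as follows. The forward model, geometry, noise level, and all constants are toy stand-ins (a sin² pulse instead of the Ricker wavelet, six axis-aligned sensors), and the proposal is centred at the ADSM output y_j (the β = 1 case). One design note: the proposal above is an independence-type sampler, so the exact acceptance ratio also contains the proposal-density ratio; the sketch includes it.

```python
import numpy as np

rng = np.random.default_rng(2)
c = 10.0
sensors = 3.0 * np.array([[1., 0, 0], [0, 1, 0], [0, 0, 1],
                          [-1, 0, 0], [0, -1, 0], [0, 0, -1]])

def F(q, t=1.0):
    """Approximate forward operator in the spirit of (4.3), sin^2 pulse."""
    R = np.linalg.norm(sensors - q, axis=1)
    return np.sin(2 * np.pi * (t - R / c)) ** 2 / (4 * np.pi * R)

q_true = np.array([0.25, -0.1, 0.3])
sigma_e = 1e-3
m = F(q_true) + sigma_e * rng.standard_normal(6)   # synthetic data
y_j = q_true + 0.1 * rng.standard_normal(3)        # stand-in for ADSM output
tau2 = 0.1 ** 2                                    # prior variance around y_j

def log_post(q):                                   # Gaussian likelihood + prior
    return (-np.sum((m - F(q)) ** 2) / (2 * sigma_e ** 2)
            - np.sum((q - y_j) ** 2) / (2 * tau2))

K, sp = 20_000, 0.15
chain = np.empty((K, 3))
q_cur, lp_cur = y_j.copy(), log_post(y_j)
for k in range(K):
    q_new = y_j + sp * rng.standard_normal(3)      # beta = 1 proposal
    lp_new = log_post(q_new)
    # proposal log-densities (centred at y_j, independent of the chain state)
    lg_new = -np.sum((q_new - y_j) ** 2) / (2 * sp ** 2)
    lg_cur = -np.sum((q_cur - y_j) ** 2) / (2 * sp ** 2)
    if np.log(rng.uniform()) < (lp_new - lg_new) - (lp_cur - lg_cur):
        q_cur, lp_cur = q_new, lp_new
    chain[k] = q_cur
q_cm = chain[K // 2:].mean(axis=0)                 # CM over the second half
```

The CM of the chain lands near the true location and refines the (deliberately perturbed) ADSM estimate y_j, mirroring the role of step 2(c).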
Numerical Examples
In this section, we present several numerical examples to demonstrate the performance of the proposed method. On each T_j, we choose λ(t) to be the Ricker wavelet, which has been widely used in geophysics (see, e.g., [23]),

    λ(t) = ( 1 − 2π² f² (t − p/2)² ) exp( −π² f² (t − p/2)² ),   t ∈ [0, p],   (5.1)

where f ∈ R⁺ is the central frequency. The synthetic data u(x, t) are generated using (2.4) with added noise:

    u^ε(x, t) = u(x, t)(1 + ε r),

where ε is the noise level and r is a random number drawn from the uniform distribution U[−1, 1]. For all examples, the wave speed c, the terminal time T, and the period p are fixed. The number of equidistant time steps in one period is N_p. Since (0, T] = ∪_{j=1}^J T_j, where J = T/p, the time increment is Δt = p/N_p and N_T = J N_p. The time discretization is

    t_n^j = ( (j − 1) N_p + n ) Δt,   n = 1, 2, ..., N_p,  j = 1, ..., J.

Using the spherical coordinates (R, θ, η),

    x := R ( sin η cos θ, sin η sin θ, cos η ),

the measurement aperture Γ is a patch on the sphere of radius R with θ ∈ (0, 2π], η ∈ (0, π]. Three measurement data sets are given by

    { u(x, t) : (x, t) = (θ, η, t) ∈ S_i },   i = 1, 2, 3,

where the sets S₁, S₂, S₃ are shown in Fig 2. The spatial coverage of S₁ is the whole sphere, corresponding to the case of full aperture data; S₂ and S₃ cover only fractions of the sphere. In fact, for S₃, there are just 6 measurement locations.

Let the central frequency be f = 100 for the temporal function λ(t) in (5.1) in one period (see Fig 2(a)). The measurement locations are shown in Fig 2(b)-(d) for S₁, S₂, and S₃, respectively. We take 101 × 101 × 101 sampling points, denoted by T_h, uniformly distributed in the sampling domain D. For j = 1, 2, ..., J, the discrete version of the indicator function (3.2) can be written as

    I(y, T_j) := ∑_{n=1}^{N_p} ∑_{l=1}^{N_x} | u(x_l, t_n^j) φ(x_l, t_n^j; y) | / [ ( ∑_{n=1}^{N_p} ∑_{l=1}^{N_x} |u(x_l, t_n^j)|² )^{1/2} ( ∑_{n=1}^{N_p} ∑_{l=1}^{N_x} |φ(x_l, t_n^j; y)|² )^{1/2} ],   y ∈ T_h.

The ADSM reconstructs the locations of the source as the global maximum points y_j of the discrete indicator function I(y, T_j) for y ∈ T_h. The sequence {y_j}_{j=1}^J is the reconstructed path of z(t) obtained by the ADSM.
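The discrete indicator can be sketched for one time window as follows. The geometry, sensor count, grid, and noise level below are illustrative assumptions (a much coarser grid than 101³, sensors on a half-sphere); the data are generated with the static model (2.4), so the instantaneous location should be recovered up to the grid resolution.

```python
import numpy as np

c, p, f = 10.0, 0.1, 100.0

def ricker(t):
    """Ricker wavelet (5.1), centred at p/2."""
    s = (np.pi * f * (t - p / 2)) ** 2
    return (1.0 - 2.0 * s) * np.exp(-s)

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 30)          # N_x = 30 random sensors on the
eta = rng.uniform(0, np.pi / 2, 30)            # upper half-sphere of radius 2
sensors = 2.0 * np.stack([np.sin(eta) * np.cos(theta),
                          np.sin(eta) * np.sin(theta),
                          np.cos(eta)], axis=1)
times = np.linspace(0.2, 0.3, 20)              # N_p = 20 samples in one window

def phi(y):
    """Field (3.1) of a static source at y; array of shape (N_p, N_x)."""
    R = np.linalg.norm(sensors - y, axis=1)
    return ricker(times[:, None] - R[None, :] / c) / (4 * np.pi * R[None, :])

z0 = np.array([0.2, -0.1, 0.3])                # instantaneous source location
u = phi(z0) * (1 + 0.02 * rng.uniform(-1, 1, (20, 30)))   # 2% noise

def indicator(y):
    """Discrete indicator: normalized correlation over the window."""
    p_y = phi(y)
    return np.sum(np.abs(u * p_y)) / (np.linalg.norm(u) * np.linalg.norm(p_y))

g = np.linspace(-0.5, 0.5, 11)                 # coarse sampling grid T_h
grid = np.array([[a, b, d] for a in g for b in g for d in g])
y_hat = grid[np.argmax([indicator(y) for y in grid])]
```

By the Cauchy-Schwarz inequality the indicator is bounded by 1, and it peaks when the trial field φ(·; y) aligns with the data, i.e., near the true instantaneous location.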
Fig 2. The pulse function and measurement locations: (a) the temporal function λ(t); (b)-(d) measurement locations for S₁, S₂, and S₃.

For partial data, the reconstructed path obtained by the ADSM can be unsatisfactory, and the result becomes worse for fewer measurement data, as we shall see. Based on the information obtained by the ADSM, the Bayesian method is employed to improve the reconstructions. We set w|q to be Gaussian with mean a constant multiple of the N_x-by-1 vector of ones 1_{N_x×1} and covariance a scalar multiple of the N_x-by-N_x identity matrix I_{N_x×N_x}. For each T_j, K samples are drawn.

The exact moving path of the point source in the first example is a C-shaped curve, shown in Fig. 3(d). We take Σ_qq, a scalar multiple of the 3-by-3 identity, as the covariance of the prior distribution. The prior mean q_{1,*} for j = 1 is y₁, obtained from the ADSM on T₁. At step j = 2, ..., J, the prior mean q_{j,*} is selected to be the CM of the Markov chain {q_{j−1}^{(k)}}_{k=1}^K from the previous step, i.e., q_{j,*} = q_{j−1}.

The indicator functions I(y, T₁) using the measured data on S₁, S₂, S₃ are shown in Fig 3(a)-(c). The reconstructed paths using the ADSM and the ADSM-MCMC are presented in Fig. 3(e)-(h) and (i)-(l), respectively. When the observation data are sufficient (the full aperture case S₁), the ADSM yields a satisfactory reconstruction as shown in Fig. 3(e). It can be seen that, when the measured data decrease, the reconstructions of both the ADSM and the ADSM-MCMC deteriorate (the second row (e)-(h) and the third row (i)-(l)). Comparing (f) and (j), (g) and (k), or (h) and (l), the ADSM-MCMC significantly outperforms the ADSM, in particular when the measurement data are scarce. Moreover, the Bayesian inference is robust to noise, as indicated by Fig. 3(k) and (l): the reconstructions remain satisfactory at the noise levels considered.

Fig 3. Reconstructions of a C-shaped path: (a)-(c) are cross-section plots of the indicator functions I(y, T₁);
(d) is the exact trajectory z(t); (e)-(h) are the reconstructions by the ADSM using data on S₁, S₂, S₃; (i)-(l) are the corresponding reconstructions by the ADSM-MCMC. Here z, z_s, and z_b denote the exact path, the reconstruction by the ADSM, and the reconstruction by the ADSM-MCMC, respectively.

For the second example, we consider a bow-shaped path, shown in Fig. 4(d). The same form of prior covariance Σ_qq is used. Fig. 4 shows the reconstructions. Again, Fig. 4(a)-(c) shows the plots of the indicator functions using the measured data for S₁, S₂, S₃, respectively. The recovered moving paths by the ADSM are given in Fig. 4(e)-(h), which suggest that the ADSM can provide useful information, but is far from accurate when the measured data are partial. In the second step, the Bayesian method refines the paths based on the information obtained by the ADSM. Similar to the previous example, as the measurement data become fewer, the performance of the ADSM gets worse (Fig. 4(f)-(h)). The results of the ADSM-MCMC in Fig. 4(j)-(l) are significantly better than those produced by the ADSM.

Fig 4. Reconstruction of a bow-shaped path: (a)-(c) are the cross-section plots of the indicator functions I(y, T₁); (d) is the exact trajectory z(t); (e)-(h) are the reconstructions by the ADSM using data on S₁, S₂, S₃; (i)-(l) are the corresponding reconstructions by the ADSM-MCMC. Here z, z_s, and z_b denote the exact path, the reconstruction by the ADSM, and the reconstruction by the ADSM-MCMC, respectively.

For the last example, we consider the case of two well-separated moving point sources. Assume that the signal functions of these sources are the same; their exact moving paths z₁(t) and z₂(t) are shown in Fig. 5(d). Since (2.1a) is linear with respect to the sources, the proposed method works similarly to the case of a single point source.
To be precise, the source term of (2.1a) is the superposition

    λ(t) δ(x − z₁(t)) + λ(t) δ(x − z₂(t)).

Correspondingly, the exact solution to (2.1) is now given by

    u(x, t) = u₁(x, t; z₁(t)) + u₂(x, t; z₂(t)),

where u₁ and u₂ are given by (2.3) with z(t) replaced by z₁(t) and z₂(t), respectively. The covariance Σ_qq is chosen as before. The ADSM produces two local maxima in Fig. 5(a)-(c), indicating the presence of two point sources. Similar to the previous two examples, when the measured data are complete or almost complete, the ADSM provides satisfactory reconstructions. The performance gets worse as the data become fewer (Fig. 5(f)-(h)). The ADSM-MCMC improves the reconstructions significantly (Fig. 5(j)-(l)).

Conclusions

A deterministic-statistical approach is proposed to reconstruct moving sources using partial data. The approach contains two steps. The first step is a deterministic method to obtain qualitative information about the unknowns. The second step is Bayesian inversion with priors containing the information obtained in the first step. Both steps are based on the same physical model and use the same set of measured data. Numerical results show that this is a promising technique for tracking moving point sources using partial data.

The reconstruction by the ADSM deteriorates significantly as the data become fewer. However, it still contains useful information about the sources. Coded in the prior, such information is critical for the success of the MCMC. Alternatively, one can use a uniform prior and perform the Bayesian inversion directly. In Fig. 6, we show the reconstructed path of the moving source obtained with 5000 samples drawn from a uniform prior at noise level ε = 1%. It can be seen that the reconstructions in Fig. 6(c)-(d) are much worse than those in Fig. 3(i) and (k).

Acknowledgement
The second author is supported by NSFC grants 11971133 and 11601107. The collaboration started when the second and third authors attended the 2018 Workshop on Inverse Problems and Related Topics in Chengdu, funded in part by NSFC grant 11771068.
References

[1] C.E. Baum, ed., Detection and Identification of Visually Obscured Targets, Taylor & Francis, 1999.

[2] J.L. Buchanan, R.P. Gilbert, A. Wirgin and Y.S. Xu, Marine Acoustics: Direct and Inverse Problems, Society for Industrial and Applied Mathematics, Philadelphia, PA, 2004.
Fig 5. Reconstruction of two paths: (a)-(c) are the cross-section plots of the indicator functions I(y, T₁); (d) shows the exact trajectories; (e)-(h) are the reconstructions by the ADSM using data on S₁, S₂, S₃; (i)-(l) are the corresponding reconstructions by the ADSM-MCMC. Here z, z_s, and z_b denote the exact paths, the reconstructions by the ADSM, and the reconstructions by the ADSM-MCMC, respectively.

[3] B. Chen, Y. Guo, F. Ma and Y. Sun, Numerical schemes to reconstruct three-dimensional time-dependent point sources of acoustic waves, Inverse Problems, 36(7), (2020): 075009.

[4] M. Cheney and B. Borden, Fundamentals of Radar Imaging, CBMS-NSF Regional Conference Series in Applied Mathematics, 79, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2009.

[5] D. Colton and R. Kress, Inverse Acoustic and Electromagnetic Scattering Theory, 4th edition, Applied Mathematical Sciences, 93, Springer, Cham, 2019.

[6] J. Garnier and M. Fink, Super-resolution in time-reversal focusing on a moving source,
Wave Motion, 53, (2015): 80–93.
Fig 6. Reconstruction of the C-shaped path: (a)-(b) histograms of the samples {q^(k)}_{k=1}^K with the normal and the uniform prior, respectively; (c)-(d) reconstructions by MCMC with the uniform prior for two of the measurement data sets at ε = 1%. Here z_u denotes the path reconstructed with the uniform prior.

[7] Y. Guo, D. Hömberg, G. Hu, J. Li and H. Liu, A time domain sampling method for inverse acoustic scattering problems, J. Comput. Phys., 314, (2016): 647–660.

[8] Y. Guo, P. Monk and D. Colton, Toward a time domain approach to the linear sampling method,
Inverse Problems, 29, (2013): 095016.

[9] Y. Guo, P. Monk and D. Colton, The linear sampling method for sparse small aperture data, Appl. Anal., 95(8), (2016): 1599–1615.

[10] G. Hu, Y. Kian, P. Li and Y. Zhao, Inverse moving source problems in electrodynamics, Inverse Problems, 35(7), (2019): 075001.

[11] J.D. Jackson, Classical Electrodynamics, John Wiley & Sons, 2007.

[12] J.P. Kaipio and E. Somersalo, Statistical and Computational Inverse Problems, Applied Mathematical Sciences, 160, Springer-Verlag, New York, 2005.

[13] J.P. Kaipio, T. Huttunen, T. Luostari, T. Lähivaara and P.B. Monk, A Bayesian approach to improving the Born approximation for inverse scattering with high-contrast materials, Inverse Problems, 35(8), (2019): 084001.

[14] V. Kolehmainen, T. Tarvainen, S.R. Arridge and J.P. Kaipio, Marginalization of uninteresting distributed parameters in inverse problems: application to diffuse optical tomography, Int. J. Uncertain. Quantif., 1(1), (2011): 1–17.

[15] J. Li, Z. Zeng, J. Sun and F. Liu, Through-wall detection of human being's movement by UWB radar, IEEE Geoscience and Remote Sensing Letters, 9(6), (2012): 1079–1083.

[16] Z. Li, Z. Deng and J. Sun, Extended-sampling-Bayesian method for limited aperture inverse scattering problems, SIAM J. Imaging Sci., arXiv:2004.04609, 2020.

[18] J. Liu, Y. Liu and J. Sun, An inverse medium problem using Stekloff eigenvalues and a Bayesian approach, Inverse Problems, 35(9), (2019): 094004.

[19] X. Liu and J. Sun, Data recovery in inverse scattering: from limited-aperture to full-aperture, J. Comput. Phys., 386, (2019): 350–364.

[20] E. Nakaguchi, H. Inui and K. Ohnaka, An algebraic reconstruction of a moving point source for a scalar wave equation, Inverse Problems, 28(6), (2012): 065018.

[21] A.M. Stuart, Inverse problems: a Bayesian perspective, Acta Numer., 19, (2010): 451–559.

[22] O. Takashi, Real-time reconstruction of moving point/dipole wave sources from boundary measurements, Inverse Probl. Sci. Eng., 28(8), (2020): 1057–1102.

[23] K. Wang, Z. Zeng and J. Sun, Through-wall detection of the moving paths and vital signs of human beings, IEEE Geoscience and Remote Sensing Letters, 16(5), (2018): 717–721.

[24] X. Wang, Y. Guo, J. Li and H. Liu, Mathematical design of a novel input/instruction device using a moving acoustic emitter, Inverse Problems, 33(10), (2017): 105009.

[25] X. Wang, Y. Guo, J. Li and H. Liu, Two gesture-computing approaches by using electromagnetic waves, Inverse Problems & Imaging, 13(4), (2019): 879–901.

[26] Z. Yang, X. Gui, J. Ming and G. Hu, Bayesian approach to inverse time-harmonic acoustic scattering with phaseless far-field data.