Modified Tikhonov regularization for identifying several sources
Ole Løseth Elvetun∗ and Bjørn Fredrik Nielsen†

November 10, 2020
Abstract
We study whether a modified version of Tikhonov regularization can be used to identify several local sources from Dirichlet boundary data for a prototypical elliptic PDE. This paper extends the results presented in [5]. It turns out that the possibility of distinguishing between two, or more, sources depends on the smoothing properties of a second or fourth order PDE. Consequently, the geometry of the involved domain, as well as the position of the sources relative to the boundary of this domain, determines the identifiability. We also present a uniqueness result for the identification of a single local source. This result is derived in terms of an abstract operator framework and is therefore not only applicable to the model problem studied in this paper. Our schemes yield quadratic optimization problems and can thus be solved with standard software tools. In addition to a theoretical investigation, this paper also contains several numerical experiments.
Keywords:
Inverse source problems, PDE-constrained optimization, Tikhonov regularization, nullspace, numerical computations.
We will study the following problem:

$$\min_{(f,u)\in F_h\times H^1(\Omega)}\left\{\frac12\|u-d\|^2_{L^2(\partial\Omega)} + \frac12\alpha\|Wf\|^2_{L^2(\Omega)}\right\} \tag{1}$$

subject to

$$-\Delta u + \epsilon u = f \ \text{in } \Omega, \qquad \frac{\partial u}{\partial n} = 0 \ \text{on } \partial\Omega, \tag{2}$$

where $F_h$ is a finite dimensional subspace of $L^2(\Omega)$, $W: F_h \to F_h$ is a linear regularization operator, $\alpha$ is a regularization parameter, $d$ represents Dirichlet boundary data, $\epsilon$ is a positive constant, $n$ denotes the outwards pointing unit normal vector of the boundary $\partial\Omega$ of the bounded domain $\Omega$, and $f$ is the source. Depending on the choice of $W$, we obtain different regularization terms, including the standard version $W = I$ (the identity map).

∗ Faculty of Science and Technology, Norwegian University of Life Sciences, P.O. Box 5003, NO-1432 Ås, Norway. Email: [email protected].
† Faculty of Science and Technology, Norwegian University of Life Sciences, P.O. Box 5003, NO-1432 Ås, Norway. Email: [email protected]. Nielsen's work was supported by The Research Council of Norway, project number 239070.

The purpose of solving (1)-(2) is to estimate the unknown source $f$ from the Dirichlet boundary data $u = d$ on $\partial\Omega$. Mathematical problems similar to this occur in numerous applications, e.g., in EEG investigations and in the inverse ECG problem, and have been studied by many scientists, see, e.g., [1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. A more detailed description of previous investigations is presented in [5].

In [5] we showed with mathematical rigor that a particular choice of $W$ almost enables the identification of the position of a single local source from the boundary data. That paper also contains numerical experiments suggesting that two or three local sources can, in some cases, be recovered. The purpose of this paper is to explore the several-sources situation in more detail, both theoretically and experimentally.
Moreover, we prove that our particular choice of $W$, which will be presented below, enables the precise recovery of a single local source.

Let us consider the abstract operator equation

$$K_h x = b, \tag{3}$$

where $K_h: X \to Y$ is a linear operator with a nontrivial nullspace and possibly very small singular values, $X$ and $Y$ are real Hilbert spaces, $X$ is finite dimensional and $b \in Y$. (For the problem (1)-(2), $K_h$ is the forward operator

$$K_h: F_h \to L^2(\partial\Omega), \quad f \mapsto u|_{\partial\Omega},$$

$F_h$ is a finite dimensional subspace of $L^2(\Omega)$, and $u$ is the unique solution of the boundary value problem (2) for a given $f$.)

Applying traditional Tikhonov regularization yields the approximation

$$x_\alpha = \arg\min_x\left\{\frac12\|K_h x - b\|^2_Y + \frac12\alpha\|x\|^2_X\right\}, \tag{4}$$

and, according to standard theory, the minimum norm least squares solution $x^*$ of (3) satisfies

$$x^* = \lim_{\alpha\to 0} x_\alpha = K_h^\dagger b \in N(K_h)^\perp,$$

where $N(K_h)^\perp$ denotes the orthogonal complement of the nullspace $N(K_h)$ of $K_h$, and $K_h^\dagger$ represents the Moore-Penrose inverse of $K_h$.

Throughout this paper we assume that $B = \{\phi_1, \phi_2, \ldots, \phi_n\}$ is an orthonormal basis for $X$ and that

$$K_h(\phi_i) \ne c\,K_h(\phi_j) \quad \text{for } i \ne j \text{ and } c \in \mathbb{R}. \tag{5}$$

That is, the images under $K_h$ of the basis functions are not allowed to be parallel. Note that (5) asserts that none of the basis functions belong to the nullspace $N(K_h)$ of $K_h$. (For PDE-constrained optimization problems one can, e.g., choose basis functions with local support. We will return to this matter in subsection 2.2.)

Throughout this text,

$$P: X \to N(K_h)^\perp \tag{6}$$

denotes the orthogonal projection of elements in $X$ onto $N(K_h)^\perp$. In [5] we investigated whether a single basis function $\phi_j$ can be recovered from its image $K_h\phi_j$. More specifically, using the fact that $K_h^\dagger K_h = P$, we observe that the minimum norm least squares solution $x^*_j$ of

$$K_h x = K_h\phi_j \tag{7}$$

is

$$x^*_j = K_h^\dagger(K_h\phi_j) = P\phi_j. \tag{8}$$
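The vanishing-regularization limit $x^* = \lim_{\alpha\to 0}x_\alpha = K_h^\dagger b$, and the recovery of a basis-function index from its image, can both be illustrated with a small numerical sketch. Everything here is an illustrative stand-in, not the paper's operator: a random matrix plays the role of $K_h$, the standard basis of $\mathbb{R}^n$ plays $B$, and for a generic random matrix condition (5) holds with probability one:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 8                               # m < n: the stand-in for K_h has a nullspace
K = rng.standard_normal((m, n))
b = K @ rng.standard_normal(n)

# Tikhonov solutions tend to the minimum norm least squares solution as alpha -> 0:
x_alpha = np.linalg.solve(K.T @ K + 1e-10 * np.eye(n), K.T @ b)
x_star = np.linalg.pinv(K) @ b            # K^dagger b
print(np.linalg.norm(x_alpha - x_star))   # small

# Index recovery: the minimum norm solution of K x = K e_j, with component i
# divided by ||P e_i|| (the weighting used by Method I below), peaks at i = j.
P = np.linalg.pinv(K) @ K                 # orthogonal projection onto N(K)^perp
w = np.linalg.norm(P, axis=0)             # ||P e_i|| for the standard basis vectors
j = 3
x_star_j = np.linalg.pinv(K) @ K[:, j]    # equals P e_j
print(int(np.argmax(x_star_j / w)))       # 3
```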
Since $K_h$ has a nontrivial nullspace, it is by no means obvious that $\phi_j$ can be recovered from its image $K_h(\phi_j)$. The operator $W: X \to X$ is defined by

$$W\phi_i = \|P\phi_i\|_X\,\phi_i \quad \text{for } i = 1, 2, \ldots, n. \tag{9}$$

It follows from (8), the orthonormality of the basis $B = \{\phi_1, \phi_2, \ldots, \phi_n\}$ and basic properties of orthogonal projections that

$$W^{-1}x^*_j = W^{-1}P\phi_j = W^{-1}\sum_{i=1}^n (P\phi_j, \phi_i)_X\,\phi_i = W^{-1}\sum_{i=1}^n (P\phi_j, P\phi_i)_X\,\phi_i = \sum_{i=1}^n \left(P\phi_j, \frac{P\phi_i}{\|P\phi_i\|_X}\right)_X \phi_i = \|P\phi_j\|_X \sum_{i=1}^n \left(\frac{P\phi_j}{\|P\phi_j\|_X}, \frac{P\phi_i}{\|P\phi_i\|_X}\right)_X \phi_i. \tag{10}$$

Consequently, the minimum norm least squares solution $x^*_j$ of (7) is such that

$$j \in \arg\max_{i\in\{1,2,\ldots,n\}} \left(W^{-1}x^*_j(i)\right), \tag{11}$$

where $W^{-1}x^*_j(i)$ denotes the $i$'th component of the Euclidean vector $[W^{-1}x^*_j]_B \in \mathbb{R}^n$. This implies that we can almost recover the basis function $\phi_j$ from its image $K_h\phi_j$: compute the minimum norm least squares solution $x^*_j$ of (7); then $j$ is among the indexes for which $W^{-1}x^*_j$ attains its maximum. For further details, see Theorem 4.2 in [5]. We write almost because the maximum component of $[W^{-1}x^*_j]_B$ may not be unique.

Based on these findings, we defined Method I in [5] as:

$$\text{Compute } W^{-1}x_\alpha, \tag{12}$$

where $x_\alpha$ is the outcome of applying standard Tikhonov regularization (4), and the operator $W$ is defined in (9). (Assume that $\phi_j$ is a basis function with local support. Then the discussion above shows that a local source equaling $\phi_j$ can almost be recovered by Method I from its image $K_h\phi_j$.)

We will now show that we can replace "$\in$" in (11) with equality if (5) holds, i.e., that the maximum component of $[W^{-1}x^*_j]_B$ is unique.

Theorem 2.1. Assume that the basis $B$ for $X$ is orthonormal and that (5) holds.
Then the minimum norm least squares solution $x^*_j$ of (7) is such that

$$j = \arg\max_{i\in\{1,2,\ldots,n\}}\left(W^{-1}x^*_j(i)\right),$$

where $W$ is defined in (9).

Proof. Recall the expression (10) for $W^{-1}x^*_j$ and the definition (6) of the projection $P$. From the Cauchy-Schwarz inequality we know that

$$\left|\left(\frac{P\phi_j}{\|P\phi_j\|_X}, \frac{P\phi_i}{\|P\phi_i\|_X}\right)_X\right| \le 1,$$

with equality if, and only if, there is a constant $c$ such that $P\phi_i = cP\phi_j$.

Let

$$P_N: X \to N(K_h) \tag{13}$$

denote the orthogonal projection mapping elements of $X$ onto $N(K_h)$, i.e., $P_N = I - P$, where $I$ is the identity mapping. Assume that $P\phi_i = cP\phi_j$ for some $c \in \mathbb{R}$. The orthogonal decomposition $\phi_i = P\phi_i + P_N\phi_i$ yields that

$$K_h\phi_i = K_h(P\phi_i) + K_h(P_N\phi_i) = K_h(P\phi_i) = K_h(cP\phi_j) = cK_h(P\phi_j) = cK_h(P\phi_j + P_N\phi_j) = cK_h(\phi_j).$$

We have thus proved the implication

$$P\phi_i = cP\phi_j \;\Rightarrow\; K_h\phi_i = cK_h\phi_j. \tag{14}$$

Consequently, if (5) holds, then

$$\left|\left(\frac{P\phi_j}{\|P\phi_j\|_X}, \frac{P\phi_i}{\|P\phi_i\|_X}\right)_X\right| < 1 \quad \text{for } i \ne j.$$

The result now follows from (10).

This result only shows that we can recover the individual basis function $\phi_j$ from its image $K_h\phi_j$. Nevertheless, the numerical experiments in [5] indicate that Method I is also capable of identifying more general local sources from Dirichlet boundary data. We will discuss this issue in more detail in subsection 2.1.3 below.

Remark.
We mention that the opposite implication of (14) also holds: assume that $K_h\phi_i = cK_h\phi_j$. Then, see (8),

$$P\phi_i = K_h^\dagger(K_h\phi_i) = K_h^\dagger(cK_h\phi_j) = cK_h^\dagger(K_h\phi_j) = cP\phi_j,$$

which together with (14) leads to the conclusion

$$P\phi_i = cP\phi_j \;\Longleftrightarrow\; K_h\phi_i = cK_h\phi_j.$$

Since Theorem 2.1 asserts that the maximum component of $[W^{-1}x^*_j]_B$ is unique, it makes sense to use the linearity of the problem to extend Theorem 2.1 to cases involving several basis functions:

Corollary 2.1.1.
Let $\{j_1, j_2, \ldots, j_r\} \subset \{1, 2, \ldots, n\}$ be an index set and assume that (5) holds. Then the minimum norm least squares solution $x^*$ of

$$K_h x = K_h(\phi_{j_1} + \phi_{j_2} + \cdots + \phi_{j_r}) \tag{15}$$

satisfies

$$W^{-1}x^* = W^{-1}x^*_{j_1} + W^{-1}x^*_{j_2} + \cdots + W^{-1}x^*_{j_r}, \tag{16}$$

where

$$j_q = \arg\max_{i\in\{1,2,\ldots,n\}}\left(W^{-1}x^*_{j_q}(i)\right), \qquad W^{-1}x^*_{j_q} = \|P\phi_{j_q}\|_X \sum_{i=1}^n \left(\frac{P\phi_{j_q}}{\|P\phi_{j_q}\|_X}, \frac{P\phi_i}{\|P\phi_i\|_X}\right)_X \phi_i \tag{17}$$

for $q = 1, 2, \ldots, r$. Here, $x^*_{j_q}$ denotes the minimum norm least squares solution of $K_h x = K_h\phi_{j_q}$ for $q = 1, 2, \ldots, r$.

Proof. Since $x^*_{j_q} = K_h^\dagger(K_h\phi_{j_q})$, the linearity of $K_h$, $K_h^\dagger$ and $W$ implies that

$$W^{-1}x^* = W^{-1}K_h^\dagger(K_h\phi_{j_1}) + W^{-1}K_h^\dagger(K_h\phi_{j_2}) + \cdots + W^{-1}K_h^\dagger(K_h\phi_{j_r}) = W^{-1}x^*_{j_1} + W^{-1}x^*_{j_2} + \cdots + W^{-1}x^*_{j_r},$$

and the result therefore follows from Theorem 2.1 and (10).

Roughly speaking, Corollary 2.1.1 shows that $W^{-1}x^*$ can be written as a sum (16) of vectors which achieve their maxima for the correct indices. Consequently, if the index subsets associated with the significantly sized components of the Euclidean vectors

$$[W^{-1}x^*_{j_1}]_B,\ [W^{-1}x^*_{j_2}]_B,\ \ldots,\ [W^{-1}x^*_{j_r}]_B \tag{18}$$

are disjoint, then this corollary shows that we can recover all the vectors $\phi_{j_1}, \phi_{j_2}, \ldots, \phi_{j_r}$ from $K_h(\phi_{j_1} + \phi_{j_2} + \cdots + \phi_{j_r})$. This indicates that Method I in many cases should be able to identify several sources. Nevertheless, the content of the vectors (18) depends on the projection $P$, see (17) and (6), and the properties of this projection are problem dependent. Below we will explore this issue in more detail for our model problem (1)-(2).

So far we have only studied local sources consisting of a single basis function. Let us now consider a local source which is a sum of several basis functions, i.e.,

$$f = a_1\phi_{j_1} + a_2\phi_{j_2} + \cdots + a_r\phi_{j_r}, \tag{19}$$

where $a_1, a_2, \ldots, a_r$ are constants.
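The separation of several sources described by Corollary 2.1.1 can be checked on a fully transparent toy example. As an illustrative stand-in for $K_h$ (not the paper's forward operator) we take the mean-removing projection, whose nullspace is spanned by the constant vector; for this choice every quantity is computable by hand, and condition (5) can be verified directly:

```python
import numpy as np

n = 12
K = np.eye(n) - np.ones((n, n)) / n        # nullspace = span{(1,...,1)}; (5) holds here
Kdag = np.linalg.pinv(K)
P = Kdag @ K                               # projection onto N(K)^perp (equals K itself)
w = np.linalg.norm(P, axis=0)              # ||P e_i||, all equal for this K

j1, j2 = 2, 9
x_star = Kdag @ (K[:, j1] + K[:, j2])      # minimum norm solution of (15)
v = x_star / w                             # the vector [W^{-1} x*]_B
top2 = sorted(int(i) for i in np.argsort(v)[-2:])
print(top2)                                # [2, 9]: both source indices are recovered
```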
Can we roughly recover such a source from its image $K_h f$?

As in the analysis leading to Corollary 2.1.1, we find that the minimum norm least squares solution $x^*$ of $K_h x = K_h(a_1\phi_{j_1} + a_2\phi_{j_2} + \cdots + a_r\phi_{j_r})$ is such that

$$W^{-1}x^* = a_1 W^{-1}x^*_{j_1} + a_2 W^{-1}x^*_{j_2} + \cdots + a_r W^{-1}x^*_{j_r}. \tag{20}$$

Consequently, if $\phi_{j_1}, \phi_{j_2}, \ldots, \phi_{j_r}$ are basis functions with neighboring local supports, then $a_1 W^{-1}x^*_{j_1}, a_2 W^{-1}x^*_{j_2}, \ldots, a_r W^{-1}x^*_{j_r}$ will all achieve their maxima (or minima) in these neighboring supports. We thus expect that $W^{-1}x^*$ will roughly recover the composite local source (19).

If the right-hand side $b$ in (3) does not belong to the range of $K_h$, then the analysis of the potential recovery of a local source becomes even more involved. Typically, one would consider the problem $K_h x = \hat{b}$, where $\hat{b}$ represents the orthogonal projection of $b$ onto the range of $K_h$. Assuming that there exists a composite function $f$ in the form (19) such that

$$K_h f = \hat{b}, \tag{21}$$

the discussion above suggests that $W^{-1}x^*$ can yield an approximation of $f$. Here, $x^*$ is the minimum norm least squares solution of

$$K_h x = \hat{b}\ \left(= K_h(a_1\phi_{j_1} + a_2\phi_{j_2} + \cdots + a_r\phi_{j_r})\right).$$

Whether $W^{-1}x^*$ also yields an approximation of the true local source, and not only of the function $f$ satisfying (21), will definitely depend on how "close" $b$ is to the range of $K_h$ and on the ill-posed nature of (3).

In the numerical experiments section below we primarily study cases where the right-hand side $b$ in (3) does not belong to the range of $K_h$: the synthetic observation data $d$ in (1) was generated by solving the forward problem on a finer grid than was used in the inversion process.

We will now study the PDE-constrained optimization problem (1)-(2). Let us discretize the unknown source $f$ in terms of the basis functions

$$\phi_i = \frac{X_{\Omega_i}}{\|X_{\Omega_i}\|_{L^2(\Omega)}} = A^{-1/2}X_{\Omega_i}, \quad i = 1, 2, \ldots, n, \tag{22}$$

where $\Omega_1, \Omega_2, \ldots, \Omega_n$ are uniformly sized disjoint grid cells, $X_{\Omega_i}$ denotes the characteristic function of $\Omega_i$ and $A = |\Omega_1| = |\Omega_2| = \ldots = |\Omega_n|$. That is, the space $X$ associated with (3) is

$$X = F_h = \operatorname{span}\{\phi_1, \phi_2, \ldots, \phi_n\}$$

and thus consists of piecewise constant functions. (In appendix A we explain how such basis functions can also be employed when $\epsilon = 0$, i.e., when the PDE in (2) is Poisson's equation.) Throughout this paper we assume that $\Omega$ and the subdomains $\Omega_1, \Omega_2, \ldots, \Omega_n$ are such that (5) holds.

Note that the basis functions (22) are $L^2$-orthonormal and have local support. From the latter property, and the fact that $P: X \to N(K_h)^\perp$ is an orthogonal projection, it follows that we can write (10) in the form

$$W^{-1}x^*_j = \sum_{i=1}^n \|P\phi_i\|^{-1}_{L^2(\Omega)}\,(P\phi_j, P\phi_i)_{L^2(\Omega)}\,\phi_i = \sum_{i=1}^n \|P\phi_i\|^{-1}_{L^2(\Omega)}\,(P\phi_j, \phi_i)_{L^2(\Omega)}\,\phi_i = A^{-1/2}\sum_{i=1}^n \|P\phi_i\|^{-1}_{L^2(\Omega)}\left(\int_{\Omega_i}P\phi_j\,dx\right)\phi_i$$
$$= A^{-1/2}\,\|P\phi_j\|^{-1}_{L^2(\Omega)}\left(\int_{\Omega_j}P\phi_j\,dx\right)\phi_j + A^{-1/2}\sum_{i=1,\,i\ne j}^n \|P\phi_i\|^{-1}_{L^2(\Omega)}\left(\int_{\Omega_i}P\phi_j\,dx\right)\phi_i.$$

Recall that $P\phi_j \in N(K_h)^\perp \subset X = \operatorname{span}\{\phi_1, \phi_2, \ldots, \phi_n\}$, and that the functions in $X$ are piecewise constant. Consequently,

$$\int_{\Omega_i} P\phi_j\,dx = A\,P\phi_j(z_i) \quad \text{for any } z_i\in\Omega_i,$$

and we conclude that

$$W^{-1}x^*_j = A^{1/2}\,\frac{P\phi_j(z_j)}{\|P\phi_j\|_{L^2(\Omega)}}\,\phi_j + A^{1/2}\sum_{i=1,\,i\ne j}^n \frac{P\phi_j(z_i)}{\|P\phi_i\|_{L^2(\Omega)}}\,\phi_i, \tag{23}$$

where $z_1 \in \Omega_1, z_2 \in \Omega_2, \ldots, z_n \in \Omega_n$ are arbitrary points in these subdomains.

Alternatively, we can express $W^{-1}x^*_j$ in terms of the projection $P_N$ onto the nullspace $N(K_h)$, see (13).
More specifically, from (23) and the orthogonal decomposition $\phi_j = P\phi_j + P_N\phi_j$ it follows that

$$W^{-1}x^*_j = A^{1/2}\,\frac{A^{-1/2} - P_N\phi_j(z_j)}{\sqrt{1 - \|P_N\phi_j\|^2_{L^2(\Omega)}}}\,\phi_j - A^{1/2}\sum_{i=1,\,i\ne j}^n \frac{P_N\phi_j(z_i)}{\sqrt{1 - \|P_N\phi_i\|^2_{L^2(\Omega)}}}\,\phi_i. \tag{24}$$

Here we have used the facts that $\phi_j(z_j) = A^{-1/2}$ and that $\phi_j(z_i) = 0$ for $i \ne j$, see (22).

From Theorem 2.1, (23) and (24) it follows that

$$\frac{P\phi_j(z_j)}{\|P\phi_j\|_{L^2(\Omega)}} > \frac{P\phi_j(z_i)}{\|P\phi_i\|_{L^2(\Omega)}} \quad \text{for } i\ne j,$$

$$\frac{A^{-1/2} - P_N\phi_j(z_j)}{\sqrt{1 - \|P_N\phi_j\|^2_{L^2(\Omega)}}} > \frac{-P_N\phi_j(z_i)}{\sqrt{1 - \|P_N\phi_i\|^2_{L^2(\Omega)}}} \quad \text{for } i\ne j,$$

which show that the dominance of the $j$'th component of the Euclidean vector $[W^{-1}x^*_j]_B$ is determined by the projections $P\phi_j$ and $P_N\phi_j$ of $\phi_j$ onto $N(K_h)^\perp$ and $N(K_h)$, respectively. As discussed in connection with (10) and Theorem 2.1, we can recover $\phi_j$ from its image $K_h\phi_j$ by identifying the largest component of $[W^{-1}x^*_j]_B$. Furthermore, the present analysis reveals that the degree to which $W^{-1}x^*_j$ yields a "smeared out/blurred" approximation of $\phi_j$ depends on how fast $P\phi_j(z_i)$ (or $P_N\phi_j(z_i)$) decays as a function of the distance between $\Omega_i$ and $\Omega_j$. This decay also determines to what extent Method I can identify several local sources, see Corollary 2.1.1.

Motivated by the investigation presented above, we will now explore the mathematical properties of the orthogonal projections $P\phi_j$ and $P_N\phi_j$, see (6) and (13). To this end, consider the forward operator

$$K_h: F_h \to L^2(\partial\Omega), \quad f \mapsto u|_{\partial\Omega},$$

associated with (1)-(2). Here, $u$ is the solution of the following variational form of the boundary value problem (2): determine $u \in H^2(\Omega)$ such that

$$\int_\Omega(-\Delta u + \epsilon u)\,\xi\,dx = \int_\Omega f\,\xi\,dx \quad \forall\xi\in L^2(\Omega), \qquad \frac{\partial u}{\partial n} = 0 \ \text{on } \partial\Omega.$$
This rather non-standard variational form is employed for the sake of simplicity.

If we define

$$V = \left\{\psi\in H^2(\Omega) : -\Delta\psi + \epsilon\psi \in X,\ \psi = \frac{\partial\psi}{\partial n} = 0 \text{ on } \partial\Omega\right\}, \tag{25}$$

then the nullspace of $K_h$ can be characterized as follows:

$$N(K_h) = \{q = -\Delta\psi + \epsilon\psi,\ \psi\in V\}. \tag{26}$$

Any function $r \in N(K_h)^\perp$ satisfies

$$\int_\Omega r\,q\,dx = 0 \quad \forall q\in N(K_h),$$

or

$$\int_\Omega r\,(-\Delta\psi + \epsilon\psi)\,dx = 0 \quad \forall\psi\in V. \tag{27}$$

We may, in view of Green's formula/integration by parts, say that $r$ is a discrete very weak solution of

$$-\Delta r + \epsilon r = 0. \tag{28}$$

(Any member of $N(K_h)^\perp \subset X$ is piecewise constant, and (28) is thus not meaningful for such functions.) We here use the word discrete because $r \in N(K_h)^\perp \subset X$ and $X$ is finite dimensional.

Recall that $x^*_j = P\phi_j \in N(K_h)^\perp$, see (8) and (6), and we conclude that the minimum norm least squares solution $x^*_j$ of (7) is the best discrete approximation of $\phi_j$ satisfying, in a very weak sense,

$$-\Delta P\phi_j + \epsilon P\phi_j = 0. \tag{29}$$

In other words, provided that $\epsilon = 0$, $P\phi_j$ is the best discrete very weak harmonic approximation of $\phi_j$.

Having characterized $P\phi_j$, we turn our attention toward $P_N\phi_j$. Since $P_N\phi_j$ belongs to $N(K_h)$, it has a "generating" function $\tau_j \in V$, see (25) and (26):

$$P_N\phi_j = -\Delta\tau_j + \epsilon\tau_j.$$

Choosing $r = P\phi_j = \phi_j - P_N\phi_j$ in (27) yields that this "generating" function must satisfy the following discrete weak version of a fourth order PDE: find $\tau_j \in V$ such that

$$\int_\Omega(-\Delta\tau_j + \epsilon\tau_j)(-\Delta\psi + \epsilon\psi)\,dx = \int_{\Omega_j}\phi_j\,(-\Delta\psi + \epsilon\psi)\,dx \quad \forall\psi\in V, \tag{30}$$

where we have invoked the fact that $\phi_j$ has the local support $\Omega_j$, see (22). Note that, with $\epsilon = 0$, (30) roughly becomes the standard discrete (posed in $V$) weak version of the inhomogeneous biharmonic equation with homogeneous boundary conditions:

$$\Delta^2\tau_j = -\Delta\phi_j \ \text{in } \Omega, \qquad \tau_j = \frac{\partial\tau_j}{\partial n} = 0 \ \text{on } \partial\Omega. \tag{31}$$

(The function $\phi_j$, see the right-hand side of the PDE in (31), must be sufficiently differentiable and satisfy $\operatorname{supp}(\phi_j) \subset \Omega_j$ in order for (30) to be the weak version of (31) when $\epsilon = 0$. The basis functions defined in (22) do not satisfy the necessary regularity conditions because they are discontinuous.)

The solution of a second or fourth order elliptic PDE depends significantly on the size and shape of the involved domain $\Omega$. Hence, (29) and (30) show that the aforementioned decay properties of $P\phi_j$ and $P_N\phi_j$ depend on $\Omega$ and on the position of the true source relative to the boundary $\partial\Omega$. Consequently, the "sharpness" of the reconstruction/recovery $W^{-1}x^*_j$ of $\phi_j$, as well as the possibility of identifying several sources with Method I, will depend on the geometrical properties of $\Omega$ – each domain must be studied separately. This issue is explored in more detail in the numerical experiments section below.

If we apply weighted Tikhonov regularization to (3), we obtain the regularized solutions

$$z_\alpha = \arg\min_z\left\{\frac12\|K_h z - b\|^2_Y + \frac12\alpha\|Wz\|^2_X\right\}, \tag{32}$$

where, in this paper, the regularization operator $W$ is defined in (9). In [5] we also, in addition to Method I described above, introduced the following two schemes for identifying sources:

Method II.
Defining $y = Wz$, we obtain from (32)

$$y_\alpha = \arg\min_y\left\{\frac12\|K_h W^{-1}y - b\|^2_Y + \frac12\alpha\|y\|^2_X\right\}, \tag{33}$$

which is Method II. Theorem 4.3 in [5] expresses that the minimum norm least squares solution $y^*_j$ of

$$K_h W^{-1}y = K_h\phi_j$$

satisfies

$$\left\|\phi_j - \frac{y^*_j}{\|P\phi_j\|_X}\right\|_X \le \left\|\phi_j - W^{-1}x^*_j\right\|_X, \tag{34}$$

where $x^*_j$ is the minimum norm least squares solution of (7). Recall that

$$y^*_j = \lim_{\alpha\to 0} y_{j,\alpha}, \quad\text{where}\quad y_{j,\alpha} = \arg\min_y\left\{\frac12\|K_h W^{-1}y - K_h\phi_j\|^2_Y + \frac12\alpha\|y\|^2_X\right\}.$$

Consequently, (34) shows that, for small $\alpha > 0$, a scaled version of Method II can yield more accurate recoveries of the individual basis functions from their images under $K_h$ than Method I. (Method I is defined in (12).)

Method III.
This method is defined by (32), i.e., the outcome of the scheme is $z_\alpha$. Note that there is a simple connection between methods II and III:

$$z_\alpha = W^{-1}y_\alpha. \tag{35}$$

Hence, a result similar to (34), though not as strong, also holds for Method III; see [5] for further details.

Let us briefly comment on Method II's ability to localize several sources. Similar to (15) we consider

$$K_h W^{-1}y = K_h(\phi_{j_1} + \phi_{j_2} + \cdots + \phi_{j_r}),$$

which has the minimum norm least squares solution

$$y^* = (K_h W^{-1})^\dagger K_h\phi_{j_1} + (K_h W^{-1})^\dagger K_h\phi_{j_2} + \cdots + (K_h W^{-1})^\dagger K_h\phi_{j_r} = y^*_{j_1} + y^*_{j_2} + \cdots + y^*_{j_r}, \tag{36}$$

where $y^*_{j_s}$, for $s = 1, 2, \ldots, r$, represents the minimum norm least squares solution of

$$K_h W^{-1}y = K_h\phi_{j_s}.$$

Since $y^*_{j_s}$, for $s = 1, 2, \ldots, r$, satisfies an inequality in the form (34), we conclude that $y^*$ is a sum of vectors which can yield better recoveries of the individual basis functions from their images under $K_h$ than Method I. We therefore expect that Method II can separate several local sources whenever Method I can.

Invoking (35) leads to a similar type of argument for Method III's ability to identify two or more sources. We omit the details.
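The connection (35) between methods II and III is easy to verify numerically. In the sketch below a small random matrix and vector stand in for $K_h$ and $b$ (illustrative assumptions, not the paper's forward problem), the diagonal operator $W$ is assembled as in (9), and both quadratic problems are solved via their normal equations:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 6, 10
K = rng.standard_normal((m, n))            # stand-in for K_h
b = rng.standard_normal(m)
alpha = 1e-3

P = np.linalg.pinv(K) @ K
W = np.diag(np.linalg.norm(P, axis=0))     # W phi_i = ||P phi_i|| phi_i, cf. (9)
Winv = np.diag(1.0 / np.diag(W))

# Method III, cf. (32): minimize 1/2||Kz - b||^2 + 1/2*alpha*||Wz||^2
z_alpha = np.linalg.solve(K.T @ K + alpha * W.T @ W, K.T @ b)

# Method II, cf. (33): substitute y = Wz and minimize over y
Kw = K @ Winv
y_alpha = np.linalg.solve(Kw.T @ Kw + alpha * np.eye(n), Kw.T @ b)

print(np.allclose(z_alpha, Winv @ y_alpha))   # True: the identity (35)
```

Substituting $z = W^{-1}y$ into the normal equations of (32) reproduces the normal equations of (33) exactly, so the two solves agree up to floating-point error.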
Numerical experiments

We avoided inverse crimes by generating the synthetic observation data $d$ in (1) using a finer grid for the state $u$ than was employed for the computations of the inverse solutions: $h_{\text{forward}} = 0.5\,h_{\text{inverse}}$, where $h_{\text{forward}}$ and $h_{\text{inverse}}$ are the grid parameters associated with the meshes used to produce $d$ and the inverse solutions, respectively. More specifically, on the unit square we employed a 64 × 64 mesh for the forward computations and a 32 × 32 mesh for the state $u$ when solving (1)-(2). Except for the results presented in Example 4, a coarser mesh, 16 × 16, was applied for the unknown source $f$ in the numerical solution of (1)-(2).

The triangulations of the non-square geometries were obtained by "removing" grid cells from the triangulations of the associated square domains. We used the FEniCS software to generate the meshes and the matrices, and the optimization problem (1)-(2) was solved with MATLAB in terms of the associated optimality system. In all the simulations $\epsilon = 1$, and no noise was added to the observation data $d$, see (1)-(2). (Some simulations with noisy observation data are presented in [5].)
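A one-dimensional finite-difference analogue of this data-generation strategy can be sketched as follows. The grid sizes mirror the 64/32 setup above, but the cell-centred scheme, the source location and its width are illustrative assumptions, not the paper's FEniCS/MATLAB setup:

```python
import numpy as np

def neumann_op(n, eps=1.0):
    """Cell-centred finite differences for -u'' + eps*u on (0,1) with u'(0) = u'(1) = 0."""
    h = 1.0 / n
    main = np.full(n, 2.0)
    main[0] = main[-1] = 1.0                 # zero-flux closure at both ends
    L = np.diag(main) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    return L / h**2 + eps * np.eye(n)

def boundary_values(n):
    x = (np.arange(n) + 0.5) / n
    f = np.where(np.abs(x - 0.3) < 0.1, 1.0, 0.0)   # the same local source on every grid
    u = np.linalg.solve(neumann_op(n), f)
    return u[0], u[-1]

d_fine = boundary_values(64)     # synthetic observations (grid parameter h_forward)
d_coarse = boundary_values(32)   # what the inversion grid's forward map would predict
print(d_fine[0] - d_coarse[0])   # small but nonzero: the inversion cannot "commit" the crime
```

Because the fine and coarse grids resolve the source slightly differently, the observation data never lie exactly in the range of the discrete forward operator used for inversion, which is the situation studied in subsection 2.1.3.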
Example 1: L-shaped versus square geometry

Figure 1 displays the numerical results obtained by solving (1)-(2) for an L-shaped geometry and a square-shaped geometry, respectively. The true sources, shown in panels (a) and (b), are located at the same positions for both geometries.

Method I fails to separate the two sources on the square domain, but works well for the L-shaped case. On the other hand, methods II and III handle both geometries adequately, although the separation is more pronounced for the L-shaped domain. This is consistent with the mathematical result (34), which expresses that a scaled version of Method II yields more accurate $L^2$-approximations than Method I. These results illuminate the impact of the geometry on the inverse source problem and suggest that convex domains lead to harder identification tasks than non-convex regions.

In these simulations the two true local sources did not equal a single basis function $\phi_j$, but were instead defined as sums of four basis functions with neighbouring supports. Hence, for each of the two local sources, the considerations presented in subsection 2.1.3 are relevant.

Example 2: Square versus horseshoe

Figure 2 shows computations performed with a horseshoe-shaped domain and a square region. In these simulations each of the two true sources consisted of a single basis function. Hence, Corollary 2.1.1 is directly applicable. Again we observe that the source identification works better for a non-convex domain than for a convex region.

We also performed computations with partial boundary observations $d$, see Figure 3: boundary observation data was only available for the part of the boundary marked with red in panel (a). We observe that this reduces the quality of the reconstruction of the true sources, compare figures 2 and 3, and that Method I works somewhat better than methods II and III in this case.

Example 3: Rectangles, distance to the boundary
So far we have compared convex and non-convex regions. We now illuminate how the distance from the source(s) to the boundary of the domain influences the quality of the recovery, see Figure 4: the identification of the three sources improves as the distance to the boundary decreases. Also, methods II and III yield better results than Method I.

In this example each of the true local sources is composed of several basis functions with neighbouring supports, cf. subsection 2.1.3 for further details.
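The influence of the distance to the boundary can already be observed in a one-dimensional analogue of the state equation: the boundary response to a unit source decays monotonically with the source's depth. The finite-difference setup below is an illustrative sketch, not the paper's 2D computation:

```python
import numpy as np

n, eps = 64, 1.0
h = 1.0 / n
main = np.full(n, 2.0)
main[0] = main[-1] = 1.0
A = (np.diag(main) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / h**2
A += eps * np.eye(n)

G = np.linalg.inv(A)    # discrete Green's matrix: column s = state caused by a source in cell s
signal = G[0, :]        # observation at the left boundary as a function of source position
print(signal[5] > signal[16] > signal[32] > 0.0)   # True: monotone decay with depth
```

The farther the source sits from the observation boundary, the weaker (and smoother) its trace there, which is why deep sources are harder to separate.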
Example 4: A smooth local source
In examples 1-3 we considered true sources which are piecewise constant. Figure 5 shows results obtained with the smooth true source

$$f = e^{-(x_1 - 0.\ldots)^2 - (x_2 - 0.\ldots)^2}.$$

Methods I and III handle this case very well, but the outcome of Method II is not very good. The outcome of Method I is as one could anticipate from the discussion presented in subsection 2.1.3, but we do not have a good understanding of the rather poor performance of Method II for this particular problem.

Example 5: Identifying local constant sources with a known magnitude
If the magnitude of the local sources is known, we only need to recover the size and positions of the sources. We will now briefly explain how Method I can be used to handle such cases. Recall Corollary 2.1.1, which expresses that Method I in many cases can detect the index, and thereby the position, of the individual local sources. This leads to the following three-stage optimization procedure:

1. Apply Method I, i.e., compute $W^{-1}x_\alpha$, where $x_\alpha$ is the outcome of employing standard Tikhonov regularization (4), and $W$ is defined in (9).

2. Retrieve the positions $p_1, p_2, \ldots, p_m$ of all the local maxima of $W^{-1}x_\alpha$.

3. Use $p_1, p_2, \ldots, p_m$ as centers of simple geometrical objects, e.g., circles, and solve the optimization problem

$$\min_{r_1, r_2, \ldots, r_m \in \mathbb{R}_+} \frac12\|u - d\|^2_{L^2(\partial\Omega)}$$

subject to

$$-\Delta u + \epsilon u = c\sum_{i=1}^m X_{B_{r_i}(p_i)} \ \text{in } \Omega, \qquad \frac{\partial u}{\partial n} = 0 \ \text{on } \partial\Omega,$$

where $B_{r_i}(p_i) = \{x \in \Omega : \|x - p_i\| < r_i\}$, and $c$ is the known magnitude of the source(s).

Panel (c) in Figure 6 shows that this procedure can work very well: even though Method I almost fails to detect the small local source in the lower right corner of the L-shaped domain, see panel (b), the radii optimization approach handles the case very well.
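Stage 2 of the procedure only requires the local maxima of $W^{-1}x_\alpha$. A minimal one-dimensional stand-in is sketched below; a 2D implementation would compare each cell with all of its grid neighbours, and the threshold parameter is an assumption introduced here to suppress spurious small bumps:

```python
import numpy as np

def local_maxima(v, threshold=0.0):
    """Indices where v exceeds both neighbours and the threshold (1D stand-in for stage 2)."""
    peaks = []
    for i in range(len(v)):
        left = v[i - 1] if i > 0 else -np.inf
        right = v[i + 1] if i + 1 < len(v) else -np.inf
        if v[i] > max(left, right, threshold):
            peaks.append(i)
    return peaks

v = np.array([0.0, 0.1, 0.9, 0.2, 0.0, 0.3, 1.1, 0.4, 0.0])
print(local_maxima(v, threshold=0.5))    # [2, 6]
```

The returned indices would then be mapped to cell centres $p_1, \ldots, p_m$ before the radii optimization in stage 3.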
Discussion

In some cases methods II and/or III work better than Method I, see figures 1, 2 and 4. This is in contrast to the results presented in figures 3 and 5, for which Method I provides the best source identification. Hence, we cannot advise that only one of the algorithms be used. Method I should be applied to get a rough picture of the location of the sources, since this scheme can localize the position of the maximum of single sources. The outputs of methods II and III may yield less "smeared out/blurred" images of the true sources, but should only be trusted if their localization is consistent with the results obtained with Method I.

Our mathematical analysis shows that the ability to identify internal local sources from Dirichlet boundary data depends strongly on the geometry of the domain and on the position of the true sources relative to the boundary of this domain, see the analysis leading to (29) and (30). The numerical experiments exemplify this, and, in particular, source identification for non-convex domains can lead to better recovery than computations performed with convex domains of approximately the same size.
A Poisson’s equation
In many applications $\epsilon = 0$, and the PDE in (2) becomes Poisson's equation. Then the boundary value problem (2), for a given $f$, does not have a unique solution, and $f$ must satisfy the compatibility condition

$$\int_\Omega f\,dx = 0. \tag{37}$$

Note that the basis functions (22) do not satisfy this condition. In fact, it may be difficult to construct convenient $L^2$-orthonormal basis functions with local supports which obey (37). To handle this matter, one can "replace" the right-hand side $f$ in the state equation with $f - |\Omega|^{-1}\int_\Omega f\,dx$:

$$\min_{(f,u)\in F_h\times H^1(\Omega)}\left\{\frac12\|u-d\|^2_{L^2(\partial\Omega)} + \frac12\alpha\|Wf\|^2_{L^2(\Omega)}\right\} \tag{38}$$

subject to

$$-\Delta u = f - |\Omega|^{-1}\int_\Omega f\,dx \ \text{in } \Omega, \qquad \frac{\partial u}{\partial n} = 0 \ \text{on } \partial\Omega. \tag{39}$$

Note that (39) is meaningful for any $f \in L^2(\Omega)$, and it follows that we can use basis functions in the form (22) to discretize the control.

Let us also make a few remarks about the forward operator associated with (38)-(39). Assume that $(f^*, u^*)$ solves (38)-(39). Note that, if $u^*$ solves (39), so does $u^* + c$ for any constant $c$. Consider the function

$$g(c) = \frac12\|u^* + c - d\|^2_{L^2(\partial\Omega)} + \frac12\alpha\|Wf^*\|^2_{L^2(\Omega)}, \quad c \in \mathbb{R}.$$

The optimality condition $g'(0) = 0$ yields that

$$\int_{\partial\Omega} u^*\,dS = \int_{\partial\Omega} d\,dS = 0,$$

provided that the data $d$, which typically is a measured potential, has been normalized such that

$$\int_{\partial\Omega} d\,dS = 0.$$

Consequently, the forward operator associated with (38)-(39) is

$$K_h: F_h \to L^2(\partial\Omega), \quad f \mapsto u|_{\partial\Omega},$$

where $u$, for a given $f$, denotes the solution of the boundary value problem (39) which satisfies

$$\int_{\partial\Omega} u\,dS = 0.$$

We also note that, in this case, any constant function $f(x) = C$, for all $x \in \Omega$, belongs to the nullspace of $K_h$. Consequently, the minimum norm least squares solution of $K_h f = d$ will have zero integral.

References

[1] B. Abdelaziz, A. El Badia, and A. El Hajj. Direct algorithms for solving some inverse source problems in 2D elliptic equations.
Inverse Problems, 31(10):105002, 2015.

[2] A. Ben Abda, F. Ben Hassen, J. Leblond, and M. Mahjoub. Sources recovery from boundary data: A model related to electroencephalography. Mathematical and Computer Modelling, 49:2213–2223, 2009.

[3] X. Cheng, R. Gong, and W. Han. A new Kohn-Vogelius type formulation for inverse source problems. Inverse Problems and Imaging, 9(4):1051–1067, 2015.

[4] A. El Badia and T. Ha-Duong. An inverse source problem in potential analysis. Inverse Problems, 16:651–663, 2000.

[5] O. L. Elvetun and B. F. Nielsen. A regularization operator for source identification for elliptic PDEs. Accepted for publication in Inverse Problems and Imaging. Also available as arXiv e-prints, arXiv:2005.09444, May 2020.

[6] M. Hanke and W. Rundell. On rational approximation methods for inverse source problems. Inverse Problems and Imaging, 5(1):185–202, 2011.

[7] F. Hettlich and W. Rundell. Iterative methods for the reconstruction of an inverse potential problem. Inverse Problems, 12:251–266, 1996.

[8] M. Hinze, B. Hofmann, and T. N. T. Quyen. A regularization approach for an inverse source problem in elliptic systems from single Cauchy data. Numerical Functional Analysis and Optimization, 40(9):1080–1112, 2019.

[9] V. Isakov. Inverse Problems for Partial Differential Equations. Springer-Verlag, 2005.

[10] K. Kunisch and X. Pan. Estimation of interfaces from boundary measurements. SIAM J. Control Optim., 32(6):1643–1674, 1994.

[11] W. Ring. Identification of a core from boundary data. SIAM Journal on Applied Mathematics, 55(3):677–706, 1995.

[12] S. J. Song and J. G. Huang. Solving an inverse problem from bioluminescence tomography by minimizing an energy-like functional. J. Comput. Anal. Appl., 14:544–558, 2012.

[13] X. Wang, Y. Guo, D. Zhang, and H. Liu. Fourier method for recovering acoustic sources from multi-frequency far-field data. Inverse Problems, 33(3), 2017.

[14] D. Zhang, Y. Guo, J. Li, and H. Liu. Retrieval of acoustic sources from multi-frequency phaseless data. Inverse Problems, 34(9), 2018.

[15] D. Zhang, Y. Guo, J. Li, and H. Liu. Locating multiple multipolar acoustic sources using the direct sampling method. Communications in Computational Physics, 25(5):1328–1356, 2019.
Figure 1: Example 1. Comparison of the true sources and the inverse solutions for an L-shaped and a square-shaped geometry; panels (a)-(b) show the true sources, (c)-(d) Method I, (e)-(f) Method II, and (g)-(h) Method III. The regularization parameter was $\alpha = 10^{-\ldots}$.

Figure 2: Example 2. Comparison of the true sources and the inverse solutions for a horseshoe-shaped and a square-shaped geometry; panel layout as in Figure 1. The regularization parameter was $\alpha = 10^{-\ldots}$.

Figure 3: Example 2 with partial boundary observations: $d$ in (1) is only defined along the red line segments in panel (a) (and the boundary integral in (1) is adjusted accordingly). Comparison of the true sources and the inverse solutions; panels (b)-(d) show methods I-III. The regularization parameter was $\alpha = 10^{-\ldots}$.

Figure 4: Example 3. Comparison of the true sources and the inverse solutions for three different rectangles; the rows show the true sources and methods I-III. The regularization parameter was $\alpha = 10^{-\ldots}$ and $\Omega = (0,1) \times (0,\gamma)$, where $\gamma = 1, \ldots$.

Figure 5: Example 4. Comparison of the true smooth source and the inverse solutions; panels (b)-(d) show methods I-III. The regularization parameter was $\alpha = 10^{-\ldots}$.

Figure 6: Results obtained with the three-stage algorithm described in Example 5. Panel (a) depicts the true sources, panel (b) shows the inverse solution computed with Method I, where the regularization parameter was $\alpha = 10^{-6}$, and panel (c) shows the result of the radii optimization.