A novel reconstruction technique for two-dimensional Bragg scatter imaging

James W. Webber† and Eric L. Miller‡

Department of Electrical and Computer Engineering, Tufts University, 161 College Ave, Medford, MA 02155, USA
† [email protected]
‡ [email protected]

Abstract:
Here we introduce a new reconstruction technique for two-dimensional Bragg Scattering Tomography (BST), based on the Radon transform models of [arXiv preprint, arXiv:2004.10961 (2020)]. Our method uses a combination of ideas from multibang control and microlocal analysis to construct an objective function which can regularize the BST artifacts; specifically the boundary artifacts due to the sharp cutoff in sinogram space (as observed in [arXiv preprint, arXiv:2007.00208 (2020)]), and artifacts arising from approximations made in constructing the model used for inversion. We then test our algorithm on a variety of Monte Carlo (MC) simulated examples of practical interest in airport baggage screening and threat detection. The data used in our studies is generated with a novel Monte Carlo code presented here. The model, which is available from the authors upon request, captures both the Bragg scatter effects described by BST as well as beam attenuation and Compton scatter. © 2021 Optical Society of America
1. Introduction
In this paper we present a novel reconstruction technique for two-dimensional BST, based on the generalized Radon transform models of [1]. Our method employs a combination of ideas from multibang control and microlocal analysis to derive new regularization penalties, which prove effective in combating the high level of noise and systematic error in BST data (e.g. error due to beam attenuation and Compton scatter, the two primary physical mechanisms not accounted for under the BST model). The BST model arises from a scanning geometry (first introduced in [1]) which uses translating sources to inspect what is well approximated as a line in image space. See figure 1. A more detailed description of the sensing geometry is given later in section 2.1. Here BST refers to the imaging of the Bragg differential cross section function (denoted by 𝑓 in this paper) from Bragg scatter data, and is not exclusive to the sensing geometry of [1].

The literature considers a variety of reconstruction techniques and experimental methods in BST [2–7]. In [3] the authors consider snapshot, pencil beam, coded aperture imaging of crystalline powders, such as NaCl and Al. The technique, referred to as Coded Aperture X-ray Scatter Imaging (CAXSI), uses a pencil beam source to illuminate a small sample of crystalline powder. The scattered rays are then passed through a coded aperture mesh, and the resulting intensity is recorded by a linear (1-D) array of energy-resolved detectors. The use of a coded aperture offers information about the scattering directions of the incoming photons, and thus improves the problem stability. Mathematically, the coded aperture modelling is represented by a kernel weighting (𝑡 in [3, equation (3)]). The physical modelling then leads the authors to a linear inverse problem to solve for the Bragg differential cross section function 𝑓. To obtain a solution the authors apply Total Variation (TV) regularization and minimize the Poisson log-likelihood function.
An iterative, Generalized Expectation-Maximization (GEM) algorithm is then implemented to minimize the objective, with good results on experimental data.

In [4] CAXSI is considered, using an experimental setup much like that of [3] with a planar (2-D) array of detectors (the detectors are not energy-resolved). A Generalized Maximum Likelihood (GML) estimator is then applied to estimate 𝑓. The GML algorithm is a multiplicative, iterative update intended to match the BST model to the data with a mean background and Poisson statistics applied. GML is similar to GEM, except GEM applies an additional maximization step [3, equation (10)] after each iteration, which amounts to Poisson denoising with TV.

In [2, 5–7] a number of CAXSI variations are considered, for example, using fan beam sources in [2, 5]. In these works, a set of "reference" differential cross section curves is used in the reconstruction algorithm, whereby each pixel is assigned to a material from the reference library based on the normalized correlation between the reconstructed and reference cross section values at the pixel. In this paper we do not assume knowledge of a cross section reference library, and wish to keep the material properties general. We assume only that the form factor curves are 𝐿² functions.

Fig. 1. The X-ray scanner geometry. (a) (𝑥₁, 𝑥₂) (source fan-beam) plane cross-section. The source (s) opening angle is 𝛽, and two scattering locations x₁, x₂ ∈ 𝐿 are shown with scattering angle 𝜔. The scanning tunnel [−𝑤_{𝑥₁}, 𝑤_{𝑥₁}] × [−𝑤_{𝑥₂}, 𝑤_{𝑥₂}] is shown as a red rectangle. (b) (𝑥₂, 𝑥₃) plane cross-section, showing the collimation plane and detector plane; note that 𝐿 is now orthogonal to the page (parallel to 𝑥₁). The scanned object is labelled as 𝑓, with supp(𝑓) ⊂ [−𝑤_{𝑥₁}, 𝑤_{𝑥₁}] × [−𝑤_{𝑥₂}, 𝑤_{𝑥₂}] (the red rectangle in figure 1a). The detectors are collimated to planes, and the scattering events occur along lines 𝐿 = {𝑥₂ = 𝑥₂′, 𝑥₃ = 0}, for some −𝑤_{𝑥₂} < 𝑥₂′ < 𝑤_{𝑥₂}. The scatter from 𝐿 is measured by detectors d ∈ {𝑥₂ = 𝑤_{𝑥₂}, 𝑥₃ = 𝜖}, for some 𝜖 > 0.

We introduce a new regularization scheme for the recovery of small crystalline samples in the (𝑥₁, 𝑥₂) plane. The reconstruction target 𝑓 (illustrated as a sphere in figure 1) is three-dimensional, 𝑓 = 𝑓(𝑞, 𝑥₁, 𝑥₂), where 𝑞 denotes the momentum transfer of the scattering interaction (see equation (2.7)). As is done in [1], the recovery of 𝑓 is obtained slice by slice on planes orthogonal to the 𝑥₂ direction. We consider only 2-D reconstruction here (in (𝑞, 𝑥₁) space), for a variety of 𝑥₂ values, in our simulations, and leave the piecing together of the 2-D slices (to form a full 3-D image) to future work. A major part of the regularization idea we propose is to assume that 𝑓 is separable as a function of 𝑞 and (𝑥₁, 𝑥₂) (in the spatial domain). We model the spatial component of 𝑓 using an overcomplete piecewise constant dictionary. This is a standard idea in CS [8–10], although typically in CS the library (basis) functions (or atoms [10]) are chosen to span the whole imaging domain. The basis functions we use only cover the (𝑥₁, 𝑥₂) domain, with 𝑓 having more general 𝐿² properties in the 𝑞 domain. The assumptions made here regarding the piecewise constant model for 𝑓 are consistent with the BST literature [2–7], and with what is expected in practice. That is, we expect 𝑓 to be expressible as a finite sum of characteristics in the spatial domain. For example, 𝑓 could be a block of explosives in airport baggage or a sample of narcotic powder (e.g. fentanyl) in mail at customs.

Another major component of our regularization penalty is the use of ideas from multibang control [11–13].
The multibang penalty [11] is used to enforce solutions whereby the function outputs are constrained to a finite set. We aim to apply multibang ideas here to enforce the piecewise constant structure of 𝑓 in (𝑥₁, 𝑥₂) space, as discussed in the last paragraph. We do this by defining a set of binary switches 𝑎_𝑗 to either activate or deactivate a characteristic function from our library. The multibang penalties are applied to enforce binary solutions for the 𝑎_𝑗. So the finite set of solutions for the fitted 𝑎_𝑗, in our case, is {0, 1}. As our proposed objective function has smoothly defined gradients, we seek a relaxed alternative to the multibang penalty (the multibang penalty of [11] is not smooth), which we introduce later in section 2.3.

In addition to CS and multibang techniques, we also employ filtering ideas from microlocal analysis [14, 15], to suppress the boundary artifacts typically observed in BST reconstruction, e.g., as observed in [16] in reconstructions from Bragg integral data. The filtering techniques from the literature are shown in section 2.5 to offer significantly improved image quality and artifact reduction in reconstructions from Monte Carlo data.

The remainder of this paper is organized as follows. In section 2 we explain our methodology. This includes a review of the physical model and Bragg transform from [1] in section 2.2, before moving on to explain our new reconstruction method in section 2.3. The reconstruction technique we propose is formalized as an algorithm in section 2.4, and the pre-processing using microlocal filters is explained in section 2.5. In section 3 we present our results on a wide variety of Monte Carlo and analytic simulations of interest in threat detection, and give comparison to a TV regularized solution.
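To fix ideas, the binary-switch library of characteristic (interval indicator) functions described above, and the pairwise overlaps that our no-overlap penalty later acts on, can be sketched as follows. This is a minimal illustration with hypothetical grid and library values, not those used in our simulations.

```python
import numpy as np

# Sketch of the piecewise constant dictionary of characteristic functions.
# Grid and library parameters below are illustrative, not the paper's values.
m = 600                                   # number of spatial samples
x = np.linspace(-200.0, 200.0, m)         # spatial grid (mm)
centers = np.arange(-180.0, 181.0, 20.0)  # hypothetical characteristic centers
widths = np.array([10.0, 20.0, 30.0])     # hypothetical half-widths (mm)

# Each column z_j of Z is the indicator of the interval [c - w, c + w];
# the binary switch a_j activates or deactivates column j.
Z = np.stack([(np.abs(x - c) <= w).astype(float)
              for c in centers for w in widths], axis=1)

# Gram matrix G = Z^T Z: G_ij is nonzero exactly when characteristics i and j
# overlap, which is what the overlap penalty (section 2.3) is built on.
G = Z.T @ Z
```

Activated, non-overlapping columns of Z then tile the support of 𝑓 in the spatial variable, which is the structure the multibang and overlap penalties enforce.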
2. Methodology
In this section we introduce our reconstruction technique and explain the filtering techniques from microlocal analysis used for data pre-processing. First we review the sensing geometry and Bragg transform of [1], and introduce some notation.
2.1. The sensing geometry

The scanner of figure 1 is equipped with linear detector collimation technology, which we will refer to as "Venetian blind" type collimation. The scanned object 𝑓 travels through the scanning tunnel (the red rectangle in figure 1a) in the 𝑥₃ direction on a conveyor belt, and is illuminated by a line of X-ray sources, located opposite a plane of detectors. The scanner sources (with coordinate s) are fixed and switched along {𝑥₂ = −𝑤_{𝑥₂}, 𝑥₃ = 0}, and are assumed to be polychromatic 2-D fan-beam (in the (𝑥₁, 𝑥₂) plane) with opening angle 𝛽. The detectors (with coordinate d) are assumed to be energy-resolved and lie on the {𝑥₂ = 𝑤_{𝑥₂}} plane, with small (relative to the scanning tunnel size) offset 𝜖 in the 𝑥₃ direction. The detectors are collimated to record photons which scatter on planes in ℝ³, and the planes of collimation are orientated to intersect the source (𝑥₁, 𝑥₂) plane along horizontal lines (parallel to 𝑥₁). Hence the photon arrivals measured by the scanner detectors are scattered from horizontal lines embedded in the (𝑥₁, 𝑥₂) plane. An example 𝜖 is shown in figure 1b, which maps to the line 𝐿 = {𝑥₂ = 0, 𝑥₃ = 0} at the half way point.

2.2. The Bragg transform

Let I = [−𝑤_{𝑥₂}, 𝑤_{𝑥₂}], let 𝔈 = [𝐸_𝑚, 𝐸_𝑀] be the energy range, and let Φ : 𝑥₂ → 𝜖 be a diffeomorphic map from the scanned line profile 𝑥₂ to the detector array position 𝜖. Let x = (𝑥₁, 𝑥₂). Then the Bragg transform 𝔅_𝑎 : 𝐿²(𝔈 × [−𝑤_{𝑥₁}, 𝑤_{𝑥₁}] × I) → 𝐿²((0, ∞) × [−𝑤_{𝑥₁}, 𝑤_{𝑥₁}] × Φ(I)) defines a mapping from the target 𝑓 to the Bragg scatter measured by the scanner of figure 1 [1, page 7]

𝔅_𝑎 𝑓(𝐸, 𝑠, 𝑑, Φ(𝑥₂)) = ∫_ℝ 𝜒_{[−𝑤,𝑤]}(𝑥₁ − 𝑠) 𝐼₀(𝐸, x) 𝑃(𝜃(d, s, x)) dΩ_{x,d} × 𝑓((𝐸/(ℎ𝑐)) sin 𝜃(d, s, x), x) d𝑥₁,    (2.1)

where s = (𝑠, −𝑤_{𝑥₂}, 0), d = (𝑑, 𝑤_{𝑥₂}, Φ(𝑥₂)), 𝜒_𝑆 denotes the characteristic function on a set 𝑆, and 𝑓(𝑞, x) = 𝑛_𝑐(x) 𝐹(𝑞, x) is the number of cells per unit volume (𝑛_𝑐) multiplied by the Bragg differential cross section (𝐹).
Here ℎ is Planck's constant and 𝑐 is the speed of light in a vacuum.

We consider the recovery of the 2-D functions 𝑓(·, ·, 𝑥₂) from 𝔅_𝑎𝑓(·, ·, ·, Φ(𝑥₂)), for each 𝑥₂ ∈ I. That is, we consider the slice-by-slice reconstruction of 𝑓 from the 4-D data 𝔅_𝑎𝑓. We focus exclusively on 2-D reconstruction here. This is to say that we do not consider the piecing together of the 2-D slices to form a full 3-D image. This is left to future work. With this in mind we adopt the short-hand notation 𝑓(𝑞, 𝑥₁) = 𝑓(·, ·, 𝑥₂), for some fixed 𝑥₂ ∈ I.

The remaining terms are defined as follows. The source half-width 𝑤 is determined by the source opening angle 𝛽 (see figure 1a)

𝑤(𝑥₂) = (𝑤_{𝑥₂} + 𝑥₂) tan 𝛽,    (2.2)

and the solid angle subtended by the detector is

dΩ_{x,d} = 𝐷_𝐴 × ((x, 0) − d) · (0, −1, 0)ᵀ / |(x, 0) − d|³,    (2.3)

where 𝐷_𝐴 is the detector area, and the Bragg angle (𝜃 = 𝜔/2, half the scattering angle) is determined by

cos 2𝜃(d, s, x) = ((x, 0) − s) · (d − (x, 0)) / (|(x, 0) − s| |d − (x, 0)|).    (2.4)

The polarisation factor 𝑃(𝜃) is given by

𝑃(𝜃) = (1 + cos² 2𝜃)/2,    (2.5)

and the incident intensity is

𝐼₀(𝐸, x) = 𝐼₀(𝐸)/|s − x|² = 𝐼₀(𝐸)/((𝑥₁ − 𝑠)² + (𝑥₂ + 𝑤_{𝑥₂})²),    (2.6)

where 𝐼₀(𝐸) > 0 is the source spectrum. The momentum transfer is

𝑞 = (𝐸/(ℎ𝑐)) sin 𝜃,    (2.7)

where 𝐸 is given in units of kilo-electron-volts (keV) and 𝑞 is given in units of inverse Angstroms (Å⁻¹). So ℎ𝑐 is the conversion factor from Å⁻¹ to keV. Equation (2.7) is derived from the Bragg equation [17]

ℎ𝑐/𝐸 = 𝜆 = 2𝑑_𝐻 sin 𝜃,    (2.8)

where 𝜆 is the photon wavelength, and 𝑑_𝐻 is the spacing between the reflection planes within the crystal. For example, for cubic structures

𝑑_𝐻 = 𝑎/√(ℎ² + 𝑘² + 𝑙²),    (2.9)

where 𝐻 = (ℎ, 𝑘, 𝑙) is the Miller index and 𝑎 is the uniform lattice spacing of the crystal.

The operator 𝔅_𝑎 is the same as considered in [1, equation (3.13)], but with the attenuation terms 𝐴₁ and 𝐴₂ removed from the modelling (using the notation of [1]). We do this as later in our simulations we assume no prior knowledge of the attenuation map. Further, as discussed in [1], the neglect of attenuation effects in the modelling is needed to prove linear invertibility.
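To make (2.4) and (2.7) concrete, the following is a minimal numerical sketch in Python. The source, detector and scatter positions are hypothetical, chosen only to illustrate the angle and momentum transfer computations; the value of ℎ𝑐 ≈ 12.4 keV·Å is the standard conversion factor.

```python
import numpy as np

HC_KEV_ANGSTROM = 12.398  # hc in keV·Å, the conversion factor in (2.7)

def bragg_angle(s, d, x):
    """Bragg angle theta = omega/2, computed via cos(2*theta) as in (2.4).

    s, d, x are 3-D positions of the source, detector and scatter site.
    """
    v_in, v_out = x - s, d - x
    cos_2theta = v_in.dot(v_out) / (np.linalg.norm(v_in) * np.linalg.norm(v_out))
    return 0.5 * np.arccos(cos_2theta)

def momentum_transfer(E_keV, theta):
    """q = (E / hc) * sin(theta), in inverse Angstroms, as in (2.7)."""
    return (E_keV / HC_KEV_ANGSTROM) * np.sin(theta)

# Hypothetical geometry: source on the near wall, detector on the far wall,
# scattering site at the tunnel center (units of mm).
s = np.array([-100.0, -300.0, 0.0])
d = np.array([50.0, 300.0, 10.0])
x = np.array([0.0, 0.0, 0.0])
theta = bragg_angle(s, d, x)
q = momentum_transfer(25.0, theta)  # a 25 keV photon
```

Small scattering angles thus probe small 𝑞, which is why the upper energy limit 𝐸_𝑀 constrains the largest recoverable momentum transfer.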
Fig. 2. Weighted Bragg curve examples for varying 𝐸, 𝑠 and 𝑑, with 𝑥₂ = 0, shown as rows of the discretized operator 𝐴 reshaped into 2-D images in (𝑞, 𝑥₁) space. The curves on the top row are chosen so that 𝑠 = 𝑑, as is considered in [1]. We see that the curves on the top row have the same shape as those shown in [1, Figure 7].

Throughout this paper, 𝑓(𝑞, 𝑥₁) (when discretized) will be represented as an 𝑛 × 𝑚 image, with 𝑛 the number of 𝑞 samples, and 𝑚 the number of 𝑥₁ samples. Let us fix 𝑥₂ ∈ I, and let 𝐴 ∈ ℝ^{𝑝×(𝑚𝑛)} denote the discretized form of the linear operator 𝔅_𝑎. See figure 2 for an illustration of a discretized Bragg operator in (𝑞, 𝑥₁) space.

2.3. The reconstruction method

The regularization method we propose is derived from a set of a-priori assumptions regarding the target function 𝑓. We assume that 𝑓 is of the form

𝑓(𝑞, 𝑥₁) = Σ_{𝑗=1}^{𝑙} 𝑎_𝑗 𝑓_𝑗(𝑞) 𝜒_𝑗(𝑥₁),    (2.10)

where 𝑓_𝑗 ∈ 𝐿²(𝔈), and 𝜒_𝑗 = 𝜒_{𝐼_𝑗} with 𝐼_𝑗 = [−𝑤_𝑗 + 𝑥_𝑗^𝑐, 𝑤_𝑗 + 𝑥_𝑗^𝑐] an interval with width 𝑤_𝑗 and center 𝑥_𝑗^𝑐. The 𝑓_𝑗 have the form of a delta-comb (see [1, equation (3.10)]). See figure 5 for some example 𝑓_𝑗 curves. As discussed in the introduction, the form (2.10) for 𝑓 is consistent with the BST literature and with what is expected in practice, and thus it is reasonable to assume that 𝑓 can be expressed by the expansion (2.10).

We introduce a library (finite set) of characteristic functions 𝜒₁, …, 𝜒_𝑙 from which 𝑓 can be formed. The 𝑎_𝑗 ∈ {0, 1} act as binary switches to either activate or deactivate characteristic 𝑗 from the library. The 𝜒_𝑗 library is chosen to comprehensively cover the support of 𝑓 in 𝑥₁, but to also be restrictive enough to sufficiently regularize the solution. That is, we choose the lowest cardinality set of 𝑥_𝑗^𝑐, 𝑤_𝑗 such that the characteristic centers and widths of interest are accurately represented. The 𝑥_𝑗^𝑐, 𝑤_𝑗 set used later in section 3 in our simulations is shown to cover the support of 𝑓 in 𝑥₁ sufficiently, with good results, for the most part, even on phantoms which are comprised of characteristics lying outside of the chosen library.

Let z_𝑗 denote the vectorized form of 𝜒_𝑗 and let 𝐶_𝑗 = (z_{𝑗1} 𝐼_𝑛, …, z_{𝑗𝑚} 𝐼_𝑛)ᵀ, where z_{𝑗𝑖} is the 𝑖th entry of z_𝑗 and 𝐼_𝑛 is the 𝑛 × 𝑛 identity matrix. Then we define 𝐴_𝑗 = 𝐴𝐶_𝑗 as the restriction of 𝐴 to characteristic 𝜒_𝑗. Let Z = (z₁, …, z_𝑙) be the matrix with the z_𝑗 as columns. We define the Gram matrix

𝐺 = ZᵀZ = (z₁, …, z_𝑙)ᵀ(z₁, …, z_𝑙).    (2.11)

Then 𝐺 is such that 𝐺_{𝑖𝑗} ≠ 0 if and only if 𝜒_𝑖 and 𝜒_𝑗 intersect, and 𝐺_{𝑖𝑗} = 0 otherwise. We define the objective function

C(a, 𝑌) = Σ_{𝑘=1}^{𝑝} [ (Σ_{𝑗=1}^{𝑙} 𝑎_𝑗 𝐴_𝑗 y_𝑗)_𝑘 − 𝑏_𝑘 log (Σ_{𝑗=1}^{𝑙} 𝑎_𝑗 𝐴_𝑗 y_𝑗)_𝑘 ] + 𝜆 Σ_{𝑗=1}^{𝑙} ‖y_𝑗‖₁ + 𝛼 Σ_{𝑗=1}^{𝑙} 𝑎_𝑗(1 − 𝑎_𝑗) + 𝛾 Σ_{𝑖<𝑗} 𝐺_{𝑖𝑗} 𝑎_𝑖 𝑎_𝑗,    (2.12)

where a = (𝑎₁, …, 𝑎_𝑙), 𝑎_𝑗 ∈ [0, 1], 𝑌 = (y₁, …, y_𝑙) ∈ ℝ₊^{𝑛×𝑙} and y_𝑗 is the discretized form of 𝑓_𝑗. The negative Poisson log-likelihood function in the first term of (2.12) is included since we expect the photon arrivals to follow a Poisson noise model, as is, for example, used in the BST literature [2–7], and also in Positron Emission Tomography (PET) [18, page 5]. Here the notation (b)_𝑘 = 𝑏_𝑘 of (2.12) denotes the 𝑘th entry of b. The penalty term

MB(a) = Σ_{𝑗=1}^{𝑙} 𝑎_𝑗(1 − 𝑎_𝑗)    (2.13)

is included to enforce binary solutions for a. The idea follows a similar intuition to that of multibang control [11–13], where the authors seek solutions with a finite set of values. The multibang penalty [11] is defined (using the notation of [11]) as

MB(𝑢) = ∫_Ω Π_{𝑗=1}^{𝑑} |𝑢(𝑥) − 𝑢_𝑗| d𝑥,    (2.14)

where Ω ⊂ ℝ^𝑛 is the region of interest and {𝑢₁, …, 𝑢_𝑑} is the finite set of solutions considered. In our case {𝑢₁, 𝑢₂} = {0, 1} and 𝑑 = 2. The objectives of [11–13] are then minimized using a semismooth Newton approach, as proposed in [11]. As the sum of the remaining terms of (2.12) (i.e. with 𝛼 MB(a) removed from the summation) has a trivially computable gradient, we seek a multibang type penalty with smooth gradients so that the solution to (2.12) can be obtained using the quasi-Newton methods of [19]. With this in mind we introduce the "relaxed" multibang regularizer MB here, with smoothing parameter 𝛼 (increasing 𝛼 more harshly enforces binary solutions and vice-versa). The term

GM(a) = Σ_{𝑖<𝑗} 𝐺_{𝑖𝑗} (aaᵀ)_{𝑖𝑗} = Σ_{𝑖<𝑗} 𝐺_{𝑖𝑗} 𝑎_𝑖 𝑎_𝑗    (2.15)

is included as a soft constraint in (2.12) to enforce no overlap between the activated characteristics. We do this as two objects cannot occupy the same space (i.e. 𝑓 is well-defined). In the simulations conducted in section 3, 𝛾 is set to some large value (orders of magnitude greater than 𝛼 and 𝜆) to ensure there is no overlap in the 𝜒_𝑗. Finally, the 𝐿₁ regularization penalty (with parameter 𝜆) is included to enforce sparsity in the 𝑞 domain, as we expect the y_𝑗 to be sparse given the delta-comb structure of the 𝑓_𝑗 for crystalline material (see figure 5).

To minimize C we use the L-BFGS-B method of [19] with box constraints, setting non-negativity constraints on the y_𝑗 and constraining 𝑎_𝑗 ∈ [0, 1]. A solution is obtained by first fixing a, and solving (2.12) for y over a number of inner iterates 𝑛_in, then fixing the y output and solving for a over 𝑛_in iterates. The entire process is then repeated for 𝑛_out outer iterations until convergence is reached. A more detailed explanation of the reconstruction algorithm is given below.

2.4. The reconstruction algorithm

Initialize 𝑏, 𝛼, 𝜆, 𝛾, 𝑛_in and 𝑛_out. Initialize a, y, and the characteristic library Z = (z₁, …, z_𝑙). Define 𝐺 = ZᵀZ. Then for 𝑛_outiter = 1, …, 𝑛_out do:

Stage 1:
• Fix a and define A = [𝑎₁𝐴₁, …, 𝑎_𝑙𝐴_𝑙], F(y) = Σ_𝑘 (Ay)_𝑘 − 𝑏_𝑘 log(Ay)_𝑘 + 𝜆‖y‖₁. Calculate the gradients ∇_y F.
• Run 𝑛_in iterations of L-BFGS-B with y, F and ∇_y F as input, constraining the solution y* ∈ [0, ∞)^{𝑛×𝑙}.
• Set y = y*.

Stage 2:
• Fix y and define Y = [𝐴₁y₁, …, 𝐴_𝑙y_𝑙], G(a) = Σ_𝑘 (Ya)_𝑘 − 𝑏_𝑘 log(Ya)_𝑘 + 𝛼 Σ_{𝑗=1}^{𝑙} 𝑎_𝑗(1 − 𝑎_𝑗) + 𝛾 Σ_{𝑖<𝑗} 𝐺_{𝑖𝑗} 𝑎_𝑖 𝑎_𝑗.
• Calculate the gradients ∇_a G.
• Run 𝑛_in iterations of L-BFGS-B with a, G and ∇_a G as input, constraining the solution a* ∈ [0, 1]^𝑙.
• Set a = a*.

Equivalently, initialize the input parameters, then repeat stages 1 and 2 above 𝑛_out times until convergence is reached. Note that in stage 1, since the entries of y are constrained to be non-negative, the gradient of ‖y‖₁ is defined and smooth at y = 0, i.e., ∇_y ‖y‖₁ = (1, …, 1)ᵀ (of length 𝑛 × 𝑙) on [0, ∞)^{𝑛×𝑙}.

2.5. Microlocal filtering

Here we discuss the idea to apply filtering techniques from microlocal analysis to the Bragg data, with the aim to suppress boundary type artifacts in the reconstruction, e.g., as are discovered in [16] in reconstructions from Bragg curve integral data.

In the literature on microlocal analysis in 2-D limited data X-ray CT [14, 15], the authors make use of smoothing filters to reduce streaking artifacts which appear along straight lines at the boundary of the data set. Let 𝑅 denote the classical Radon transform and let 𝑆 ⊂ 𝑆¹ × ℝ be the subset of sinogram space for which 𝑅𝑓 ∈ 𝐿²(𝑆¹ × ℝ) is known, where 𝑓 is the reconstruction target. Then the reconstruction in limited data CT is typically obtained by direct filtered backprojection

𝑓̃ = 𝑅* Λ 𝜒_𝑆 𝑅𝑓,    (2.16)

where Λ is the standard ramp filter [20, chapter 3]. That is, the missing data is set to zero and then the inverse Radon transform is applied to recover 𝑓̃ ≈ 𝑓. Using a multiplicative filter with such a sharp cutoff as 𝜒_𝑆 has been shown to produce heavy streaking artifacts in the reconstruction, which appear along lines corresponding to data points at the boundary of (𝑆¹ × ℝ) \ 𝑆 (i.e. where 𝑅𝑓 is not known). In [14, 15] it is proposed to suppress such artifacts by replacing 𝜒_𝑆 in (2.16) with 𝜓 ∈ 𝐶^∞(𝑆¹ × ℝ), such that 𝜓(𝑠, 𝜃) = 0 for (𝑠, 𝜃) ∈ (𝑆¹ × ℝ) \ 𝑆 and 𝜓(𝑠, 𝜃) = 1 on the interior of 𝑆 (away from the boundary), with 𝜓 smoothly tending to zero near the boundary of 𝑆. We use a similar idea here for data pre-processing.

In our case the Bragg data 𝔅_𝑎𝑓 ∈ 𝐿²([𝐸_𝑚, 𝐸_𝑀] × Ω² × Φ(I)), where Ω = [−𝑤_{𝑥₁}, 𝑤_{𝑥₁}]. Thus, the boundary of the sinogram in our case is that of a 3-D cube in energy, source position and detector position, for each 𝑥₂ considered.
The full data consists of all (𝐸, 𝑠, 𝑑) ∈ ℝ₊ × ℝ². Due to the finite scanner width and limitations on the energy range, however, we consider the limited data on [𝐸_𝑚, 𝐸_𝑀] × Ω². To deal with the cutoff at the boundary of [𝐸_𝑚, 𝐸_𝑀] × Ω² we construct a filtering function 𝜓 ∈ 𝐶^∞([𝐸_𝑚, 𝐸_𝑀] × ℝ² × Φ(I)) which goes to zero smoothly in the energy variable 𝐸, as 𝐸 → 𝐸_𝑀. In the remaining variables 𝑠 and 𝑑, we see a natural smoothing effect in the data towards zero for 𝑠, 𝑑 near the 𝑥₁ limit of the portal scanner (i.e. at 𝑥₁ = ±𝑤_{𝑥₁}) due to solid angle effects and the reduction in source intensity with increasing 𝑥₁ (see (2.6)). That is, with a wide enough scanner (or large enough 𝑤_{𝑥₁}), the detectors with 𝑑 → ±𝑤_{𝑥₁} have a more restrictive view of the scatter than those with 𝑑 ≈ 0, and thus measure less scatter due to lower solid angle. The source projections with 𝑠 → ±𝑤_{𝑥₁} produce fewer counts due to reduced illumination of the object (i.e. at the scanner edge much of the source fan-beam does not intersect the scanning region), and decreased input intensity at further distances from the source. Also, as 𝐸 → 𝐸_𝑚, we see a natural smoothing of the signal towards zero due to increased attenuation and self-absorption at low 𝐸. The self-absorbed photons are those which are absorbed by a material (due to the photoelectric effect) after being scattered initially from the same material. While the natural smoothing does not reach zero exactly at the boundary (e.g. at 𝑠, 𝑑 = ±𝑤_{𝑥₁}), we found the smoothing sufficient to significantly reduce the presence of boundary artifacts in the reconstruction. Therefore, we require no additional smoothing of the data in the 𝑠 and 𝑑 variables, nor in 𝐸 as 𝐸 → 𝐸_𝑚.

With this in mind, we define 𝜓 as the third order polynomial in 𝐸

𝜓(𝐸, 𝑠, 𝑑, 𝜖) = −2((2/𝐸_𝑀)(𝐸_𝑀 − 𝐸))³ + 3((2/𝐸_𝑀)(𝐸_𝑀 − 𝐸))²    (2.17)

for 𝐸 ∈ [𝐸_𝑀/2, 𝐸_𝑀], and 𝜓(𝐸) = 1 for 𝐸 ∈ [𝐸_𝑚, 𝐸_𝑀/2]. See figure 3 for a plot of 𝜓. Here we assume 𝐸_𝑚 < 𝐸_𝑀/2, and this assumption is satisfied in the simulations conducted in section 3. We choose to smooth 50% of the data here (i.e. for 𝐸 ∈ [𝐸_𝑀/2, 𝐸_𝑀]), as 50% sinogram smoothing proved effective as a cutoff in [14, 15]. Finally, to obtain our solution, we discretize 𝜓𝔅_𝑎𝑓 into a data vector b, and use b as input to Algorithm 2.4.

Fig. 3. Plot of 𝜓(𝐸), as in equation (2.17).

To show the effectiveness of this idea, and how it works, refer to figure 4. Figure 4a shows a plot of Monte Carlo data 𝔅_𝑎𝑓 of the scatter measured by the portal scanner from NaCl and Carbon (diamond structure) spheres. In this case the scattering line is 𝐿 = {𝑥₂ = 0, 𝑥₃ = 0}. Figure 4c shows the same data filtered by 𝜓.
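The cutoff (2.17) is a standard cubic "smoothstep" in 𝐸, and is straightforward to implement. A minimal sketch, assuming the 1-29 keV range used later in section 3:

```python
import numpy as np

def psi(E, E_m, E_M):
    """Smooth energy cutoff of (2.17): equal to 1 on [E_m, E_M/2], and a cubic
    decaying to 0 on [E_M/2, E_M], with vanishing derivative at both ends."""
    E = np.asarray(E, dtype=float)
    t = np.clip(2.0 * (E_M - E) / E_M, 0.0, 1.0)  # t=1 at E=E_M/2, t=0 at E=E_M
    out = 3.0 * t**2 - 2.0 * t**3                 # -2t^3 + 3t^2, as in (2.17)
    return np.where(E <= E_M / 2.0, 1.0, out)

# Energy bins as in section 3 (1 keV bins from 1 to 29 keV); the filtered data
# is psi(E) times the Bragg data, applied energy-bin by energy-bin.
E = np.arange(1, 30, dtype=float)
w = psi(E, E_m=1.0, E_M=29.0)
```

Multiplying the data cube by these weights, bin by bin, reproduces the smoothed sinogram cutoff of figure 4c.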
To clarify, the plots in figures 4a and 4c are the vectorized forms of the full 3-D data cube (in (𝐸, 𝑠, 𝑑)), where each energy bin on the plot contains the corresponding vectorized 2-D datasets in (𝑠, 𝑑). The 𝑠 and 𝑑 positions used are specified later in section 3.1. We present the plots in this way purely to show the effects of the 𝜓 filtering. We can see that the plot of figure 4c is that of figure 4a point-multiplied by 𝜓, as in figure 3.

The reconstructions of 𝑓(𝑞, 𝑥₁, 0) corresponding to figures 4a and 4c are shown in figures 4b and 4d, using 𝔅_𝑎𝑓 and 𝜓𝔅_𝑎𝑓 as input to Algorithm 2.4, respectively; we explain the imaging parameters in more detail in section 3. This example is included purely to illustrate the effectiveness of microlocal filters for our problem. In figure 4b we see significant image artifacts, which roughly follow the shape of the Bragg integration curves of figure 2, in particular those along the top row. The curves along the top row of figure 2 correspond to the case when 𝐸 = 𝐸_𝑀, that is, at the sinogram boundary in 𝐸. Thus we see artifacts in the reconstruction due to the sharp cutoff in the sinogram at 𝐸 = 𝐸_𝑀. When the cutoff is smoothed by 𝜓, using 𝜓𝔅_𝑎𝑓 as input to Algorithm 2.4 in figure 4d, we notice a significant reduction in the artifacts, and a marked improvement in image quality.

Fig. 4. Microlocal filtering examples: (a) 𝔅_𝑎𝑓; (b) 𝔅_𝑎𝑓 reconstruction; (c) 𝜓𝔅_𝑎𝑓; (d) 𝜓𝔅_𝑎𝑓 reconstruction.
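The alternating scheme of Algorithm 2.4 can be sketched at toy scale as follows. This is an illustrative stand-in, not our implementation: the dimensions, operators and data are synthetic, and gradients are left to SciPy's finite differences here, whereas the algorithm above supplies the analytic gradients ∇_y F and ∇_a G.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only): n q-samples, l characteristics, p data.
n, l, p = 20, 4, 60
A_j = [rng.uniform(0.1, 1.0, (p, n)) for _ in range(l)]  # stand-ins for A C_j
G = np.eye(l)                                            # no overlapping chi_j here
b = rng.poisson(A_j[0] @ np.abs(rng.normal(size=n)) + 5.0).astype(float)

alpha, lam, gamma, eps = 1.0, 0.1, 10.0, 1e-9

def forward(a, Y):
    """Poisson mean: sum_j a_j * A_j y_j (eps avoids log(0))."""
    return sum(a[j] * (A_j[j] @ Y[:, j]) for j in range(l)) + eps

def stage1(a, Y):
    """Fix a; minimize the Poisson NLL + L1 term over non-negative Y."""
    def F(yflat):
        Y_ = yflat.reshape(n, l)
        mu = forward(a, Y_)
        return np.sum(mu - b * np.log(mu)) + lam * np.sum(Y_)  # L1 on Y >= 0
    res = minimize(F, Y.ravel(), method="L-BFGS-B",
                   bounds=[(0.0, None)] * (n * l), options={"maxiter": 25})
    return res.x.reshape(n, l)

def stage2(a, Y):
    """Fix Y; minimize NLL + relaxed multibang + overlap terms over a in [0,1]."""
    def Gfun(a_):
        mu = forward(a_, Y)
        mb = alpha * np.sum(a_ * (1.0 - a_))                 # relaxed multibang
        ov = gamma * np.sum(np.triu(G, 1) * np.outer(a_, a_))  # overlap penalty
        return np.sum(mu - b * np.log(mu)) + mb + ov
    res = minimize(Gfun, a, method="L-BFGS-B",
                   bounds=[(0.0, 1.0)] * l, options={"maxiter": 25})
    return res.x

a, Y = np.full(l, 0.5), np.full((n, l), 0.5)
for _ in range(3):  # outer iterations
    Y = stage1(a, Y)
    a = stage2(a, Y)
```

The box constraints are handled directly by L-BFGS-B, so the smooth relaxed multibang term, rather than a non-smooth penalty, is all that is needed to push the 𝑎_𝑗 towards {0, 1}.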
3. Testing and results
In this section we perform analytic and Monte Carlo data testing of Algorithm 2.4 in a variety of imaging scenarios of interest in airport baggage screening and threat detection. First we discuss the imaging specifics (i.e. source/detector positions, source energies etc.) and then give simulated reconstructions from analytic and Monte Carlo data.
Let us consider the scanning geometry depicted in figure 1. Throughout the simulations conducted in this sectionwe convert to a more practical measurement system, using units of millimeters (mm), and set 𝑤 𝑥 = 𝑤 𝑥 = Φ is given explicitly by the linear map Φ ( 𝑥 ) = ( − 𝑥 ) , (3.1)with Φ (− 𝑤 𝑥 ) = Φ ( 𝑤 𝑥 ) = Φ ( ) = 𝑤 𝑥 , 𝑤 𝑥 , and Φ are based on preliminary design specifics for the scanner of figure 1. For this study we use 31 source positions 𝑠 ∈ {− + ( 𝑗 − ) : 1 ≤ 𝑗 ≤ } mm spaced at 20mm intervals, with source opening angle 𝛽 = ◦ . Thesources are polychromatic (e.g. an X-ray tube) fan-beam, and we consider 29 spectrum energies 𝐸 ∈ { . . . , 29 } keV,with energy bin width 1keV. That is, the source spectrum is modeled as the sum of 29 delta distributions with centers 𝐸 ∈ { . . . , 29 } keV. A more accurate model would employ a fully continuous spectra and take averages over each energybin. However, we leave the considerations of such effects to future work, and focus on the errors due to attenuation andCompton scatter (which are far more significant sources of error than energy bin averaging) here specifically.As discussed in section 2.3, we aim to recover 𝑓 ( 𝑞 , 𝑥 ) on the range [
0, 2 ] − × [− ] mm. The maximum9nergy 𝐸 𝑀 = 𝑞 in the reconstruction space, namely 𝑞 𝑀 =
2. That is 𝐸 𝑀 = ( ℎ𝑐 ) 𝑞 𝑀 sin (cid:0) 𝜔 𝑀 (cid:1) , (see equation (2.7))where 𝜔 𝑀 = ◦ is an upper bound on the scattering angles possible with the portal geometry of figure 1.The input distribution 𝐼 ( 𝐸 ) ∝ (cid:205) 𝑗 = 𝛿 ( 𝐸 − 𝑗 ) is chosen to be uniform in this study. This choice is well founded since,given the separability of 𝐼 ( 𝐸 , 𝑥 ) (equation (2.6)), we can divide through by 𝐼 ( 𝐸 ) > 𝑑 ∈ {− + ( 𝑗 − ) : 1 ≤ 𝑗 ≤ } mmspaced at 1mm intervals. A 1mm detector pitch is realistic given the pixel sizes of the energy sensitive detectors on themarket [21] (the detectors in that paper have a 250 𝜇 m pitch, and energy resolution 0.6keV). The 𝜖 coordinate of thedetectors varies with the chosen 𝑥 ∈ [− ] mm, and is determined by 𝜖 = Φ ( 𝑥 ) (equation 3.1). The detectors areassumed to be energy-resolved with energy resolution 1keV, so we are able to distinguish between all 29 energies in ourrange 𝐸 ∈ [
1, 29 ] keV. The number of source positions (31) is low here as we anticipate longer scanning times per sourceprojection so as to allow for sufficient photon counts. The increased number of detector positions (600) and energiesbins (29) are at no detriment to the scan time. Note that due to the delta-comb nature of 𝐼 ( 𝐸 ) , there is no overlap in theenergy bins. In total, the number of data points (or the number of rows of 𝐴 ) is 𝑝 = × × = 𝑓 as a 750 ×
600 (high resolution) image, with 𝑛 = 𝑞 samples in { 𝑞 = ( 𝑗 − ) : 1 ≤ 𝑗 ≤ } − and 𝑚 = 𝑥 samples in 𝑥 ∈ {− + ( 𝑗 − ) : 1 ≤ 𝑗 ≤ } mm. We aim reconstruct 𝑓 on the box [
0, 2 ] − × [− ] mm.The length of y (i.e. the number of reconstructed pixels) is therefore 𝑛 × 𝑚 = × = 𝐴 ).We use the characteristic library consisting of 𝑙 =
405 elements, with widths 𝑤 𝑗 ∈ [
1, 3 ] cm at 5mm intervals, andcenters 𝑥 𝑐𝑗 ∈ [− ] mm at 5mm intervals. We are assuming 𝑓 ( 𝑞 , 𝑥 ) = | 𝑥 | > 𝑓 is zero outsideof the scanning tunnel depicted in figure 1. We also assume that 𝑓 is composed of crystallite samples with width notexceeding 3cm, e.g., this could be a small sample of narcotics hidden amongst clutter in a mail parcel. As a quantitative measure of the accuracy of our results we use the edge 𝐹 score [22–24]. Let 𝑓 and ˜ 𝑓 denote groundtruth and reconstructed images respectively. We employ the code of [25] here, which detects large jumps in the gradientimages (i.e. in ∇ 𝑓 and ∇ ˜ 𝑓 ). The edge 𝐹 score is a measure of how well we have recovered ∇ 𝑓 and the locations of theBragg peaks (i.e. the large spikes in the Bragg spectra of figure 5). Let TP, FN and FP denote the proportion of truepositives, false negatives and false positives in a classification result, respectively. Then the 𝐹 score is defined 𝐹 = + FP + FN . (3.2)Equivalently, the 𝐹 score is the geometric mean of the recall and precision of a classification result. The edge 𝐹 scoreis the 𝐹 score of (3.2) calculated between the edge maps ∇ 𝑓 and ∇ ˜ 𝑓 . Specifically, we use the code of [25] (with highgradient threshold) to calculate ∇ 𝑓 and ∇ ˜ 𝑓 and convert to binary images. Then, we vectorize said binary images anduse as input to equation (3.2).Refer back to equation 2.12 and the definitions of the hyperparameters 𝛼 , 𝛾 , 𝜆 . Through experimentation we foundthat the hyperparameter values 𝛼 = 𝑒 + 𝛾 = 𝑒 +
10 (no overlap constraint) worked wellfor the examples considered here. We choose the smoothing parameter 𝜆 ( 𝐿 smoothing) for the best results in terms ofedge 𝐹 score. Let 𝑍 denote the atomic number of the material. Then we consider the 𝐹 ( 𝑞 , 𝑍 ) curves for three materials here, namelyCarbon with a graphite structure (denoted C-graphite), salt and Carbon with a diamond structure (denoted C-diamond).We choose these three materials as they fall into our 𝑍 <
20 range, exhibit a variety of crystalline structures (e.g., hexagonal for graphite, and face-centered cubic for salt), and are thus suitable for testing our algorithm. Here 𝑍 replaces x ∈ R from section 2.2 to represent the material. Furthermore, the 𝐹(𝑞, 𝑍) curves for C-graphite, salt and C-diamond are well known and readily available in the literature (e.g., see [26] for the crystal structure of salt).

Calculating the 𝐹(𝑞, 𝑍) curves. The Bragg differential cross section 𝐹(𝑞, 𝑍) is defined [1, 27, 28]

𝐹(𝑞, 𝑍) ∝ (1/𝑞²) Σ_𝐻 δ(1/(2𝑑_𝐻) − 𝑞) 𝑑_𝐻 |𝐹_𝐻(𝑞)|² = 𝑔(𝑞, 𝑍), (3.3)

where δ is the Dirac delta function and

𝐹_𝐻(𝑞) = Σ_{𝑖=1}^{𝑛_𝑎} 𝐹_𝑖(𝑞) 𝑒^{−2π𝑖 x_𝑖 · 𝐻} (3.4)

is the scattering factor, where 𝑛_𝑎 is the number of atoms in a crystal cell, the x_𝑖 ∈ [0, 1]³ are the fractional coordinates of the atoms within the cell and 𝐹_𝑖 is the atomic form factor [29, 30] of atom 𝑖. For a given 𝑍, let 𝑄 = supp(𝑔(·, 𝑍)) ∩ {𝑞 < 2} = {𝑞₁, . . . , 𝑞_{𝑛_𝑞}} be the set of 𝑞 values for which 𝑔(·, 𝑍) is non-zero in the range 𝑞 ∈ [
0, 2], with |𝑄| = 𝑛_𝑞. We then model the 𝐹 curves in our simulations as the Gaussian mixture

𝐹(𝑞, 𝑍) ∝ Σ_{𝑗=1}^{𝑛_𝑞} 𝑔(𝑞_𝑗, 𝑍) 𝑒^{−(𝑞−𝑞_𝑗)²/𝜎²}, (3.5)

where 𝜎 is chosen to be small relative to 𝑞_max so that the Gaussians of (3.5) accurately represent the delta functions of (3.3). See figure 5 for plots of the 𝐹 curves for C-graphite, salt and C-diamond.

Fig. 5. 𝐹(𝑞) curve plots for (a) C-graphite, (b) NaCl and (c) C-diamond, with 𝑞 in inverse Angstroms. The plots are normalized by 𝐿^∞ norm (max value). We restrict the 𝑞 range to [0, 0.6].

For comparison, we consider a TV regularized solution, and use the microlocal filtering techniques explained in section 2.5 as data pre-processing to remove boundary artifacts. Specifically, we aim to find

argmin_{y ∈ [0, ∞)^{𝑛×𝑚}} Σ_𝑘 [(𝐴y)_𝑘 − 𝑏_𝑘 log(𝐴y)_𝑘] + 𝜆 TV_𝛽(y), (3.6)

where

TV_𝛽(y) = Σ_{𝑗=1}^{𝑚} Σ_{𝑖=1}^{𝑛} sqrt(|(∇𝑌)_{𝑖𝑗}|² + 𝛽) (3.7)

is a smoothed TV norm, where 𝑌 is y reshaped into an 𝑛 × 𝑚 matrix (image), and ∇𝑌 is the gradient image. The parameter 𝛽 > 0 ensures that TV_𝛽 is differentiable at zero. The smoothing parameter 𝜆 controls the level of TV regularization. TV regularization is applied to good effect in the BST literature [1, 3], which is why it is chosen as a point of comparison. We also combine TV ideas with those of microlocal analysis in section 2.5, to assist in the reduction of boundary artifacts. To solve (3.6) we implement the code “JR_PET_TV" of [18] with non-negativity constraints, which is applied in that paper to PET imaging. That is, we input filtered data b (using the filters of section 2.5 as pre-processing) to JR_PET_TV, constraining y ∈ [0, ∞)^{𝑛×𝑚}, and choose 𝜆, 𝛽 for the best results in terms of edge 𝐹 score. We denote this method by “FTV", which stands for Filtered Total Variation.

Here we consider the analytic data testing of Algorithm 2.4 by means of a Poisson noise model. That is, we use a scaled, matched model as the mean of a multivariate Poisson distribution to generate data, where the scaling factor controls the level of noise. We consider two imaging phantoms for reconstruction; see figure 6. The left-hand phantom is comprised of an NaCl and a C-graphite object with centers 𝑥_𝑐 = −… and 𝑥_𝑐 = … and width 𝑤 = …; the right-hand phantom has centers 𝑥_𝑐 = −… and 𝑥_𝑐 = … and width 𝑤 = …. Let y denote the vectorized form of 𝑓(·, ·, 𝑥) for a given 𝑥. The phantoms are then normalized to have max value ‖y‖_∞ = 1.

Fig. 6. Imaging phantoms: (a) Phantom 1, (b) Phantom 2, displayed in (𝑥, 𝑞) space with 𝑥 ∈ [−200, 200] mm.
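The FTV objective of (3.6)–(3.7) can be evaluated directly. Below is a minimal numerical sketch (in Python, for illustration only; our comparison uses the JR_PET_TV code of [18]), with forward differences standing in for the gradient image ∇𝑌. The function names are ours, not from [18].

```python
import numpy as np

def tv_beta(Y, beta):
    """Smoothed TV norm of equation (3.7): sum of sqrt(|grad Y|^2 + beta)."""
    gx = np.diff(Y, axis=0, append=Y[-1:, :])  # forward differences, replicated edge
    gy = np.diff(Y, axis=1, append=Y[:, -1:])
    return np.sum(np.sqrt(gx**2 + gy**2 + beta))

def ftv_objective(y, A, b, lam, beta, shape):
    """Poisson log-likelihood data term of (3.6) plus lam * TV_beta penalty.
    y is the vectorized (nonnegative) image, reshaped to `shape` for the TV term."""
    Ay = A @ y
    data_term = np.sum(Ay - b * np.log(Ay))
    return data_term + lam * tv_beta(y.reshape(shape), beta)
```

Note that 𝛽 > 0 keeps the square root (and hence the objective) differentiable even where the gradient image vanishes.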
The analytic data is generated by

b_𝜂 ∼ Poisson(𝜂_𝑐 𝑙_b 𝐴y / ‖𝐴y‖₁), (3.8)

where 𝑙_b is the length of b = 𝐴y. That is, the exact model 𝐴y is normalized in 𝐿¹ (by total photon counts) and scaled by 𝜂_𝑐 𝑙_b, then used as the mean of a multivariate Poisson distribution. The noisy data b_𝜂 is generated as a Poisson draw. The scaling parameter 𝜂_𝑐 is the average counts per detector and controls the level of noise, i.e., larger 𝜂_𝑐 implies less noise and vice versa. We consider two average count levels, 𝜂_𝑐 = 10 and 𝜂_𝑐 = 1, which approximately equate to relative least-squares errors of 𝜂_ls = 15% and 𝜂_ls = 50% respectively, where 𝜂_ls = ‖b − b_𝜂‖₂ / ‖b‖₂. We consider three imaging line profiles, namely 𝑥 = 0, 𝑥 = 205 and 𝑥 = −205 mm. The operator 𝐴 varies with 𝑥, so we are considering three 2-D linear inverse problems with three different 𝐴 operators in total.

(a) 𝜂_𝑐 = 10 (𝜂_ls ≈ 15%)
  Line profile   2DBSR   FTV
  𝑥 = 0          …       …
  𝑥 = 205        .88     .93
  𝑥 = −205       .88     .93

(b) 𝜂_𝑐 = 1 (𝜂_ls ≈ 50%)
  Line profile   2DBSR   FTV
  𝑥 = 0          …       …
  𝑥 = 205        .88     .73
  𝑥 = −205       .88     .84

Table 1. Phantom 1 results. The measurements in the left column are given in mm.

(a) 𝜂_𝑐 = 10 (𝜂_ls ≈ 15%)
  Line profile   2DBSR   FTV
  𝑥 = 0          …       …
  𝑥 = 205        .79     .89
  𝑥 = −205       .79     .89

(b) 𝜂_𝑐 = 1 (𝜂_ls ≈ 50%)
  Line profile   2DBSR   FTV
  𝑥 = 0          …       …
  𝑥 = 205        .79     .89
  𝑥 = −205       .79     .89

Table 2. Phantom 2 results. The measurements in the left column are given in mm.

Fig. 7. Phantom 1 reconstructions using 2DBSR (left) and FTV (right) along the central line profile (𝑥 = 0), with 𝜂_𝑐 = 1.

Fig. 8. Phantom 2 reconstructions using 2DBSR (left) and FTV (right) along the central line profile (𝑥 = 0), with 𝜂_𝑐 = 1.

See tables 1 and 2 for our results using 2DBSR (Algorithm 2.4) and FTV in terms of edge 𝐹 score. In figures 7 and 8 we present 2-D image reconstructions along the central line profile (𝑥 = 0) with 𝜂_𝑐 = 1. The image quality and 𝐹 scores are good and comparable using both methods. However, we notice a significantly increased 𝐹 score using FTV on phantom 2. This is because the phantom 2 materials were chosen to lie outside the imaging basis used for 2DBSR. In this case the 2DBSR algorithm chooses the centers and widths closest to the ground truth, namely 𝑥_𝑐 = ±… and 𝑤 = …, at a cost to the 𝐹 score. We could increase the size of the characteristic library to combat this, although this would come at a cost to the level of regularization and the implementation speed. When using FTV, we have no such restrictions on the imaging basis.

In the Monte Carlo tests conducted in this section, we aim to recover spherical crystalline objects which are centered on the central (𝑥 =
0) line profile. Consider the schematic shown in figure 9. Here we have displayed the attenuation image (𝜇_𝐸, in cm⁻¹, for 𝐸 = … keV) of the test parcel over 𝑥 ∈ [−300, 300] mm in each coordinate, with the scanning profile, the clutter and the crystals (e.g., C, NaCl) indicated.
Fig. 9. Test parcel.
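To make the test-parcel setup concrete, a 2-D attenuation map of this kind might be assembled as follows. This is an illustrative Python sketch only: the grid size, 𝜇 values and sphere positions below are placeholder assumptions, not the values used in our experiments.

```python
import numpy as np

def test_parcel_mu(n=256, extent=300.0, mu_cellulose=0.2, mu_crystal=0.5,
                   sphere_centers=((-40.0, 0.0), (40.0, 0.0)), radius=10.0):
    """Sketch of a test-parcel attenuation map on an n x n grid over
    [-extent, extent] mm: a 30 cm clutter box at one-tenth the (placeholder)
    cellulose coefficient, plus crystal spheres centered on the x2 = 0 profile."""
    xs = np.linspace(-extent, extent, n)
    X1, X2 = np.meshgrid(xs, xs, indexing="ij")
    mu = np.zeros((n, n))
    box = (np.abs(X1) <= 150.0) & (np.abs(X2) <= 150.0)   # 30 cm clutter box
    mu[box] = mu_cellulose / 10.0                         # clutter: cellulose / 10
    for cx, cy in sphere_centers:
        mu[(X1 - cx)**2 + (X2 - cy)**2 <= radius**2] = mu_crystal
    return mu
```

The crystal spheres overwrite the clutter values, and everything outside the box is vacuum (zero attenuation), mirroring the layout of figure 9.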
We use the imaging parameters (e.g., source/detector positions, source energies) specified in section 3.1. The 𝜖 coordinate of the detector array is determined by Φ (equation (3.1)), and is fixed at 𝜖 = Φ(…), the 𝜖 coordinate of the detector array which corresponds to 𝑥 = 0. As discussed in section 3.1, we use 29 source energies 𝐸 ∈ [1, 29] keV at 1 keV intervals. The source intensity used here is 𝐼(𝐸) = … counts per projection, per energy, which amounts to 1.74 × 10^… counts per projection over all 29 energy bins. Equivalently, the source intensity is 2.94 × 10^… counts, per cm², per energy bin, at 1 m from the source.

The clutter is simulated as a 30 cm box centered at the origin. The clutter attenuation coefficient is simulated as one-tenth that of cellulose. Cellulose is chosen as a material as it is a primary component of paper and clothing, which are commonly found in airport luggage and mail parcels. The attenuation coefficient of the cellulose is scaled by 1/10.
Fig. 10. Plots of relative, systematic attenuation error (left) and absolute Compton signal (right) against sphere radius 𝑟 (mm) and energy 𝐸 (keV): (a) relative 𝜇 error, (b) absolute Compton signal.

  Experiment                                        S1    S2    FTV
  St (𝑥_𝑐 = …, 𝑟 = 10)                              .82   .83   .17
  St (𝑥_𝑐 = …, 𝑟 = 15)                              .66   .76   .08
  C (𝑥_𝑐 = …, 𝑟 = 15) (bold)                        .64   .73   .05
  Di (𝑥_𝑐 = −…, 𝑟 = …)* (bold)                      .60   .71   .42
  St (𝑥_𝑐 = −…, 𝑟 = …; 𝑥_𝑐 = …, 𝑟 = 5)              .73   .70   .54
  St (𝑥_𝑐 = −…, 𝑟 = …; 𝑥_𝑐 = …, 𝑟 = 10) (bold)      .59   .66   .10
  St (𝑥_𝑐 = −…, 𝑟 = …; 𝑥_𝑐 = …, 𝑟 = …)* (bold)      .57   .52   .30
  St (𝑥_𝑐 = −…, 𝑟 = …; 𝑥_𝑐 = …, 𝑟 = 10) (bold)      .42   .35   .05
  C (𝑥_𝑐 = −…, 𝑟 = …; 𝑥_𝑐 = …, 𝑟 = 10)              .60   .67   .05

Table 3. 𝐹 scores for each experiment conducted, comparing stage 1 (S1) and stage 2 (S2) of 2DBSR (ours) with FTV (literature). The measurements in the brackets in the left-hand column are in millimeters, and describe the centers (𝑥_𝑐) and radii (𝑟) of the spheres scanned. Here “St" denotes NaCl (salt), “C" denotes C-graphite and “Di" denotes C-diamond. The asterisked entries denote materials which are outside of the imaging basis.

Let 𝑥_𝑐 denote the sphere center 𝑥 coordinate. The remaining sphere center coordinates are fixed so that the spheres are centered on the central (𝑥 = 0) line profile. Let 𝑟 ∈ [
5, 15] mm denote the sphere radius. For example, the spheres shown in figure 9 have center 𝑥 coordinates 𝑥_𝑐 = ±… and radii 𝑟 = …. We vary 𝑥_𝑐 and 𝑟, for materials contained inside and outside the imaging basis used (as detailed in section 3.1).

To generate the data, we use a novel, single scatter Monte Carlo (MC) code developed in Matlab, which we introduce in this paper. A step-by-step description of the MC code and a pseudo code are provided in the appendix. The full code is available from the authors upon request. The code fully incorporates noise effects due to attenuation and self-absorption, clutter effects and Compton scatter. We do not assume knowledge of the attenuation coefficient in our simulations, and thus there is systematic error in the data due to the neglect of attenuation modeling. In the energy and scattering angle range considered (𝐸 < … keV and 𝜔 < 𝜋) the Compton scatter intensity is significantly lower than that of the Bragg signal. Hence the Compton scatter will act as an additional background noise in our simulations, along with the Poisson noise due to photon arrivals.

Fig. 11. Illustration of the two-stage reconstruction process used for 2DBSR. We show reconstructions of an NaCl sphere, center 𝑥_𝑐 = …, 𝑟 = …: (a) ground truth, (b) stage 1, (c) stage 2, with (d), (e) the corresponding 𝑓(𝑞) line profiles against the ground truth (GT). The 𝐹 scores corresponding to each stage are: stage 1, 𝐹 = …; stage 2, 𝐹 = ….

In figure 10 we have plotted the systematic attenuation error and Compton signal for a single sphere of Carbon-graphite with center 𝑥_𝑐 = 0, for varying 𝑟 and 𝐸. We see in figure 10a that as 𝑟 increases and 𝐸 decreases (towards the bottom-right corner of the figure) the systematic error due to attenuation increases, due to higher self-absorption at low energy (fewer photons penetrate) and at higher 𝑟 (the photons that do penetrate suffer greater attenuation, by Beer's law). We see the converse effect for the Compton noise in figure 10b, as much of the Compton scatter is self-absorbed for low 𝐸 and high 𝑟. This is as we would expect, and figure 10a appears as the negative image of figure 10b (after scaling and translation). So there is a trade-off to consider here, and no size of object or photon energy is necessarily preferred over another. The MC experiments considered here are designed to reflect a variety of levels of Compton noise and attenuation error in the data (i.e., we consider a variety of 𝑟 and 𝐸).

To obtain the 2DBSR reconstructions presented in this section we implement the following two-stage reconstruction procedure. First, we apply Algorithm 2.4, with the filtered Bragg data b (using the filters specified in section 2.5), characteristic library and hyperparameters (as specified in section 3.1) as input, choosing 𝜆 for the best results in terms of edge 𝐹 score. Then, we perform a second full pass of Algorithm 2.4, replacing 𝜆 → 𝜆 × 10. We found that increasing 𝜆 by an order of magnitude on the second pass of Algorithm 2.4 helped to reduce background noise in the reconstruction, once the more significant artifacts (e.g., boundary artifacts) had been removed in the first pass of Algorithm 2.4. To show how this works, see figure 11 for image reconstructions of an NaCl sphere, center 𝑥_𝑐 = …, 𝑟 = …; note the improvement in 𝐹 score as we advance from stage 1 to stage 2.

Fig. 12. C-graphite sphere reconstruction, 𝑥_𝑐 = …, 𝑟 = …, comparing the ground truth (GT), the 2-D reconstructions using 2DBSR (S2) and FTV, and line profiles (LP, 𝑥 = … mm).

Fig. 13. C-diamond sphere reconstruction, 𝑥_𝑐 = −…, 𝑟 = ….

Fig. 14. NaCl and C-diamond sphere reconstruction, centers 𝑥_𝑐 = −… and 𝑥_𝑐 = …, 𝑟 = ….

Fig. 15. NaCl and C-graphite sphere reconstruction, centers 𝑥_𝑐 = −… and 𝑥_𝑐 = …, 𝑟 = ….

Fig. 16. Two NaCl spheres reconstruction, centers 𝑥_𝑐 = −… and 𝑥_𝑐 = …, 𝑟 = ….

As we choose 𝜆 for FTV for the best results, increasing 𝜆 → 𝜆 × 10 gave the same result (i.e., the best result for FTV over all 𝜆).

In table 3 we have presented our results in terms of edge 𝐹 score for the nine MC experiments conducted. In table 3 we use the shorthand notation “St" for NaCl (salt), “C" for C-graphite and “Di" for C-diamond. In brackets we specify the centers and radii of the spheres in millimeters. For example, “St (𝑥_𝑐 = …, 𝑟 = …)" denotes a salt sphere with center 𝑥 coordinate 𝑥_𝑐 and radius 𝑟. In the majority of cases, the second stage (S2) improved the 𝐹 score, when compared to S1.

In table 3 we have highlighted five rows in bold. These correspond to the image reconstructions in figures 12–16, which are chosen to highlight the best and worst results using our method (at stage 2), for materials within and outside of the imaging basis. In figures 12–16, “GT" denotes the “Ground Truth", and “LP" denotes “Line Profile". We specify the line profile equation in brackets in each case. In figure 12 we have presented image reconstructions of a C-graphite sphere, center 𝑥_𝑐 = …, 𝑟 = …. In the FTV reconstruction the ground truth Bragg spectra are distorted in the positive 𝑞 direction, and we see a shifting of the Bragg peaks in the 𝑥 = … line profile. Due to the restrictions placed on 𝑓 when using 2DBSR (as in equation (2.10)), the 2DBSR algorithm does not allow for such structured distortions in the reconstruction. Hence we see a complete removal of the artifacts in the 2DBSR reconstruction, with an increase in 𝐹 score as a result. In figure 13 we see a similar effect, although the 𝐹 score using 2DBSR is decreased as, in this case, the C-diamond sphere is out of basis. The 𝐹 score using FTV is improved, however, and the shifting of the Bragg peaks is reduced in the reconstruction. This is because the attenuation error is decreased due to the smaller sphere radius (𝑟 = …). In figure 14 we present reconstructions of the NaCl and C-diamond spheres with centers 𝑥_𝑐 = −… and 𝑥_𝑐 = …, 𝑟 = …. Here there is some background noise in the 2DBSR reconstruction (𝐹 = …). We tried increasing 𝜆 (the 𝐿¹ regularization parameter) and running further iterations of 2DBSR as in the second stage of the two-stage reconstruction process outlined above, with the idea that the sparse regularizers would remove the noise and focus on the more significant (true) Bragg peaks in the reconstruction. 
The centers and widths of the spheres in the 2DBSR reconstruction are recovered correctly, and the 𝐹 score is acceptable. Furthermore, the results using 2DBSR are much improved over FTV, where we see severe artifacts in the reconstruction, with low 𝐹 score (𝐹 = …). In figure 15 we present 𝑓(𝑞, 𝑥) reconstructions of an NaCl and a C-graphite sphere with centers 𝑥_𝑐 = −… and 𝑥_𝑐 = …, 𝑟 = …, where we see a low 𝐹 score using both methods. The reason for the poor image quality is that the scattered signal contribution from the Carbon-graphite sphere is more than twice that of the salt sphere. The sum of the counts (taken over all 𝑝) is …, and the sum of the counts from the Carbon-graphite sphere is 1.7 × 10^…; the NaCl scatter therefore accounts for 29% of the total signal. As the signal from the NaCl sphere is largely hidden by that of the Carbon, this makes the NaCl 𝑓(𝑞, 𝑥) function more difficult to recover, and it is set close to zero by 2DBSR and FTV. Similar effects are seen in conventional X-ray CT, for example in head scans, where the bone and skull show up strongly in the image (due to higher absorption) and the skull interior and soft tissue are harder to see [18, figure 1]. In figure 16 we show 𝑓(𝑞, 𝑥) reconstructions of two NaCl spheres with centers 𝑥_𝑐 = −… and 𝑥_𝑐 = …, 𝑟 = …. Here the artifacts in the FTV reconstruction lessen with decreasing 𝑟 (i.e., as the attenuation error decreases). This observation is supported by the results of table 3, as the 𝐹 scores using FTV increase (and become more competitive with 2DBSR) with decreasing 𝑟. Thus it seems that the main cause of error in the FTV reconstructions is due to attenuation modeling, based on the error analysis of figure 10.
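All of the edge 𝐹 scores reported in tables 1–3 follow equation (3.2). As a minimal sketch of this final scoring step (in Python, for illustration; in our pipeline the binary edge maps come from the edge-detection code of [25], which we take as given here):

```python
import numpy as np

def edge_f_score(edges_true, edges_rec):
    """Edge F score of equation (3.2): F = 2TP / (2TP + FP + FN),
    computed between two vectorized binary edge maps."""
    t = np.asarray(edges_true, dtype=bool).ravel()
    r = np.asarray(edges_rec, dtype=bool).ravel()
    tp = np.sum(t & r)    # true positives: edges present in both maps
    fp = np.sum(~t & r)   # false positives: spurious reconstructed edges
    fn = np.sum(t & ~r)   # false negatives: missed ground truth edges
    return 2 * tp / (2 * tp + fp + fn)
```

A perfect edge match scores 1, and a reconstruction sharing no edges with the ground truth (while each map is non-empty) scores 0.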
4. Conclusions and further work
Here we have presented the 2DBSR algorithm, a novel reconstruction technique for two-dimensional Bragg scatter imaging. The regularization penalties we applied are based on ideas in multibang control and compressive sensing. We also incorporated filtering ideas from microlocal analysis for data pre-processing, with the aim of suppressing boundary-type artifacts (such as those observed in [16]) in the reconstruction. In section 2.3 we formalized our approach, and we detailed the 2DBSR algorithm in section 2.4. The microlocal filters used for data pre-processing were defined as polynomials in 𝐸 in section 2.5. In section 3 we designed and conducted a variety of BST experiments, using analytic and Monte Carlo data, which are of interest in airport baggage screening and threat detection. Here we compared the performance of 2DBSR to a Filtered Total Variation (FTV) approach. In the analytic data experiments both methods performed well, with FTV slightly outperforming 2DBSR in terms of 𝐹 score in most cases. In the Monte Carlo experiments, 2DBSR was shown to offer significantly higher performance than FTV in all cases considered (nine MC experiments in total). In the FTV reconstructions we saw severe image artifacts due to errors in the attenuation modeling. The artifacts appeared in the image as a shifting/distorting of the ground truth Bragg spectra in the positive 𝑞 direction, which caused a reduction in 𝐹 score. In the majority of cases, the 2DBSR algorithm was successful in combatting the image artifacts due to attenuation modeling (among other errors, such as those due to Compton scatter), and the 𝐹 score was much improved over FTV.

In future work we aim to consider joint electron density/Bragg spectra reconstruction using a combination of Bragg and Compton scatter data. Here we aim to use the models from the Compton scatter tomography literature [31–37] to improve the forward model (2.1), and the recovery of 𝑓(𝑞, 𝑥) in 𝑥 space.
That is, the Compton data will be primarily used for density recovery and locating the crystallites (i.e., recovering the centers and widths of the materials), with the recovery in 𝑞 space coming mainly from the Bragg data. We expect that such ideas will be effective in combatting the more significant errors observed in figures 15 and 16 in the 2DBSR reconstructions, where we saw the zeroing out of the NaCl sphere (in figure 15) and blurring of the ground truth Bragg spectra (in figure 16), which yielded poor 𝐹 scores.

Funding
This material is based upon work supported by the U.S. Department of Homeland Security Science and Technology Directorate, Office of University Programs, under Grant Awards 18STEXP00001-03-02, formerly 2013-ST-061-ED0001, and 70RSAT19FR0000155. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Department of Homeland Security.
Acknowledgments
We gratefully acknowledge the financial support from the U.S. Department of Homeland Security Science and Technology Directorate.
Disclosures
The authors declare no conflicts of interest.
Data availability statement
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
Appendix. Description of Monte Carlo code
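As a rough companion to the step-by-step description given in this appendix, the single-photon trial can be sketched as follows. This is an illustrative Python translation only: our actual code is written in Matlab, and every material-model argument below (the attenuation callables, path integrals and cross-section samplers) is a hypothetical stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# interaction space Omega: no interaction, photoelectric, incoherent, coherent
INTERACTIONS = ["NI", "PE", "INC", "COH"]

def single_trial(spectrum, mu, mu_parts, path_sx, path_xd, sample_omega, scattered_energy):
    """One photon trial for a fixed scattering site x, source s and detector d.
    spectrum               -- source energies to sample from (step 1)
    mu(E)                  -- total attenuation coefficient at energy E
    mu_parts(E)            -- dict {"PE", "INC", "COH"} of coefficients summing to mu(E)
    path_sx(E), path_xd(E) -- line integrals of mu along s->x and x->d
    sample_omega(kind, E)  -- polar scattering angle draw in [0, pi]
    scattered_energy(kind, E, omega) -- scattered energy E_s."""
    E = rng.choice(spectrum)                        # 1. sample the initial energy
    if rng.random() > np.exp(-path_sx(E)):          # 2. Bernoulli: photon reaches x?
        return None
    if mu(E) == 0.0:                                # vacuum at x: simply transmitted
        return None
    p_ni = np.exp(-mu(E))                           # 3. interaction type at x, eq. (A.1)
    probs = [p_ni] + [(1.0 - p_ni) * mu_parts(E)[k] / mu(E) for k in ("PE", "INC", "COH")]
    kind = rng.choice(INTERACTIONS, p=probs)
    if kind in ("NI", "PE"):                        # transmitted or absorbed: no count
        return None
    omega = sample_omega(kind, E)                   # 4. polar angle from cross section
    phi = rng.uniform(0.0, 2.0 * np.pi)             #    uniform azimuth over the cone
    E_s = scattered_energy(kind, E, omega)
    if rng.random() > np.exp(-path_xd(E_s)):        # 5. Bernoulli: photon reaches d?
        return None
    return E_s, omega, phi                          # 6. count one photon with energy E_s
```

For coherent scatter one would plug in a Bragg spectrum of figure 5 for `sample_omega` and the identity `E_s = E`; for Compton scatter, the Klein-Nishina distribution [38, 39] and the Compton energy shift.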
Consider the scattering event pictured in figure 17. Let 𝐿_{xy} = {x + 𝑡(y − x) : 𝑡 ∈ ℝ} denote the line through points x and y, and let us consider the scattering interaction space Ω = {NI, PE, INC, COH}, where NI denotes “no interaction" (transmitted photons), PE denotes “photoelectric absorption", INC denotes “incoherent scatter", and COH denotes “coherent scatter".

Fig. 17. A scattering event occurs at a scattering site x, for photons emitted from a source s and recorded at a detector d (displayed as a small square in the 𝑥₁𝑥₂ plane). The initial photon energy is 𝐸 and the scattered energy is 𝐸_𝑠. Here v is the direction normal to the detector surface. The scattering angle is 𝜔.

Let 𝜇(𝐸, 𝑍) = Σ_{E ∈ Ω\{NI}} 𝜇_E(𝐸, 𝑍) denote the total attenuation coefficient, where 𝜇_E is the attenuation coefficient for the interaction type E ∈ Ω\{NI}, and 𝑍 is the atomic number. Then the Monte Carlo algorithm, for a single photon trial, scattering site (x) and source position (s), reads as follows:

1. Sample the initial photon energy 𝐸 from the source spectrum. The photon is emitted from s and travels in the direction x − s towards x.

2. Sample from a Bernoulli distribution, where 𝑒^{−∫_{𝐿_{sx}} 𝜇(𝐸, 𝑍)} is the probability of success, to decide if the photon reaches x.

3. Determine the photon interaction by sampling from a finite distribution on Ω, with 𝑃(NI) = 𝑒^{−𝜇(𝐸, 𝑍)} and

𝑃(E) = (1 − 𝑃(NI)) 𝜇_E(𝐸, 𝑍)/𝜇(𝐸, 𝑍) (A.1)

for E ∈ Ω\{NI}.

4. Sample 𝜔 ∈ [0, 𝜋] (the polar angle of the scatter direction) from the differential cross section distribution for the given interaction type, keeping a uniform spread over each scattering cone, namely the cone with central axis direction x − s (the dashed line in figure 17), vertex x and opening angle 𝜔. That is, we sample the azimuth angle of the scatter direction (in [0, 2𝜋]) from a uniform distribution.

5. If the scattered photon travels in the direction d − x towards d, sample from a second Bernoulli distribution, with 𝑒^{−∫_{𝐿_{xd}} 𝜇(𝐸_𝑠, 𝑍)} as the probability of success, to decide whether the photon reaches d.

6. If the scattered photon reaches d, count one photon with energy 𝐸_𝑠, else continue. Repeat for all detectors in the array.

The pseudo code detailed above is applied to the geometry of figure 1, with the differential cross section distribution set to one of the Bragg spectra of figure 5, depending on the material. For Compton scatter, the differential cross section is set to the Klein-Nishina distribution [38, 39]. To generate the data used in section 3.6, we expand the single trial, single scatter site code to multiple scattering events along lines parallel to 𝑥 (e.g., 𝐿 as pictured in figure 1).

References
1. J. W. Webber and E. L. Miller, “Bragg scattering tomography,” arXiv preprint arXiv:2004.10961 (2020).2. M. Hassan, J. A. Greenberg, I. Odinaka, and D. J. Brady, “Snapshot fan beam coded aperture coherent scatter tomography,” Opt. express ,18277–18289 (2016).3. J. A. Greenberg, K. Krishnamurthy, and D. Brady, “Snapshot molecular imaging using coded energy-sensitive detection,” Opt. express ,25480–25491 (2013).4. K. MacCabe, K. Krishnamurthy, A. Chawla, D. Marks, E. Samei, and D. Brady, “Pencil beam coded aperture x-ray scatter imaging,” Opt. Express , 16310–16320 (2012).5. J. A. Greenberg, M. Hassan, K. Krishnamurthy, and D. Brady, “Structured illumination for tomographic x-ray diffraction imaging,” Analyst ,709–713 (2014).6. J. A. Greenberg, K. Krishnamurthy, M. Lakshmanan, K. MacCabe, S. Wolter, A. Kapadia, and D. Brady, “Coding and sampling for compressivex-ray diffraction tomography,” in Wavelets and Sparsity XV, vol. 8858 (International Society for Optics and Photonics, 2013), p. 885813.7. J. A. Greenberg, M. N. Lakshmanan, D. J. Brady, and A. J. Kapadia, “Optimization of a coded aperture coherent scatter spectral imaging systemfor medical imaging,” in
Medical Imaging 2015: Physics of Medical Imaging, vol. 9412 (International Society for Optics and Photonics, 2015), p.94125E.8. L. Rencker, F. Bach, W. Wang, and M. D. Plumbley, “Sparse recovery and dictionary learning from nonlinear compressive measurements,” IEEETransactions on Signal Process. , 5659–5670 (2019).9. X. Li and S. Luo, “A compressed sensing-based iterative algorithm for ct reconstruction and its possible application to phase contrast imaging,”Biomed. engineering online , 73 (2011).10. E. Herrholz, D. Lorenz, G. Teschke, and D. Trede, Sparsity and Compressed Sensing in Inverse Problems (Springer International Publishing,Cham, 2014), pp. 365–379.11. C. Clason and K. Kunisch, “Multi-bang control of elliptic systems,” in
Annales de l’IHP Analyse non linéaire, vol. 31 (2014), pp. 1109–1130.12. C. Clason and K. Kunisch, “A convex analysis approach to multi-material topology optimization,” ESAIM: Math. Model. Numer. Analysis ,1917–1936 (2016).13. C. Clason, F. Kruse, and K. Kunisch, “Total variation regularization of multi-material topology optimization,” ESAIM: Math. Model. Numer.Analysis , 275–303 (2018).14. J. Frikel and E. T. Quinto, “Characterization and reduction of artifacts in limited angle tomography,” Inverse Probl. , 125007 (2013).15. L. Borg, J. Frikel, J. S. Jørgensen, and E. T. Quinto, “Analyzing reconstruction artifacts from arbitrary incomplete x-ray ct data,” SIAM J. onImaging Sci. , 2786–2814 (2018).16. J. W. Webber and E. T. Quinto, “Microlocal analysis of generalized radon transforms from scattering tomography,” arXiv preprint arXiv:2007.00208(2020).17. W. H. Bragg and W. L. Bragg, “The reflection of x-rays by crystals,” Proc. Royal Soc. London. Ser. A, Containing Pap. a Math. Phys. Character , 428–438 (1913).18. M. J. Ehrhardt, K. Thielemans, L. Pizarro, D. Atkinson, S. Ourselin, B. F. Hutton, and S. R. Arridge, “Joint reconstruction of pet-mri by exploitingstructural similarity,” Inverse Probl. , 015001 (2014).19. R. H. Byrd, P. Lu, J. Nocedal, and C. Zhu, “A limited memory algorithm for bound constrained optimization,” SIAM J. on scientific computing , 1190–1208 (1995).20. A. Kak and M. Slaney, “Principles of computerized tomographic imaging ieee press,” New York (1988).21. C. K. Egan, S. D. Jacques, T. Connolley, M. D. Wilson, M. C. Veale, P. Seller, and R. J. Cernik, “Dark-field hyperspectral x-ray imaging,” Proc.Royal Soc. A: Math. Phys. Eng. Sci. , 20130629 (2014).22. J. Webber, E. T. Quinto, and E. L. Miller, “A joint reconstruction and lambda tomography regularization technique for energy-resolved x-rayimaging,” Inverse Probl. (2020).23. H. Andrade-Loarca, G. Kutyniok, and O. 
Öktem, “Shearlets as feature extractor for semantic edge detection: The model-based and data-drivenrealm,” arXiv preprint arXiv:1911.12159 (2019).24. A. A. Taha and A. Hanbury, “Metrics for evaluating 3d medical image segmentation: analysis, selection, and tool,” BMC medical imaging , 29(2015).25. J. Canny, “A computational approach to edge detection,” IEEE Transactions on pattern analysis machine intelligence pp. 679–698 (1986).26. C. Zhu, C. Arson, and A. Pouya, “Theoretical study of damage accommodation in salt subject to viscous fatigue,” in Mechanical Behaviour ofSalt VIII, (CRC Press, 2015), pp. 331–342.27. R. Bryan, “International tables for crystallography. Vol. C. Mathematical, physical and chemical tables edited by A. J. C. Wilson,” (1993).28. J. J. DeMarco and P. Suortti, “Effect of scattering on the attenuation of X-rays,” Phys. Rev. B , 1028 (1971).29. J. Hubbell, W. J. Veigele, E. Briggs, R. Brown, D. Cromer, and d. R. Howerton, “Atomic form factors, incoherent scattering functions, and photonscattering cross sections,” J. physical chemical reference data , 471–538 (1975).30. J. H. Hubbell and I. Overbo, “Relativistic atomic form factors and photon coherent scattering cross sections,” J. Phys. Chem. Ref. Data , 69–106(1979).31. V. Palamodov, “An analytic reconstruction for the Compton scattering tomography in a plane,” Inverse problems , 125004 (2011).32. M. Nguyen and T. Truong, “Inversion of a new circular-arc Radon transform for Compton scattering tomography,” Inverse Probl. , 065005(2010).33. G. Rigaud, M. K. Nguyen, and A. K. Louis, “Novel numerical inversions of two circular-arc Radon transforms in Compton scattering tomography,”Inverse Probl. Sci. Eng. , 809–839 (2012).34. T. T. Truong and M. K. Nguyen, “New properties of the V-line Radon transform and their imaging applications,” J. Phys. A: Math. Theor. ,405204 (2015).35. T. T. Truong, M. K. Nguyen, and H. Zaidi, “The mathematical foundations of 3D Compton scatter emission imaging,” Int. 
Journal of Biomedical Imaging (2007). 36. G. Rigaud and B. N. Hahn, “3D Compton scattering imaging and contour reconstruction for a class of Radon transforms,” Inverse Probl., 015006 (2015). 38. O. Klein and Y. Nishina, “The scattering of light by free electrons according to Dirac's new relativistic dynamics,” Nature, 398–399 (1928). 39. O. Klein and Y. Nishina, “Über die Streuung von Strahlung durch freie Elektronen nach der neuen relativistischen Quantendynamik von Dirac,” Zeitschrift für Physik, 853–868 (1929).