ANN-assisted CoSaMP Algorithm for Linear Electromagnetic Imaging of Spatially Sparse Domains
Ali I. Sandhu, Salman A. Shaukat, Abdulla Desmal, Hakan Bagci
A Neural Network Assisted Greedy Algorithm for Sparse Electromagnetic Imaging
A. I. Sandhu, S. A. Shaukat, A. Desmal, and H. Bağcı
Division of Computer, Electrical, and Mathematical Science and Engineering (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia, 23955-6900. Email: [email protected]; [email protected]
Department of Electrical Engineering, Higher Colleges of Technology (HCT), Ras Al-Khaimah, UAE. Email: [email protected]
Abstract—Greedy pursuit algorithms (GPAs) are well-appreciated candidates for accurate and efficient sparse reconstruction in signal and image processing applications. Even though many electromagnetic (EM) imaging applications are naturally sparse, GPAs have rarely been explored for this purpose. This is because, for accurate reconstruction, GPAs require (i) the exact number of non-zeros, k, in the unknown to be reconstructed, information that is not available a priori in EM imaging applications, and (ii) a measurement matrix that satisfies the restricted isometry property (RIP), whereas the EM scattering matrix, which is obtained by sampling the Green's function between measurement locations and the unknowns, does not satisfy the RIP. To address these limitations, two solutions are proposed. First, an artificial neural network (ANN) is trained on synthetic measurements so that, given a set of measurements, it produces an estimate of k. Second, a second-norm Tikhonov regularization term is added to the diagonal elements of the scattering matrix, which scales the eigenvalues of the scattering matrix such that it satisfies the RIP. The CoSaMP algorithm, which is at the heart of GPAs, is then applied to accurately and efficiently reconstruct the unknown. The proposed scheme implicitly imposes the sparsity constraint, as the regularization parameter is specified by the ANN; hence no additional tuning is required from the user. Numerical results demonstrate the efficiency and superiority of the proposed scheme.

I. INTRODUCTION
Compressed sensing (CS) [1] has introduced several new approaches for sparse reconstruction in signal and image processing [1], [2]. Given a reconstruction problem, a CS algorithm seeks the sparsest approximation to the solution, i.e., the solution whose cardinality (measured by its L0-norm) is minimum. It is well known that a direct solution of an L0-constrained minimization problem is not feasible [3]; however, greedy pursuit algorithms (GPAs), under certain conditions, provide a well-approximated solution to L0-constrained linear inverse problems [3], [4]. These algorithms work by successively identifying single or multiple locally optimal candidates that best represent the signal at a given stage, with the hope of approximating the globally optimal solution in a reasonable time. The algorithms most widely used in the image and signal processing community include, but are not limited to, orthogonal matching pursuit (OMP) [5], regularized OMP [4], and compressive sampling matching pursuit (CoSaMP) [3].

Even though many electromagnetic (EM) imaging applications are naturally sparse, e.g., nondestructive testing, crack detection, and hydrocarbon reservoir exploration, the application of GPAs in this area is very limited. This is because, for reliable and efficient reconstruction, GPAs require (i) a priori information about the exact number of non-zero elements, k, to be reconstructed, which is not available in EM imaging applications, and (ii) a measurement matrix that satisfies the restricted isometry property (RIP) [3]–[5]. The EM scattering matrix, which depends on the physics of the problem and is obtained by sampling the background medium's Green's function between measurement locations and the unknowns, does not satisfy the RIP [6]–[8].

CS algorithms have been adopted for EM imaging applications within the last decade.
In [8], a simultaneous OMP algorithm is used to resolve targets as a function of the average number of transmitters and receivers used. The contrast levels and reconstruction accuracy were not reported, and the algorithm was provided with k, which limits its usefulness in an EM imaging framework. In [6], a phase-less reconstruction scheme (i.e., one using only the intensity of the scattered fields) is used to image low-contrast, point-like dielectric scatterers. The nonlinearity is alleviated using the Born approximation and the reconstruction is carried out using convex (i.e., L1) programming. In a recent work [7], a flexible tree-search-based OMP algorithm is used for the reconstruction of closely spaced, point-like scatterers; k is estimated by comparing the data misfit at each stage of the tree search. Within this framework, there is a tradeoff between reconstruction accuracy and computational complexity, which depends on the search tree size. The numerical results show accurate and sharp images; however, the algorithm's ability to reconstruct multiple connected or large objects is not demonstrated. This could be due to the associated computational complexity, or a consequence of the orthogonalization procedure underlying the OMP algorithm, which prevents it from recovering electrically large objects and hence limits its applicability to EM imaging problems. Another limitation of OMP is that if the very first estimate of the solution support is incorrect, the algorithm converges to a local minimum or an entirely incorrect solution.

To address the aforementioned limitations, two solutions are proposed. First, an artificial neural network (ANN) is trained beforehand on a synthetic training set (i.e., synthetic measurements) generated using different scatterers, orientations, contrast levels, and discretization mesh sizes, to obtain an efficient and reasonable estimate k̂ of k.
Second, a second-norm Tikhonov regularization term is added to the diagonal elements of the EM scattering matrix, which scales the eigenvalues of the matrix. This reduces the effect of noise on the reconstruction process and enables the scattering matrix to satisfy the RIP. The CoSaMP algorithm, which is at the heart of GPAs, is then applied to solve two-dimensional (2D) EM imaging problems. CoSaMP is more efficient and accurate than its OMP-based counterparts [3]: it estimates multiple basis elements at a time instead of one, and refines the support set iteratively. Consequently, it offers a faster rate of convergence and enables the reconstruction of large connected objects, a limitation usually associated with OMP-based algorithms.

The advantages of this Tikhonov- and ANN-enhanced CoSaMP algorithm are threefold. (i) CoSaMP estimates the unknowns by solving a least-squares problem on the refined support set (i.e., it considers only k̂ out of N columns of the scattering matrix, where N is the total number of unknowns and the column indices are identified in the support estimation step); consequently, it offers significant computational savings in contrast to solving the least-squares problem involving the full scattering matrix. (ii) It does not require tuning of a thresholding parameter [9], [10], since the sparsity parameter k̂ is implicitly determined by the ANN. It is important to note here that many ANN architectures based on convolutional neural networks (CNNs) have been studied for EM imaging [11], [12]. Those works train the network (mainly U-Net architectures) on a set of approximate contrast profiles originating either from a first-order approximation, e.g., the first-order Born approximation or back propagation, or from smooth images that do not contain higher-frequency components [11], [12]. In such scenarios, the neural network acts as a regularizer that restores finer image details, i.e., higher-frequency components.
In this work, the ANN does not take a set of first-order profiles at its input, nor does it produce the image; instead, it takes the measurements directly at its input and generates the sparsity estimate k̂. (iii) The reconstructed images are more accurate and sharper than those produced by smoothness-promoting inverse algorithms.

II. FORMULATION
A. Electromagnetic Formulation and Discretization
Let S represent the support of a 2D inhomogeneous investigation domain residing in an unbounded background medium. The permittivity and permeability in S and in the background medium are {ε(r), μ₀} and {ε₀, μ₀}, respectively. It is assumed that S is illuminated by N_T line-source transmitters which generate TM incident fields, E^inc_i(r), where the subscript i indexes the transmitters, i = 1, ..., N_T. Upon excitation by E^inc_i(r), a secondary electric current density is induced on S, which in turn generates the scattered electric field E^sca_i(r) satisfying [13]:

$$E^{\mathrm{sca}}_i(\mathbf{r}) = k^2 \int_S \tau(\mathbf{r}') E^{\mathrm{tot}}_i(\mathbf{r}') G(\mathbf{r}, \mathbf{r}')\, ds'. \quad (1)$$

The scattered field is measured away from S at N_R receivers located at r^R_m, m = 1, ..., N_R. Here, $G(\mathbf{r}, \mathbf{r}') = H_0^{(2)}(k|\mathbf{r} - \mathbf{r}'|)/(4j)$ is the 2D scalar Green's function, $k = \omega\sqrt{\varepsilon_0 \mu_0}$ is the wavenumber, $\tau(\mathbf{r}) = \varepsilon(\mathbf{r})/\varepsilon_0 - 1$ is the contrast relative to the background medium, and E^tot_i(r) is the total electric field inside S.

[Fig. 1: Description of the 2D EM inversion problem.]

To solve (1) numerically, S is discretized into N square cells such that $S = \bigcup_{n=1}^{N} S_n$, and inside each cell the contrast and the total electric field are assumed constant and expanded using pulse basis functions. The resulting equations are evaluated at the receiver locations, which yields the discretized system

$$\bar{E}^{\mathrm{sca}}_i = \bar{\bar{G}}\, \bar{\bar{D}}\{\bar{E}^{\mathrm{tot}}_i\}\, \bar{\tau} \quad (2)$$

where $\bar{\bar{D}}\{\bar{E}^{\mathrm{tot}}_i\}$ is a diagonal matrix with the samples of the total electric field on its diagonal, and the entries of $\bar{\bar{G}}$ are $\{\bar{\bar{G}}\}_{m,n} = k^2 \int_{S_n} G(\mathbf{r}^R_m, \mathbf{r}')\, ds'$. Before analyzing the efficiency of reconstructing the unknown contrast by applying the CoSaMP algorithm to (2), the scheme used to estimate the sparsity level is discussed below.

B. Sparsity Estimation
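As a concrete illustration of assembling the discretized operator in (2), the sketch below builds a toy version of the sampled-Green's-function matrix. It is a simplified stand-in for the paper's integral equation solver: the exact cell integral is replaced by midpoint quadrature (cell area times the Green's function evaluated at the cell center), and the grid size, receiver ring, and wavenumber are illustrative assumptions, not the paper's settings.

```python
# Sketch: assemble the matrix {G}_{m,n} of Eq. (2) for a toy 2D TM setup.
# Midpoint quadrature: {G}_{m,n} ≈ k^2 * A_n * H0^(2)(k |r_m - r'_n|) / (4j).
import numpy as np
from scipy.special import hankel2

def greens_matrix(cell_centers, rx, k, cell_area):
    """Return the N_R x N matrix of sampled Green's function values."""
    # Pairwise distances between receivers and cell centers.
    d = np.linalg.norm(rx[:, None, :] - cell_centers[None, :, :], axis=-1)
    return k**2 * cell_area * hankel2(0, k * d) / 4j

# 8x8 grid of square cells covering a 1 m x 1 m domain (assumed sizes).
n = 8
h = 1.0 / n
xs = (np.arange(n) + 0.5) * h
cells = np.array([(x, y) for y in xs for x in xs])

# 16 receivers on a circle of radius 3 m around the domain center.
ang = 2 * np.pi * np.arange(16) / 16
rx = 0.5 + 3.0 * np.stack([np.cos(ang), np.sin(ang)], axis=1)

G = greens_matrix(cells, rx, k=2 * np.pi, cell_area=h * h)
```

With the samples of the total field placed on the diagonal of $\bar{\bar{D}}$, the forward map of (2) is then a single matrix-vector product per illumination.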
Useful prior information about the object under test helps in adopting efficient regularization and imaging techniques, thereby reducing the computational complexity and increasing the reconstruction accuracy. Although greedy algorithms are highly efficient in recovering sparse signals, their reconstruction accuracy degrades severely if k is not known a priori. This is typically the case in EM imaging frameworks; consequently, greedy algorithms have rarely been explored for solving EM imaging problems. To this end, in this work, an ANN is trained and utilized to provide a good estimate k̂. More precisely, a simple two-layer feedforward perceptron network is used [Fig. 2(a)]. The proposed network takes the measured electric field values directly at its input layer. The training set is synthetically generated using the 2D volume integral equation solver formulated in the previous section.

[Fig. 2: (a) Sparsity estimation framework. (b) Convergence of training and validation error (mean squared error vs. epoch) of the ANN. (c)-(h) Testing analysis of the ANN.]

The scattered electric field corresponds to uniformly distributed scatterers (i) of different shapes, in particular circular rings with random radii and single and double cylinders with random radii and varied separation distances, (ii) with varying contrast levels, and (iii) with different numbers of discretization elements N. A set of training examples is synthetically generated, of which a portion is used for training and the remainder for testing the ANN. The training step minimizes the cost, which is the mean squared error between the network outputs and the targets: the normalized k (normalization is done with respect to the discretization size N, such that the training is discretization independent) and the contrast level in the investigation domain.
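The training loop described above can be sketched as follows. This is a minimal assumption-laden stand-in for the paper's two-layer feedforward estimator: the layer sizes, the toy synthetic data (random sparse contrasts pushed through a random linear "measurement" operator), and the plain gradient-descent training are all illustrative choices; only the overall idea (measurements in, normalized sparsity k/N out) follows the text.

```python
# Sketch: a two-layer feedforward network that regresses the normalized
# sparsity k/N directly from measurement magnitudes. Sizes and data are
# illustrative assumptions, not the paper's configuration.
import numpy as np

rng = np.random.default_rng(0)
N, M, H = 64, 32, 16            # unknowns, measurements, hidden units
A = rng.standard_normal((M, N)) # stand-in for the scattering operator

# Synthetic training set: random sparse contrasts and their measurements.
X, t = [], []
for _ in range(500):
    k = int(rng.integers(1, N // 4))
    tau = np.zeros(N)
    tau[rng.choice(N, k, replace=False)] = rng.uniform(0.1, 1.0, k)
    X.append(np.abs(A @ tau))
    t.append(k / N)             # normalized sparsity target
X, t = np.array(X), np.array(t)
X = X / X.max()                 # scale inputs for stable training

W1 = rng.standard_normal((M, H)) * 0.1; b1 = np.zeros(H)
W2 = rng.standard_normal(H) * 0.1;      b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)    # hidden layer
    return h, h @ W2 + b2       # linear output: estimate of k/N

_, y0 = forward(X)
loss0 = np.mean((y0 - t) ** 2)
lr = 1e-2
for _ in range(2000):           # full-batch gradient descent on MSE
    h, y = forward(X)
    g = 2.0 * (y - t) / len(t)              # dL/dy
    gW2, gb2 = h.T @ g, g.sum()
    gh = np.outer(g, W2) * (1 - h**2)       # backprop through tanh
    gW1, gb1 = X.T @ gh, gh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, y = forward(X)
loss_final = np.mean((y - t) ** 2)
```

At inference, the estimate is $\hat{k} = \mathrm{round}(N \cdot \text{network output})$, which is then passed to CoSaMP.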
It is important to note here that the contrast level is estimated at the ANN output merely to improve training (it is observed that providing more information at the ANN output during training produces accurate results over a wider range of examples); this additional information is not used by the integrated CoSaMP algorithm to reconstruct the profile.

Fig. 2(b) plots the convergence of the cost function while training and validating the ANN, in the mean-squared-error sense, with respect to the number of epochs. Once trained, the performance of the ANN on the testing set can be analyzed considering the following five factors: (i) the contrast in the investigation domain, (ii) the number of discretization elements N, (iii) the number of pixels in error, i.e., the absolute difference between the true and predicted number of non-zeros in the investigation domain, |k − k̂|, (iv) the minimum size of the scatterer in pixels, i.e., k, for which the ANN produced a certain error in the estimate k̂, and (v) the number of examples in the entire testing set for which the ANN produced a certain error. A compact demonstration with respect to all of these parameters is not feasible; the histograms that would demonstrate factor (v) for each contrast and discretization size are therefore omitted here. For the given experiment it is more appropriate to observe factor (iv), irrespective of how many examples were in error.

In Figs. 2(c)-(h), the y-axis represents factor (iv), i.e., the minimum value of k, for a given discretization size or contrast in the investigation domain (shown along the x-axis), for which the ANN produced a certain error in the estimate k̂. The values are also labeled on top of the respective bars. The blue, green, and yellow bars correspond to increasing magnitudes of the estimation error |k − k̂| in pixels, the yellow bars marking the maximum error observed over the range of testing examples.
For instance, consider Fig. 2(e): the yellow bar shows that there are a minimum of k = 80 non-zeros, at the given contrast, in the investigation domain having N = 784 elements when the ANN produced its maximum error. Similarly, a zero anywhere in the bar plots indicates that, for the given experiment, k̂ = k, i.e., not a single example resulted in an incorrect estimate. Fig. 2(g) shows similar error performance for fixed N = 784 but for contrast levels that were not used during training. Fig. 2(h) demonstrates the prediction accuracy of the ANN for Austria-like profiles. This particular testing set consists of examples including (i) a circular ring with a constant radius and the two outer cylinders rotated in fixed angular steps, and (ii) circular rings with varied radii and outer cylinders rotated in fixed angular steps. It is important to note here that the ANN is trained merely on independent sets of rings and cylinders; the Austria-like testing set is never used in training, and even then the proposed ANN architecture produced only a small maximum error. For this experiment, two discretization sizes and a fixed contrast level are used. Several numerical examples are presented in the results section, which demonstrate the superiority of the proposed scheme over other first-order reconstruction techniques.

C. CoSaMP Applied to EM Imaging
In this work, CoSaMP [3] is applied to (2), which constitutes a sparse linear inverse problem in the Born-approximated regime [13]. The optimization problem can be formulated as

$$\bar{\tau} = \min_{\bar{\tau}} \|\bar{\tau}\|_0 \quad \text{s.t.} \quad \left\|\bar{E}^{\mathrm{meas}} - \bar{\bar{H}}\bar{\tau}\right\|_2 \le \epsilon. \quad (3)$$

In the above, $\bar{E}^{\mathrm{meas}} \approx \bar{E}^{\mathrm{sca}} + \bar{\eta}$, where $\bar{\eta}$ contains the samples of additive white Gaussian noise. The subscript i is omitted, so that $\bar{E}^{\mathrm{meas}}$ represents a cascaded vector corresponding to all illuminations. The scattering matrix $\bar{\bar{H}}$ has to satisfy the RIP for the CoSaMP algorithm to converge [3], i.e., for any sparse vector $\bar{y}$ there should exist $\delta \in (0, 1)$ such that

$$(1 - \delta)\|\bar{y}\|_2^2 \le \|\bar{\bar{H}}\bar{y}\|_2^2 \le (1 + \delta)\|\bar{y}\|_2^2$$

holds. The infimum value of δ that satisfies the RIP is known as the restricted isometry constant (RIC), $\hat{\delta}$. For any system that satisfies the RIP with RIC $\hat{\delta}$, the following holds [3]:

$$(1 - \hat{\delta}) \le \mathrm{eig}(\bar{\bar{H}})_{\min} \le \mathrm{eig}(\bar{\bar{H}})_{\max} \le (1 + \hat{\delta}). \quad (4)$$

The lower bound in (4) is not satisfied because $\bar{\bar{H}}$ is ill-conditioned and $\mathrm{eig}(\bar{\bar{H}})_{\min} = 0$, which forces $\hat{\delta} = 1$ and hence breaks the RIP. To address this problem, the data misfit $\|\bar{E}^{\mathrm{meas}} - \bar{\bar{H}}\bar{\tau}\|$ in (3) is replaced with a second-norm Tikhonov-type system, which yields

$$\bar{\tau} = \min_{\bar{\tau}} \|\bar{\tau}\|_0 \quad \text{s.t.} \quad \left\|\tilde{\bar{E}}^{\mathrm{meas}} - \bar{\bar{H}}_\lambda \bar{\tau}\right\|_2 \le \epsilon. \quad (5)$$

In (5), $\bar{\bar{H}}_\lambda = \bar{\bar{H}}^\dagger \bar{\bar{H}} + \lambda \bar{\bar{I}}$, where $\bar{\bar{H}}^\dagger$ is the conjugate (Hermitian) transpose of $\bar{\bar{H}}$ and $\tilde{\bar{E}}^{\mathrm{meas}} = \bar{\bar{H}}^\dagger \bar{E}^{\mathrm{meas}}$. Eq. (5) can also be written as

$$\bar{\tau} = \min_{\bar{\tau}} \|\bar{\tau}\|_0 \quad \text{s.t.} \quad \left\|\tilde{\bar{E}}^{\mathrm{meas}} - \bar{\bar{H}}^\dagger \bar{\bar{H}} \bar{\tau}\right\|_2 + \lambda\|\bar{\tau}\|_2^2 \le \epsilon. \quad (6)$$

Eq. (6) represents an optimization problem with not only the sparsity constraint but also a second-norm regularization term with λ as the regularization parameter.
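The eigenvalue-shift argument behind (5) can be checked numerically: for any matrix H, the Gram matrix H†H is Hermitian positive semi-definite, and adding λI shifts every eigenvalue up by exactly λ, so the smallest eigenvalue of H_λ is bounded below by λ. The sketch below verifies this on a random complex matrix (the sizes and λ are arbitrary illustrative choices).

```python
# Sketch: eig(H^H H + lambda*I) = eig(H^H H) + lambda, elementwise.
# A "fat" 10x30 H makes H^H H rank-deficient, mimicking the
# ill-conditioned scattering matrix whose smallest eigenvalue is 0.
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((10, 30)) + 1j * rng.standard_normal((10, 30))
HtH = H.conj().T @ H                  # Hermitian, rank <= 10
lam = 0.5
H_lam = HtH + lam * np.eye(30)        # Tikhonov-shifted operator

eig0 = np.linalg.eigvalsh(HtH)        # ascending order
eig1 = np.linalg.eigvalsh(H_lam)
```

Because the smallest eigenvalue becomes λ > 0 rather than 0, the lower RIP bound in (4) is no longer forced to fail; this is exactly the role λ plays in the proposed scheme.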
It is known that $\bar{\bar{H}}$ is ill-conditioned, and so is $\bar{\bar{H}}^\dagger\bar{\bar{H}}$; however, by adding a selected parameter λ to the diagonal elements of $\bar{\bar{H}}^\dagger\bar{\bar{H}}$, the eigenvalues of $\bar{\bar{H}}_\lambda$ become $\mathrm{eig}(\bar{\bar{H}}^\dagger\bar{\bar{H}}) + \lambda$. Since λ is strictly positive, $\mathrm{eig}(\bar{\bar{H}}_\lambda)_{\min} = \lambda$, and hence the lower bound of the RIP can be satisfied, as $\hat{\delta}$ can now lie strictly within (0, 1). It should be noted here that λ is not introduced to promote smoothness in the solution, but only to satisfy the RIP. The CoSaMP algorithm is applied to (5), and an approximate sparse solution is sought using the following proposed algorithm:

Step 1: Initialize $\bar{r}^{(0)} \leftarrow \tilde{\bar{E}}^{\mathrm{meas}}$, $n \leftarrow 0$; set $\hat{k}$, $\lambda$.
Step 2: Repeat:
Step 2.1: $n \leftarrow n + 1$
Step 2.2: $\bar{y}^{(n)} \leftarrow \left|\bar{\bar{H}}_\lambda^\dagger\, \bar{r}^{(n-1)}\right|$
Step 2.3: $\Omega^{(n)} \leftarrow$ indices of the $\hat{k}$ largest entries of $\bar{y}^{(n)}$
Step 2.4: $F^{(n)} \leftarrow \Omega^{(n)} \cup \mathrm{supp}(\bar{\tau}^{(n-1)})$
Step 2.5: $\bar{\tau}^{(n)} \leftarrow \left(\bar{\bar{H}}_{\lambda\,:,F^{(n)}}\right)^{+} \tilde{\bar{E}}^{\mathrm{meas}}$
Step 2.6: $\bar{r}^{(n)} \leftarrow \tilde{\bar{E}}^{\mathrm{meas}} - \bar{\bar{H}}_\lambda \bar{\tau}^{(n)}$

Several comments about the proposed scheme are in order. At Step 1, several parameters are initialized: the sparsity level $\hat{k}$ is estimated by feeding the measurements to the trained ANN, and the parameter λ can be estimated from the noise level in the measurements; it makes $\bar{\bar{H}}_\lambda$ a full-rank matrix, which helps satisfy the RIP criterion. At Step 2.2, the residual from the last iteration is projected onto the model subspace to determine which components of the unknown model are still undetermined. At Step 2.3, the set $\Omega^{(n)}$ stores the $\hat{k}$ column indices of $\bar{\bar{H}}_\lambda$ that contribute maximally to the correlation computed in Step 2.2. At Step 2.4, the newly identified support set $\Omega^{(n)}$ is merged with the support set $\mathrm{supp}(\bar{\tau}^{(n-1)})$ from the last iteration, eliminating any repetitions among the support elements. At Step 2.5, the solution coefficients are estimated by solving a least-squares problem over the merged support set $F^{(n)}$.
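The iteration above can be exercised on a generic sparse linear system. The sketch below implements textbook CoSaMP in the form given by Needell and Tropp [3] (with the standard selection of 2k candidates and a pruning step after the least-squares solve); it is a runnable stand-in, not the paper's exact variant, which additionally operates on the Tikhonov-regularized operator H_λ and uses the ANN-supplied k̂. The test problem (random Gaussian matrix, which satisfies the RIP with high probability) is an illustrative assumption.

```python
# Sketch: textbook CoSaMP (Needell & Tropp) for y = A x with k-sparse x.
import numpy as np

def cosamp(A, y, k, n_iter=20):
    M, N = A.shape
    x = np.zeros(N)
    r = y.copy()
    for _ in range(n_iter):
        proxy = np.abs(A.conj().T @ r)              # Step 2.2: correlate
        omega = np.argsort(proxy)[-2 * k:]          # Step 2.3: 2k candidates
        T = np.union1d(omega, np.flatnonzero(x))    # Step 2.4: merge supports
        z, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)  # Step 2.5: LS fit
        x = np.zeros(N)
        keep = np.argsort(np.abs(z))[-k:]           # prune to k largest
        x[T[keep]] = z[keep]
        r = y - A @ x                               # Step 2.6: residual
        if np.linalg.norm(r) < 1e-10:               # halting criterion
            break
    return x

# Toy recovery problem (sizes are illustrative assumptions).
rng = np.random.default_rng(3)
M, N, k = 50, 120, 4
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
idx = rng.choice(N, k, replace=False)
x_true[idx] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)
y = A @ x_true
x_hat = cosamp(A, y, k)
```

Note that the least-squares solve involves only the |F| selected columns of A, which is the source of the computational savings discussed in the text.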
It should be noted here that $\bar{\bar{H}}_{\lambda\,:,F^{(n)}}$ contains only those columns of $\bar{\bar{H}}_\lambda$ whose indices are in the merged support set $F^{(n)}$. This significantly reduces the computational cost compared to solving the least-squares problem involving the whole matrix. Finally, the residual is updated so that it reflects only the part of the unknowns that has not yet been estimated. This process continues until a specified halting criterion is met: the algorithm terminates when the data misfit $\|\bar{E}^{\mathrm{meas}} - \bar{\bar{H}}\bar{\tau}\| \le \epsilon$, or when the residual does not change significantly between successive iterations, i.e., $\|\bar{r}^{(n)} - \bar{r}^{(n-1)}\| / \|\bar{r}^{(n)}\|$ falls below a small tolerance.

III. NUMERICAL RESULTS
This section demonstrates the accuracy and efficiency of the proposed scheme via numerical experiments. First, (2) with $\bar{\tau}^{\mathrm{ref}}$ is solved for $\bar{E}^{\mathrm{sca}}_i$; Gaussian noise is then added to the result to yield $\bar{E}^{\mathrm{meas}}_i$. Here, $\{\bar{\tau}^{\mathrm{ref}}\}_p = \tau^{\mathrm{ref}}(\mathbf{r}_p)$, $p = 1, \ldots, N$, are the samples of the actual contrast $\tau^{\mathrm{ref}}(\mathbf{r})$ being reconstructed. Three different EM inversion schemes are compared: (i) the FTB-OMP algorithm [7], (ii) the first-order Born approximation [14] with soft thresholding [10], and (iii) the algorithm proposed in this work. For all simulations, the quality of reconstruction is measured using

$$\mathrm{err}_n = \frac{\left\|\bar{\tau}_n - \bar{\tau}^{\mathrm{ref}}\right\|_2}{\left\|\bar{\tau}^{\mathrm{ref}}\right\|_2} \quad (7)$$

where $\bar{\tau}_n$ stores the samples of the contrast reconstructed at convergence.

[Fig. 3: (a) Original investigation domain and the transmitter/receiver configuration. (b) Reconstructed image using CoSaMP under measurement noise. (c) Relative mean square error in the dielectric profile reconstructed using FTB-OMP and CoSaMP vs. the level of measurement noise (SNR in dB).]

A. Closely Spaced Point-Like Targets
The first example is reproduced from a recent article [7], which demonstrates the reconstruction of closely spaced point-like targets for several SNR values. The investigation domain is extremely sparse and is discretized using square cells, surrounded by a set of transmitters and receivers. It is clear from Fig. 3 that the proposed CoSaMP algorithm yields much higher reconstruction accuracy than the FTB-OMP algorithm over the range of SNR values, and moreover the reconstructed image is sharper and more accurate. In this example, k̂ is not estimated using the ANN but is provided directly to the CoSaMP algorithm. This is to demonstrate that the CoSaMP algorithm indeed outperforms other GPAs, which is why it is adopted in this work.

B. Closely Spaced Dielectric Cylinders
The second example demonstrates the reconstruction of closely located dielectric cylinders (i.e., multiply connected objects) using the proposed scheme. The electrical dimension of the investigation domain in Fig. 4(a) is about λ × λ, and it is discretized using square cells, surrounded by transmitter and receiver pairs. In the reference profile k = 72, and the ANN estimated it exactly. The reconstructed images using CoSaMP and the first-order Born approximation are shown in Fig. 4(c) and Fig. 4(e), respectively. Clearly, CoSaMP produced a sharper and more accurate image with a lower relative error. Since a maximum error of a few pixels is observed over the entire testing set, and even though k̂ = k for this example, the black curve in Fig. 4(g) shows the accuracy of the proposed scheme over the range of errors in the estimate.

[Fig. 4: (a)-(b) Investigation domains with the two-pulse and Austria-shaped scatterers (represented by τ̄^ref), respectively, and the transmitter and receiver locations. (c)-(e) Reconstruction of the two pulses obtained by the Born approximation with thresholding and the proposed CoSaMP algorithm, respectively. (d)-(f) Solutions for the Austria profile obtained using the Born approximation with thresholding and the proposed CoSaMP algorithm, respectively. (g) Reconstruction error err_n versus the sparsity estimate k̂ for both scatterers, using CoSaMP.]

C. Austria
The third example is the well-known Austria profile, Fig. 4(b). All simulation parameters except the discretization size are identical to those of the second example. In the reference profile k = 66, and the ANN estimated it exactly. The reconstructed images using CoSaMP and the first-order Born approximation are shown in Fig. 4(d) and Fig. 4(f), respectively. Clearly, CoSaMP produced a sparser and sharper image with a lower relative error. Since a maximum error of a few pixels is observed over all the testing data synthesized using variations of Austria, the red curve in Fig. 4(g) shows the accuracy of the proposed scheme over the range of errors in the estimate.

D. Electrically Large Objects
To demonstrate that the reconstruction efficiency of the proposed algorithm is not limited to electrically very small targets, such as those presented in the earlier examples, Fig. 5(a)-(b) presents reconstructed images of a multi-layered cylinder, whose diameter is on the order of a wavelength, and of an L-shaped object, respectively. The investigation domains in Fig. 5(a)-(b) are discretized using square cells, surrounded by transmitter and receiver pairs. The reconstructed images obtained using CoSaMP, shown in Fig. 5(c)-(d), are sharper and more accurate, with low relative errors for both the multi-layered cylinder and the L-shaped object.

[Fig. 5: Reference contrast profiles (represented by τ̄^ref) and the transmitter and receiver locations for (a) a coated cylinder and (b) an L-shaped scatterer. Reconstructed images obtained using CoSaMP for (c) the coated cylinder and (d) the L-shaped scatterer.]

IV. CONCLUSION
An efficient and robust greedy-pursuit-based framework is proposed for sparse electromagnetic imaging. To enable CoSaMP for EM imaging applications, a second-norm Tikhonov-type parameter is added to the diagonal entries of the scattering matrix such that the RIP criterion is satisfied. A simple ANN-based approach is proposed to estimate the number of non-zeros to be reconstructed, which is a crucial input parameter for this class of greedy algorithms. Numerical results demonstrate that the images produced by the proposed framework, for a range of SNR and contrast levels, are sharper and more accurate than those produced by the first-order Born approximation. The solutions are sufficiently accurate for target localization applications. It is envisioned that, for challenging problems, the proposed scheme could provide a good initial guess instead of an all-zero initialization; the analysis of this approach is currently underway.

REFERENCES
[1] E. J. Candes and M. B. Wakin, "An introduction to compressive sampling,"
IEEE Signal Process. Mag., vol. 25, no. 2, pp. 21–30, 2008.
[2] R. G. Baraniuk, "More is less: signal processing and the data deluge," Science, vol. 331, no. 6018, pp. 717–719, 2011.
[3] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Appl. Comput. Harmon. Anal., vol. 26, no. 3, pp. 301–321, 2009.
[4] D. Needell and R. Vershynin, "Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit," IEEE J. Sel. Top. Signal Process., vol. 4, no. 2, pp. 310–316, 2010.
[5] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
[6] L. Pan, X. Chen, and S. P. Yeo, "A compressive-sensing-based phaseless imaging method for point-like dielectric objects," IEEE Trans. Antennas Propag., vol. 60, no. 11, pp. 5472–5475, 2012.
[7] R. V. Senyuva, O. Ozdemir, G. K. Kurt, and E. Anarim, "Electromagnetic imaging of closely spaced objects using matching pursuit based approaches," IEEE Antennas Wirel. Propag. Lett., vol. 15, pp. 1179–1182, 2015.
[8] T. C. Ye and S. Y. Lee, "Non-iterative exact inverse scattering using simultaneous orthogonal matching pursuit (S-OMP)," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), 2008, pp. 2457–2460.
[9] I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Commun. Pure Appl. Math., vol. 57, no. 11, pp. 1413–1457, 2004.
[10] A. Desmal and H. Bagci, "Shrinkage-thresholding enhanced Born iterative method for solving 2D inverse electromagnetic scattering problem," IEEE Trans. Antennas Propag., vol. 62, no. 7, pp. 3878–3884, 2014.
[11] L. Li et al., "DeepNIS: Deep neural network for nonlinear electromagnetic inverse scattering," IEEE Trans. Antennas Propag., vol. 67, no. 3, pp. 1819–1825, 2018.
[12] Y. Sun, Z. Xia, and U. S. Kamilov, "Efficient and accurate inversion of multiple scattering with deep learning," Opt. Express, vol. 26, no. 11, pp. 14678–14688, 2018.
[13] M. Pastorino, Microwave Imaging. Wiley, 2010.
[14] M. Born and E. Wolf, Principles of Optics. Cambridge University Press, 1999.