A fast methodology for large-scale focusing inversion of gravity and magnetic data using the structured model matrix and the 2D fast Fourier transform
Rosemary A. Renaut
School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ 85287. [email protected]
Jarom D. Hogue
School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ 85287. [email protected]
Saeed Vatankhah
Institute of Geophysics, University of Tehran, Tehran, Iran; Hubei Subsurface Multi-scale Imaging Key Laboratory, Institute of Geophysics and Geomatics, China University of Geosciences, Wuhan, China. [email protected]
Abstract.
Focusing inversion of potential field data for the recovery of sparse subsurface structures from surface measurement data on a uniform grid is discussed. For the uniform grid the model sensitivity matrices exhibit block Toeplitz Toeplitz block structure, by blocks for each depth layer of the subsurface. Then, through embedding in circulant matrices, all forward operations with the sensitivity matrix, or its transpose, are realized using the fast two dimensional Fourier transform. Simulations demonstrate that this fast inversion algorithm can be implemented on standard desktop computers with sufficient memory for storage of volumes up to size n ≈ 1M. The linear systems of equations arising in the focusing inversion algorithm are solved using either Golub-Kahan bidiagonalization or randomized singular value decomposition algorithms, in which all matrix operations with the sensitivity matrix are implemented using the fast Fourier transform. These two algorithms are contrasted for efficiency for large-scale problems with respect to the sizes of the projected subspaces adopted for the solutions of the linear systems. The presented results confirm earlier studies that the randomized algorithms are to be preferred for the inversion of gravity data, and that it is sufficient to use projected spaces of size approximately m/8 for data sets of size m. In contrast, the Golub-Kahan bidiagonalization leads to more efficient implementations for the inversion of magnetic data sets, and it is again sufficient to use projected spaces of size approximately m/8. Moreover, it is sufficient to use projected spaces of size m/20 when m is large, m ≈ n ≈ 1M. Simulations support the presented conclusions and are verified on the inversion of a practical magnetic data set that is obtained over the Wuskwatim Lake region in Manitoba, Canada.

1991 Mathematics Subject Classification.
MSC: 65F10,
Key words and phrases.
Gravity and magnetic anomalies and Earth structure; Inverse theory; Numerical approximations and analysis; fast Fourier transform.

April 30, 2020

1. Introduction
The determination of the subsurface structures from measured potential field data is important for many practical applications concerned with oil and gas exploration, mining, and regional investigations, Blakely [1995]; Nabighian et al. [2005]. There are many approaches that can be considered for the inversion of potential field data sets. These range from techniques that directly use the inversion of a forward model described by a sensitivity matrix for gravity and magnetic potential field data, as in, for example, Boulanger and Chouteau [2001]; Farquharson [2008]; Lelièvre and Oldenburg [2006]; Li and Oldenburg [1996, 1998]; Pilkington [1997]; Silva and Barbosa [2006] and Portniaguine and Zhdanov [1999]. Other approaches avoid the problem with the storage and generation of a large sensitivity matrix by employing alternative approaches, as in Cox et al. [2010]; Uieda and Barbosa [2012]; Vatankhah et al. [2019]. Of those that do handle the sensitivity matrix, some techniques to avoid the large scale challenge include wavelet and compression techniques, Li and Oldenburg [2003]; Portniaguine and Zhdanov [2002] and Voronin et al. [2015]. Of interest here is the development of an approach that takes advantage of the structure that can be realized for the sensitivity matrix, which then enables the use of the fast Fourier transform for fast matrix operations and avoids the high storage overhead of the matrix.

The efficient inversion of three dimensional gravity data using the 2D fast Fourier transform (2DFFT) was presented in the Master's thesis of [Bruun and Nielsen, 2007]. There, it was observed that the sensitivity matrix exhibits a block Toeplitz Toeplitz block (BTTB) structure provided that the data measurement positions are uniform and carefully related to the grid defining the volume discretization. It is this structure which facilitates the use of the 2DFFT via the embedding of the required kernel entries that define the sensitivity matrix within a block Circulant Circulant block (BCCB) matrix, as explained in Chan and Jin [2007]; Vogel [2002]. Then, Zhang and Wong [2015] used the
BTTB structure for fast computations with the sensitivity matrix, and employed this within an algorithm for the inversion of gravity data using a smoothing regularization, allowing for variable heights of the individual depth layers in the domain. They also applied optimal preconditioning for the BTTB matrices using the approach of Chan and Jin [2007]. Their approach was then optimized by Chen and Liu [2018], but only for efficient forward gravity modeling and with a slight modification in the way that the matrices for each depth layer of the domain are defined using the approximation of the forward integral equation. In particular, Zhang and Wong [2015] use a multilayer approximation of the gravity kernel, rather than the derivation of the kernel integral in Li and Chouteau [1998]. They noted, however, that their approach is subject to greater potential for error on coarse-grained domains because it does not use the exact kernel integral developed by Li and Chouteau [1998]. Bruun and Nielsen [2007] also developed an algorithm that is even more efficient in memory and computation than the use of the BTTB for each depth layer by using an upward continuation method to deal with the issue that measured data are only provided at the surface of the domain. They concluded that this was not suitable for practical problems. Finally, they also considered the interpolation of data not on the uniform grid to the uniform grid, hence removing the restriction on the uniform placement of measurement stations on the surface, but potentially introducing some error due to the interpolation. On the other hand, their study did not include Tikhonov stabilization for the solution of the linear systems, and hence did not implement state-of-the-art approaches for resolving complex structures with general L_p norm regularizers (0 ≤ p ≤ 2). Standard techniques for the inclusion of depth weighting, and the imposition of constraint conditions, were not considered. The focus of this work is, therefore, a demonstration and validation of efficient solvers that are more general and can be effectively employed for the independent focusing inversion of both large scale gravity and magnetic potential field data sets. It should be noted, moreover, that the approach can be applied also for domains with padding, which is of potential benefit for structure identification near the boundaries of the analyzed volume.

First, we note that the fast computation of geophysics kernel models using the fast Fourier transform (FFT) has already been considered in a number of different contexts. These include calculation using the Fourier domain as in Li et al. [2018]; Zhao et al. [2018], and also by Pilkington [1997] in conjunction with the conjugate gradient method for solving the magnetic susceptibility inverse problem. Fast forward modeling of the magnetic kernel on an undulated surface, combined with spline interpolation of the surface data, was also suggested by Li et al. [2018] using an implementation of the model in the wave number domain.
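To make the circulant embedding concrete, the following sketch (an illustrative example with hypothetical names, not the implementation used in the paper) multiplies the BTTB matrix defined by a single depth-layer kernel with a vector: the BTTB matrix is embedded in a BCCB matrix, whose action is diagonalized by the 2D FFT, so the product costs O(N log N) instead of O(N²).

```python
import numpy as np

def bttb_matvec(kernel, x):
    """Apply the BTTB matrix T to vec(x) via embedding in a BCCB matrix.

    kernel has shape (2*n1 - 1, 2*n2 - 1); entry kernel[n1-1+di, n2-1+dj]
    is the matrix element coupling grid points with row/column offset
    (di, dj). x has shape (n1, n2). Returns (T @ vec(x)) reshaped (n1, n2).
    """
    n1, n2 = x.shape
    # First "column" of the BCCB embedding: wrap the kernel periodically so
    # that offset (di, dj) appears at index (di mod N1, dj mod N2).
    c = np.roll(kernel, (-(n1 - 1), -(n2 - 1)), axis=(0, 1))
    xp = np.zeros_like(c)
    xp[:n1, :n2] = x                 # zero-pad x to the embedding size
    # Circular convolution via the 2D FFT; the leading (n1, n2) block of
    # the result is exactly the BTTB matrix-vector product.
    y = np.fft.ifft2(np.fft.fft2(c) * np.fft.fft2(xp)).real
    return y[:n1, :n2]
```

Only the kernel, never the full matrix, is stored, which is the source of the memory savings exploited throughout the paper.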
Further, fast forward and high accuracy modeling of the gravity kernel using the Gauss-FFT was discussed by Zhao et al. [2018]. Moreover, the derivation of the forward modeling operators that yield the BTTB structure for the magnetic and gravity kernels, in combination with domain padding and the staggered placement of measurement stations with respect to the domain prisms at the surface, was carefully presented in Hogue et al. [2019]. Hence, here, we only present the necessary details concerning the development of the forward modeling approach.

Associated with the development of a focusing inversion algorithm is the choice of solver within the inversion algorithm, the choice of regularizer for focusing the subsurface structures, and a decision on the determination of suitable regularization parameters. With respect to the solver, small scale problems can be solved using the full singular value decomposition (
SVD) of the sensitivity matrix, which is not feasible for the large scale. Moreover, the use of the SVD for focusing inversion has been well-investigated in the literature, see for example [Vatankhah et al., 2014, 2015], while choices and implementation details for focusing inversion are reviewed in [Vatankhah et al., 2020b]. Furthermore, methods that yield useful approximations of the SVD, hence enabling automatic but efficient techniques for the choice of the regularization parameters, have also been discussed in [Renaut et al., 2017; Vatankhah et al., 2017] when considered with iterative Krylov methods based on the Golub-Kahan Bidiagonalization (
GKB) algorithm, [Paige and Saunders, 1982], and in Vatankhah et al. [2018, 2020a] when adopted using the randomized singular value decomposition (RSVD), [Halko et al., 2011]. Recommendations for the application of the RSVD with power iteration, and the sizes of the projected spaces to be used for both GKB and RSVD, were presented, but only within the context of problems that can be solved without the use of the 2DFFT. Thus, a complete validation of these algorithms for the solution of the large scale focusing inversion problem, with considerations contrasting the effectiveness of these algorithms in the large scale, is still important, and is addressed here.

We comment, further, that there is an alternative approach for the comparison of
RSVD and
GKB algorithms, which was discussed by Luiken and van Leeuwen [2020]. The focus there, on the other hand, was on the effective determination of both the size of the projected space and the determination of the optimal regularization parameter, using these algorithms. Their
RSVD algorithm used the range finder suggested in [Halko et al., 2011, Algorithm 4], rather than the power iteration. They concluded with their one rather small example, for an under-determined sensitivity matrix of size 400 by 2500, that this was not successful. The test for the GKB approach was successful for this problem, but it is still rather small scale as compared to the problems considered here. Instead, as stated, we return to the problem of assessing a suitable size of the projected space to be used for large scale inversion of magnetic and gravity data, using the techniques that provide an approximate SVD and hence efficient and automatic estimation of the regularization parameter concurrently with solving large scale problems. We use the method of Unbiased Predictive Risk Estimation (UPRE) for automatically estimating the regularization parameters, as extensively discussed elsewhere, Vogel [2002].
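For reference, a minimal version of the UPRE parameter choice can be written down directly from an SVD; this is a generic sketch with invented names (grid search over candidate values, constant term dropped) of the standard estimator described in Vogel [2002], not the truncated variant used later in the paper.

```python
import numpy as np

def upre_alpha(s, Utd, sigma2, alphas):
    """Return the alpha from `alphas` minimizing the UPRE function for
    Tikhonov regularization: s holds the singular values, Utd = U^T d_obs
    the projected data, sigma2 the noise variance. The constant term
    -m*sigma2 is dropped since it does not change the minimizer."""
    best_u, best_a = np.inf, None
    for alpha in alphas:
        phi = s**2 / (s**2 + alpha**2)          # Tikhonov filter factors
        residual = np.sum(((1.0 - phi) * Utd) ** 2)
        trace = np.sum(phi)                     # trace of the influence matrix
        u = residual + 2.0 * sigma2 * trace
        if u < best_u:
            best_u, best_a = u, alpha
    return best_a
```

The residual term grows with alpha while the trace term shrinks, so the estimate balances data fit against the effective number of retained spectral components.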
Overview of main scientific contributions.
This work provides a comprehensive study of the application of the 2DFFT in focusing inversion algorithms for gravity and magnetic potential field data sets. Specifically, our main contributions are as follows. (i) A detailed review of the mechanics for the inversion of potential field data using focusing inversion algorithms based on the iteratively regularized least squares algorithm in conjunction with the solution of linear systems using GKB or RSVD algorithms; (ii) The extension of these approaches for the use of the 2DFFT for all forward multiplications with the sensitivity matrix, or its transpose; (iii) Comparison of the computational cost when using the 2DFFT as compared to using the sensitivity matrix, or its transpose, directly, when implemented within the inversion algorithm, and dependent on the sizes of the projected spaces adopted for the inversion; (iv) Presentation of numerical experiments that confirm that the
RSVD algorithm is more efficient than the
GKB for the inversion of gravity data sets, for larger problems than previously considered; (v) A new comparison of the use of GKB as compared to RSVD for the inversion of magnetic data sets, showing that
GKB is to be preferred; (vi) Finally, all conclusions are confirmed by application on a practical data set, demonstrating that the methodology is suitable for focusing inversion of large scale data sets and can provide parameter reconstructions with more than 1M variables using a laptop computer.

The paper is organized as follows. In Section 2 we present the general methodology used for the independent inversion of gravity and magnetic potential field data. The BTTB details are reviewed in Section 2.1 and stabilized inversion is reviewed in Section 2.2. Details for the numerical solution of the inversion formulation are provided in Section 2.3 and the algorithms are in Section 2.4. The estimated computational cost of each algorithm, in terms of the number of floating point operations (flops), is given in Section 2.5. Numerical results applying the presented algorithms to synthetic and practical data are described in Section 3, with the details that apply to all computational implementations given in Section 3.1 and the generation of the synthetic data used in the simulations provided in Section 3.2. Results assessing the comparison of computational costs for one iteration of the algorithm for use with, and without, the 2DFFT are discussed in Section 3.3.1. The convergence of the 2DFFT-based algorithms for problems of increasing size is discussed in Section 3.3.2. Validating results for the inversion of real magnetic data obtained over a portion of the Wuskwatim Lake region in Manitoba, Canada are provided in Section 3.4, and conclusions in Section 4. Appendix A provides brief details on the implementation of the computations using the embedding of the BTTB matrix in the BCCB matrix and the 2DFFT, and supporting numerical evidence for the figures illustrating the results is provided in a number of tables in Appendix B.2.
2. Methodology

2.1. Forward Model and BTTB Structure.
We consider the inversion of measured potential field data d_obs that describes the response at the surface due to unknown subsurface model parameters m. The data and model parameters are connected via the forward model

(1) d_obs = G m,

where G is the sensitivity, or model, matrix. This linear relationship is obtained via the discretization of a Fredholm integral equation of the first kind,

(2) d(a, b, c) = ∫∫∫ h(a, b, c, x, y, z) ζ(x, y, z) dx dy dz,

where exact values d and m are the discretizations of the continuous functions d and ζ, respectively, and G in (1) provides the discrete approximation of the integrals of the kernel function h over the volume cells. For the specific kernels associated with gravity and magnetic data, assuming for magnetic data that there is no remanent magnetization or self-magnetization, h is spatially invariant in all dimensions, h(a, b, c, x, y, z) = h(x − a, y − b, z − c), and (2) describes a convolution operation.

Using the formulation of the integral of the kernel as derived by Haáz [1953]; Li and Chouteau [1998] for the gravity kernel, and by Rao and Babu [1991] for the magnetic kernel, the sensitivity matrix G decomposes by column blocks as

(3) G = [G^(1), . . . , G^(n_z)],

where block G^(r) is for the r-th depth layer. The individual entries in G correspond to the projections of the contributions from prisms c_pqr in the volume to measurement stations, denoted by s_ij, at or near the surface. The configurations of the volume and measurement domains are illustrated in Figure 1. Here it is assumed that the measurement stations are all on the surface with coordinates (a_i, b_j,
0) in (x, y, z). Prism c_pqr of the domain has dimensions Δx, Δy and Δz in the x, y and z directions, with coordinates that are integer multiples of Δx, Δy and Δz, and is indexed by 1 ≤ p ≤ s_x + p_xL + p_xR = n_x, 1 ≤ q ≤ s_y + p_yL + p_yR = n_y, and 1 ≤ r ≤ n_z. This indexing assumes that there is padding around the domain in the x and y directions by additional borders of p_xL, p_xR, p_yL and p_yR cells. The distinction between the padded and unpadded portions of the domain is that there are no measurement stations in the padded regions. This yields G ∈ R^{m×n}, where m = s_x s_y and n = n_x n_y n_z, and each G^(r) ∈ R^{m×n_r}, where n_r = n_x n_y.

In (3), m ≤ n_r ≪ n and the system is drastically underdetermined for any reasonable discretization of the depth (z) dimension of the volume. Moreover, when n is large the use of the matrix G requires both significant computational cost for the evaluation of matrix-matrix operations and significant storage. Without taking account of structure in G, and assuming that a dot product of real vectors of length n requires 2n floating point operations (flops), calculating GH, for H ∈ R^{n×p}, takes O(2nmp) flops, and storage of matrix G uses approximately 8mn × 10⁻⁹ GB. (We assume one double floating point number requires 8 bytes and note 1 byte is 10⁻⁹ GB.) For example, suppose p = m = n/8 and n = 10⁶; then storage of G requires approximately 1000GB, and the single matrix multiplication uses ≈ 3 × 10¹⁶ flops, without any consideration of additional software and system overheads. These observations limit the ability to do large scale stabilized inversion of potential field data in real time using current desktop computers, or laptops, without taking into account any further information on the structure of G. This is the topic of the further discussion here.

Figure 1. The configuration of prism c_pqr, 1 ≤ p ≤ s_x + p_xL + p_xR = n_x, 1 ≤ q ≤ s_y + p_yL + p_yR = n_y, 1 ≤ r ≤ n_z, in the volume relative to a station on the surface at location s_ij = (a_ij, b_ij), 1 ≤ i ≤ s_x, 1 ≤ j ≤ s_y. Here the stations are shown as located at the centers of the cells on the surface of the domain, and there are no measurements taken in the padded portion of the domain.

Bruun and Nielsen [2007] observed that the configuration of the locations of the stations in relation to the domain discretization is significant in generating G^(r) with structure that can be effectively utilized to improve the efficiency of operations with G and to reduce the storage requirements. Assuming that the stations are always placed uniformly with respect to the domain prisms, and provided that the distances between stations are fixed in x and y, then matrix G^(r) for the gravity kernel has symmetric BTTB structure (
SBTTB). Then, it is possible to embed G^(r) in a BCCB matrix and matrix operations can be efficiently performed using the 2DFFT, as explained in [Vogel, 2002]. This structure was also discussed and then utilized for efficient forward operations with G in Chen and Liu [2018]. There it was assumed that the stations are placed symmetrically with respect to the domain coordinates, as illustrated for the staggered configuration in Figure 1 with the stations at the centers of the cells on the surface. With respect to the magnetic kernel, Bruun and Nielsen [2007] demonstrated that G^(r) can also exhibit BTTB structure, but they did not use the standard computation of the magnetic kernel integral as described in Rao and Babu [1991]. On the other hand, a thorough derivation of the BTTB structure for G^(r) using the approach of Rao and Babu [1991] has been given in Hogue et al. [2019]. That analysis also considered for the first time the use of padding for the domain and the modifications required in the generation of the required entries in the matrix G^(r). It should be noted, as shown in Hogue et al. [2019], that regardless of whether operations with G are implemented using the 2DFFT or by direct multiplication, it is far faster to generate G taking advantage of the BTTB structure. Here, we are concerned with efficient stabilized inversion of potential field data using this
BTTB structure, and thus refer to Appendix A for a brief discussion of the implementation of the needed operations using G when implemented using the 2DFFT, and point to Hogue et al. [2019] for the details.

2.2. Stabilized Inversion.
The solution of (1) is an ill-posed problem; even if G is well-conditioned the problem is underdetermined because m ≪ n. There is a considerable literature on the solution of this ill-posed problem and we refer in particular to Vatankhah et al. [2020b] for a relevant overview, and specifically the use of the unifying framework for determining an acceptable solution of (1) by stabilization. Briefly, here we estimate m* as the minimizer of the nonlinear objective function Φ_α(m) subject to the bound constraints m_min ≤ m ≤ m_max,

(4) m* = argmin_{m_min ≤ m ≤ m_max} {Φ_α(m)} = argmin_{m_min ≤ m ≤ m_max} {Φ_d(m) + α² Φ_S(m)}.

Here α is a regularization parameter which trades off the relative weighting of the two terms Φ_d(m) and Φ_S(m), which are respectively the weighted data misfit and stabilizer, given by

(5) Φ_d(m) = ‖W_d(G m − d_obs)‖², and Φ_S(m) = ‖W_h W_z W_L D(m − m_apr)‖².

The weighting matrices W_d, W_h, W_z and W_L are all diagonal, with dimensions that depend on the size of D, which can be used to yield an approximation for a derivative. Here, while we assume throughout that D = I_{n×n} and refer to [Vatankhah et al., 2020b, Eq. (5)] for the modification in the weighting matrices that is required for derivative approximations using D, we present this general formulation in order to place the work in the context of generalized Tikhonov inversion. We also use m_apr = 0, but when initial estimates for the parameters are available, perhaps from physical measurements, note that these can be incorporated into m_apr as an initial estimate for m.
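As an illustration of (4)-(5), with all weights diagonal and D = I the objective can be evaluated with vectors only; the following is a sketch with invented names, using α² as the trade-off weighting on the stabilizer as in the iteratively regularized formulations this approach builds on.

```python
import numpy as np

def focusing_objective(m, G, d_obs, sigma, wz, wL, m_apr, alpha):
    """Evaluate Phi_d(m) + alpha^2 * Phi_S(m) with diagonal weights stored
    as vectors: sigma holds the noise standard deviations (W_d = diag(1/sigma)),
    wz the depth weights and wL the focusing weights; W_h = I and D = I."""
    phi_d = np.sum(((G @ m - d_obs) / sigma) ** 2)   # ||W_d (G m - d_obs)||^2
    phi_s = np.sum((wz * wL * (m - m_apr)) ** 2)     # ||W_z W_L (m - m_apr)||^2
    return phi_d + alpha**2 * phi_s
```

Because all weights are diagonal, the cost is dominated by the single product G @ m, which is exactly the operation the BTTB/2DFFT machinery accelerates.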
The weighting matrix W_d has entries (W_d)_ii = 1/σ_i, where we suppose that the measured data are given by d_obs = d_exact + η, where d_exact is the exact but unknown data, and η is a noise vector drawn from uncorrelated Gaussian data with variance components σ_i². Whereas the stabilizer matrix W_L in W = W_h W_z W_L depends on m, W_h and W_z are constant hard constraint and constant depth weighting matrices. Although W_h can be used to impose specific known values for entries of m, as discussed in [Boulanger and Chouteau, 2001], we will use W_h = I_{n×n}. Depth weighting W_z is routinely used in the context of potential field inversion and is imposed to counteract the natural decay of the kernel with depth. With the same column structure as for G, W_z = blockdiag(W_z(1), . . . , W_z(n_z)), where W_z(r) = (0.5(z_r + z_{r−1}))^{−β} I_{n_r×n_r}, 0.5(z_r + z_{r−1}) is the average depth for depth level r, and β is a parameter that depends on the data set, [Li and Oldenburg, 1996]. Now, the diagonal matrix W_L depends on the parameter vector m via its i-th entry, given by

(6) (W_L)_ii = ((m_i − (m_apr)_i)² + ε²)^{(λ−2)/4}, i = 1 . . . n,

where the parameter λ determines the form of the stabilization, and the focusing parameter satisfies 0 < ε ≪ 1. We take λ = 1, which yields an approximation to the L₁ norm as described in [Wohlberg and Rodríguez, 2007], and is preferred for the inversion of potential field data, although we note that the implementation makes it easy to switch to λ = 0, yielding a solution which is compact, or λ = 2 for a smooth solution. Based on prior studies we use ε = 1e−9, [Vatankhah et al., 2017]. We use I_{n×n} to denote the identity matrix of size n × n.

2.3. Numerical Solution.
We first reiterate that (4) is only nonlinear in m through the definition of W_L. Supposing that W_L is constant and that null(W_d G) ∩ null(W) = {0}, then the solution m* of (4) without the bound constraints is given analytically by

(7) m = m_apr + (Gᵀ W_dᵀ W_d G + α² Wᵀ W)⁻¹ Gᵀ W_dᵀ W_d (d_obs − G m_apr).

Equivalently, assuming that W is invertible, and defining G̃ = W_d G W⁻¹, r̃ = W_d(d_obs − G m_apr) and y = m − m_apr, then y solves the normal equations

(8) y = W⁻¹ (G̃ᵀ G̃ + α² I)⁻¹ G̃ᵀ r̃,

and m* can be found by restricting y + m_apr to lie within the bound constraints.

Now (8) can be used to obtain the iterative solution for (4) using iteratively reweighted least squares (IRLS) as described in Vatankhah et al. [2020b]. Specifically, we use superscript k to indicate a variable at iteration k, and replace α by α^(k), W_L by the matrix W_L^(k) with entries (W_L^(k))_ii = ((m_i^(k−1) − m_i^(k−2))² + ε²)^{(λ−2)/4}, and m − m_apr by m − m^(k−1), initialized with W_L^(1) = I and m^(0) = m_apr, respectively. Then y^(k) is found as the solution of the normal equations (8), and m^(k) is the restriction of y^(k) + m^(k−1) to the bound constraints. This use of the IRLS algorithm for the incorporation of the stabilization term Φ_S contrasts with the implementation discussed in [Zhang and Wong, 2015] for the inversion of potential field gravity data. In their presentation, they considered the solution of the general smoothing Tikhonov formulation described by (8) for a general fixed smoothing operator D replacing W_L. For the solver, they used the re-weighted regularized conjugate gradient solver for iterations to improve m^(k) from m_apr. They also included a penalty function to impose positivity in m^(k), depth weighting to prevent the accumulation of the solution at the surface, and adjustment of α with iteration k to encourage decrease in the data fit term.
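A stripped-down version of this IRLS loop makes the update explicit; this is a sketch only (identity weights apart from W_L, m_apr = 0, no bound constraints, fixed α, invented names), not the paper's full algorithm.

```python
import numpy as np

def irls_focusing(G, d_obs, alpha, lam=1.0, eps=1e-9, iters=5):
    """Sketch of the IRLS update: at each iteration the focusing weights
    (W_L)_ii = ((m_i^{(k-1)})^2 + eps^2)^{(lam-2)/4} are rebuilt from the
    previous iterate (here W_d = W_h = W_z = I), and the Tikhonov normal
    equations (8) are solved for y = W (m - m_apr)."""
    n = G.shape[1]
    mk = np.zeros(n)                                   # m^(0) = m_apr = 0
    for _ in range(iters):
        w = (mk**2 + eps**2) ** ((lam - 2.0) / 4.0)    # diagonal of W_L
        Gt = G / w                                     # G tilde = G W^{-1}
        y = np.linalg.solve(Gt.T @ Gt + alpha**2 * np.eye(n), Gt.T @ d_obs)
        mk = y / w                                     # m = W^{-1} y
    return mk
```

With lam = 2 the weights are constant and a single iteration reduces to ordinary Tikhonov regularization; lam = 1 progressively down-weights the penalty on entries that grow, which is the focusing effect.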
Moreover, they showed that it is possible to pick approximations D which also exhibit BTTB structure for each depth layer, so that Dx can also be implemented by layer using the 2DFFT. Although we do not consider the unifying stabilization framework here with general operator D as described in Vatankhah et al. [2020b], it is a topic for future study, and therefore a further extension of the work of Zhang and Wong [2015] for the more general stabilizers. In the earlier work on the use of the BTTB structure arising in potential field inversion, Bruun and Nielsen [2007] investigated the use of a truncated
SVD and the conjugate gradient least squares method for the minimization of the data fit term without regularization. They also considered the direct solution of the constant Tikhonov function with W_L = I and a fixed regularization parameter α, for which the solution uses the filtered SVD in the small scale. Here, not only do we use the unifying stabilization framework, but we also estimate α at each iteration of the IRLS algorithm. The
IRLS algorithm is implemented with two different solvers that yield effective approximations of a truncated SVD. One is based on the randomized singular value decomposition (RSVD), and the second uses the Golub-Kahan Bidiagonalization (GKB).

2.4.
Algorithmic Details.
The
IRLS algorithm relies on the use of an appropriate solver for finding y^(k) as the solution of the normal equations (8) for each update k, and a method for estimating the regularization parameter α^(k). While any suitable computational scheme can be used to update m^(k), the determination of α^(k) automatically can be challenging. But if the solution technique generates the SVD for G̃^(k), or an approximation to the SVD, such as by use of the RSVD or GKB factorization for G̃^(k), then there are many efficient techniques that can be used, such as the unbiased predictive risk estimator (UPRE) or generalized cross validation (GCV). The obtained estimate for α^(k) depends on the estimator used and there is extensive literature on the subject, e.g. Hansen [2010]. Thus, here, consistent with earlier studies on the use of the GKB and
RSVD for stabilized inversion, we use the
UPRE , denoted by U ( α ), for all iterations k >
1, and refer to Renaut et al. [2017] and Vatankhah et al. [2018,2020a] for the details on the
UPRE . The
GKB and
RSVD algorithms play, however, a larger role in the discussion and thus for clarity are given here as Algorithms 1 and 2, respectively. For the use of the
GKB we note that Algorithm 1 uses the factorization G̃ A_{t_p} = H_{t_p+1} B_{t_p}, where A_{t_p} ∈ R^{n×t_p} and H_{t_p+1} ∈ R^{m×(t_p+1)}. Steps 6 and 11 of Algorithm 1 apply the modified Gram-Schmidt re-orthogonalization to the columns of A_{t_p} and H_{t_p+1}, as is required to avoid the loss of column orthogonality. This factorization is then used in Step 15 to obtain the rank t_p approximate SVD given by G̃ ≈ (H_{t_p+1} U_{t_p}) Σ_{t_p} (A_{t_p} V_{t_p})ᵀ. The quality of this approximation depends on the conditioning of G̃, [Paige and Saunders, 1982]. In particular, the projected system of the GKB algorithm inherits the ill-conditioning of the original system, rather than just the dominant terms of the full
SVD expansion. Thus, the approximate singular values include dominant terms that are good approximations to the dominant singular values of the original system, as well as very small singular values that approximate the tail of the singular spectrum of the original system. The accuracy of the dominant terms increases quickly with increasing t_p, Paige and Saunders [1982]. Therefore, to effectively regularize the dominant spectral terms from the rank t_p approximation, in Step 16 we use the truncated UPRE that was introduced and discussed in Vatankhah et al. [2017]. Specifically, a suitable choice for α^(k) is found using the truncated SVD of B_{t_p} with t terms. Then, in Step 17, y^(k) is found using all terms in the expansion of B_{t_p}. The matrix Γ(α, Σ) in Step 17 is the diagonal matrix with entries σ_i/(σ_i² + α²). In our simulations we use t_p = floor(1.05 t), corresponding to a 5% increase in the size of the projected space. This contrasts to using just t terms and will include terms from the tail of the spectrum. Note, furthermore, that the top t terms from the projected space of size t_p > t will be more accurate estimates of the true dominant t terms than if obtained with t_p = t. Effectively, by using a 5% increase of t in the calculation of t_p, we assume that the first t terms from the t_p approximation provide good approximations of the dominant t spectral components of the original matrix G̃. We reiterate that the presented algorithm depends on the parameters t_p and t. At Step 16 in Algorithm 1, α^(k) is found using the projected space of size t, but the update for y in Step 17 uses the oversampled projected space of size t_p.
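For concreteness, the core of Algorithm 1 can be sketched in a few lines; this illustrative version (hypothetical names) uses a fixed α in place of the UPRE estimate and dense matrix products in place of the 2DFFT operators.

```python
import numpy as np

def gkb_solve(G, r, tp, alpha):
    """Golub-Kahan bidiagonalization G A = H B with re-orthogonalization,
    followed by a Tikhonov-filtered solution of the projected problem
    (cf. Algorithm 1). alpha is fixed here; the paper estimates it by UPRE,
    and products with G would use the 2DFFT in the large scale."""
    m, n = G.shape
    A = np.zeros((n, tp))
    H = np.zeros((m, tp + 1))
    B = np.zeros((tp + 1, tp))
    a = np.zeros(n)
    beta = np.linalg.norm(r)
    h = r / beta
    H[:, 0] = h
    for i in range(tp):
        b = G.T @ h - beta * a
        for j in range(i):                   # modified Gram-Schmidt on A
            b -= (A[:, j] @ b) * A[:, j]
        gamma = np.linalg.norm(b)
        a = b / gamma
        B[i, i] = gamma
        A[:, i] = a
        c = G @ a - gamma * h
        for j in range(i + 1):               # modified Gram-Schmidt on H
            c -= (H[:, j] @ c) * H[:, j]
        beta = np.linalg.norm(c)
        h = c / beta
        B[i + 1, i] = beta
        H[:, i + 1] = h
    # SVD of the small bidiagonal matrix, then the filtered solution
    # y = ||r|| A V Gamma(alpha, Sigma) U(1, :)^T.
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    z = (s / (s**2 + alpha**2)) * U[0, :]
    return np.linalg.norm(r) * (A @ (Vt.T @ z))
```

When t_p equals the full dimension, the projected solution coincides with the direct Tikhonov solution; the savings come from stopping at t_p much smaller than m.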
The results presented for the synthetic tests will demonstrate that this uniform choice for t_p is a suitable compromise between taking t_p too small, and contaminating the solutions by components from the less accurate approximations of the small components, and a reliable, but larger, choice for t_p that provides a good approximation of the dominant terms within reasonable computational cost.

The algorithm presented in Algorithm 2, denoted as RSVD, includes a single power iteration in Steps 3 to 6. Without the use of the power iteration in the
RSVD it is necessary to use larger projected systems in order to obtain a good approximation of the singular space of the original system, Halko et al. [2011]. Further, it was shown in [Vatankhah et al., 2020a] that when using
RSVD for potential field inversion, it is better to apply a power iteration.

Algorithm 1: Use the GKB algorithm for the factorization G̃ A_{t_p} = H_{t_p+1} B_{t_p} and obtain the solution y of (8).
Input: r̃ ∈ R^m, G̃ ∈ R^{m×n}, a target rank t and the size of the oversampled projected problem t_p, t < t_p ≪ m.
Output: α and y.
1: Set a = zeros(n, 1), B = sparse(zeros(t_p + 1, t_p)), H = zeros(m, t_p + 1), A = zeros(n, t_p);
2: Set β = ‖r̃‖, h = r̃/β, H(:, 1) = h;
3: for i = 1 : t_p do
4:   b = G̃ᵀ h − β a;
5:   for j = 1 : i − 1 do
6:     b = b − (A(:, j)ᵀ b) A(:, j) (modified Gram-Schmidt (MGS))
7:   end
8:   γ = ‖b‖, a = b/γ, B(i, i) = γ, A(:, i) = a;
9:   c = G̃ a − γ h;
10:  for j = 1 : i do
11:    c = c − (H(:, j)ᵀ c) H(:, j) (MGS)
12:  end
13:  β = ‖c‖, h = c/β, B(i + 1, i) = β, H(:, i + 1) = h;
14: end
15: SVD for sparse matrix: U_{t_p} Σ_{t_p} V_{t_p}ᵀ = svds(B, t_p);
16: Apply UPRE to find α using U_{t_p}(:, 1:t) and Σ_{t_p}(1:t, 1:t);
17: Solution y = ‖r̃‖ A_{t_p} V_{t_p} Γ(α, Σ_{t_p}) U_{t_p}(1, :)ᵀ;

Skipping the power iteration steps leads to a less accurate approximation of the dominant singular space. Moreover, the gain from taking more than one power iteration is insignificant as compared to the increased computational time required. As with the GKB, the
RSVD, with and without the power iteration, depends on two parameters t and t_p, where here t is the target rank and t_p is the size of the oversampled system, t_p > t. For given t and t_p the algorithm uses an eigendecomposition with t_p terms to find the SVD approximation of ˜G with t_p terms. Hence, the total projected space is of size t_p, the size of the oversampled system, which is then restricted to size t for estimating the approximation of ˜G.

It is clear that the RSVD and GKB algorithms provide approximations for the spectral expansion of ˜G, with the quality of this approximation dependent on both t and t_p, and hence the quality of the obtained solutions y^(k) at a given iteration is dependent on these choices for t and t_p. As noted, the GKB algorithm inherits the ill-conditioning of ˜G, but the RSVD approach provides the dominant terms and is not impacted by the tail of the spectrum. Thus, we may not expect to use the same choices for the pairs t and t_p for these algorithms. Vatankhah et al. [2020a] investigated the choices for t and t_p for both gravity and magnetic kernels. When using the RSVD with the single power iteration they showed that suitable choices (We note that using (Y + Y^T)/2 assures that the matrix is symmetric, which is important for the efficiency of eig.)

Algorithm 2: Use the
RSVD with one power iteration to compute an approximate SVD of ˜G and obtain the solution y of (8).
Input: ˜r ∈ R^m, ˜G ∈ R^{m×n}, a target matrix rank t and size of the oversampled projected problem t_p, t < t_p ≪ m.
Output: α and y.
1: Generate a Gaussian random matrix Ω ∈ R^{t_p×m};
2: Y = Ω ˜G ∈ R^{t_p×n};
3: [Q, ∼] = qr(Y^T, 0), Q ∈ R^{n×t_p}   (economic QR decomposition);
4: Y = ˜G Q ∈ R^{m×t_p};
5: [Q, ∼] = qr(Y, 0), Q ∈ R^{m×t_p};
6: Y = Q^T ˜G, Y ∈ R^{t_p×n};
7: [Q, ∼] = qr(Y^T, 0), Q ∈ R^{n×t_p};
8: B = ˜G Q ∈ R^{m×t_p};
9: Compute Y = B^T B ∈ R^{t_p×t_p};
10: Eigendecomposition of B^T B: [˜V, D] = eig((Y + Y^T)/2);
11: S = diag(sqrt(|real(D)|)), [S, indsort] = sort(S, 'descend');
12: ˜Σ_t = diag(S(1:t)), ˜V = ˜V(:, indsort(1:t)), ˜U = ˜V ./ (S(1:t)^T);
13: Apply UPRE to find α using ˜U, ˜Σ_t, and B^T ˜r;
14: Solution y = Q ˜V Γ(α, ˜Σ_t) ˜U^T (B^T ˜r);
Note: if we form ˜V_t = Q ˜V and ˜U_t = B ˜V ˜Σ_t^{-1}, then ˜U_t ˜Σ_t ˜V_t^T is a t-rank approximation of the matrix ˜G.

for t, when t_p = t + 10, are t ≳ m/s, with the suitable value of s depending on the kernel, and t_p ≳ m/s where s ≲
20 for the inversion of gravity data using the GKB algorithm. This leads to the range of t used in the simulations to be discussed in Section 3. We use the choices s = 40, 25, 20, 8, 6, 4 and 3. This permits a viable comparison of cost and accuracy for the GKB and RSVD. Observe that, for the large scale cases considered here, we chose to test with smallest s = 3 rather than s = 2. Indeed, using s = 2 generates a large overhead of testing for a wide range of parameter choices, and suggests that we would need relatively large subspaces defined by t = m/2, offering limited gain in speed and computational cost.

2.5. Computational Costs. Of interest is the computational cost of (i) the practical implementations of the
GKB or RSVD algorithms for finding the parameter vector y^(k) when operations with matrix G are implemented using the 2DFFT, and (ii) the associated impact of the choices of t_p on the comparative costs of these algorithms with increasing m and n. In the estimates we focus on the dominant costs in terms of flops, recalling that the underlying cost of a dot product of two vectors of length m is assumed to be 2m. Further, the costs ignore any overheads of data movement and data access.

First, we address the evaluation of matrix products with ˜G or ˜G^T required at Steps 4 and 9 of Algorithm 1 and Steps 2, 4, 6 and 8 of Algorithm 2. Matrix operations with G, rather than ˜G, use the 2DFFT, as described in Appendix A for Gx, G^T y and y^T G, based on the discussion in Vogel [2002]. The cost of a single matrix-vector operation in each case is 4 n_x n_y n_z log(4 n_x n_y) = 4n log(4 n_r). This includes the operation of the 2DFFT on the reshaped components x_r ∈ R^{n_x n_y} and the inverse 2DFFT of the component-wise product of x̂_r with Ĝ^(r), for r = 1 : n_z, but ignores the lower cost of forming the component-wise products and summations over vectors of size n_r. Thus, multiplication with a matrix of size n × t_p has dominant cost

(9) 4 n t_p log(4 n_r),

in place of 2 m n t_p. In the IRLS algorithm we need to use operations with ˜G = W_d G W^{-1} rather than G. But this is handled immediately by using suitable component-wise multiplications with the diagonal matrices and vectors. Specifically,

(10) ˜G x = W_d (G (W^{-1} x)),

and the 2DFFT is applied for the evaluation of G w where w = W^{-1} x. Then, given z = G w, a second component-wise multiplication, W_d z, is applied to complete the process.
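As a concrete illustration of the embedding, the following NumPy sketch (illustrative only; the paper's implementation is in MATLAB, and the function name here is ours) multiplies the BTTB matrix of a single depth layer by a reshaped vector via a circulant extension of size 2n_x × 2n_y, which is the source of the 4 n_x n_y transform length in the cost estimate (9):

```python
import numpy as np

def bttb_matvec_fft(k, x):
    """Multiply a BTTB matrix by a vector using 2-D FFT circulant embedding.

    k : (2*nx-1, 2*ny-1) array of Toeplitz generators, with
        k[dx + nx - 1, dy + ny - 1] the matrix entry at lag (dx, dy),
        i.e. T[(i,j),(i',j')] = k[i-i'+nx-1, j-j'+ny-1].
    x : (nx, ny) array, the vector reshaped onto the data grid.
    Returns T x reshaped to (nx, ny).
    """
    nx, ny = x.shape
    # First column of the 2-D circulant of size 2nx x 2ny:
    # nonnegative lags occupy indices 0..n-1, negative lags wrap to n+1..2n-1.
    c = np.zeros((2 * nx, 2 * ny))
    c[:nx, :ny] = k[nx - 1:, ny - 1:]        # dx >= 0, dy >= 0
    c[:nx, ny + 1:] = k[nx - 1:, :ny - 1]    # dx >= 0, dy < 0
    c[nx + 1:, :ny] = k[:nx - 1, ny - 1:]    # dx < 0,  dy >= 0
    c[nx + 1:, ny + 1:] = k[:nx - 1, :ny - 1]  # dx < 0, dy < 0
    # Zero-pad x, multiply transforms, and keep the leading nx x ny block.
    xe = np.zeros((2 * nx, 2 * ny))
    xe[:nx, :ny] = x
    y = np.fft.ifft2(np.fft.fft2(c) * np.fft.fft2(xe)).real
    return y[:nx, :ny]
```

A full product G x is then the sum of such layer products over r = 1 : n_z, with the diagonal weightings of (10) applied component-wise before and after.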
Within the algorithms, matrix-matrix operations are also required but, clearly, the operations ˜G^(k) X, (˜G^(k))^T Z and Z^T ˜G^(k) are just loops over the relevant columns (or rows) of the matrices X and Z, with the appropriate weighting matrices applied before and after application of the 2DFFT. The details are provided in Appendix A.

Now, to determine the impact of the choices for t (and t_p) we estimate the dominant costs for finding the solution of (8) using the GKB and RSVD algorithms. This is the major cost of the IRLS algorithm. The assumptions for the dominant costs of standard algorithms, given in Table 1, are quoted from Golub and Van Loan [2013]. But note that the cost for eig depends significantly on problem size and symmetry. Here t can be quite large, when m is large, but the matrix is symmetric, hence we use the estimate 9t^3, [Golub and Van Loan, 2013, Algorithm 8.3.3]. To be complete we note that svds for the sparse bidiagonal matrix B is achieved at a cost which is at most quadratic in the variables. A comment on the cost of the qr operation is also required. Generally, in forming the QR factorization of a matrix we would maintain the information on the Householder reflectors that are used in the reduction of the matrix to upper triangular form, rather than accumulating the matrix Q. The cost is reduced significantly if Q is not accumulated. But, as we can see from Steps 2, 4, 6 and 8 of Algorithm 2, we need to evaluate products of Q with ˜G or its transpose. To take advantage of the 2DFFT we then need to first evaluate a product of Q with a diagonal scaling matrix, which amounts to accumulation of the matrix Q. Experiments, not reported here, show that it is more efficient to accumulate Q as given in Algorithm 2 than to evaluate the product of Q with a diagonal scaling matrix without prior accumulation. The cost for accumulating Q is 2t^2(m − t/3) for a matrix of size m × t, [Golub and Van Loan, 2013, page 255], yielding a total cost for the qr step of 4t^2(m − t/3). In the estimates we do not distinguish between t_p and t, noting t_p = floor(1.05 t) and t = m/s. We also ignore the distinction between m and n_r, where n_r > m for padded domains. Moreover, the cost of finding α^(k) and then evaluating y^(k) is of lower order than the dominant costs involved with finding the needed factorizations. Using LOT to indicate the lower order terms that are ignored, and assuming the calculation without the use of the 2DFFT,

Operation:  GX    | G^T Y | svds(B)     | MGS(C) | eig(A^T A) | [Q,∼] = qr(Z)
Cost:       2mnt  | 2mnt  | t^2(m + t)  | 2mt^2  | 9t^3       | 2t^2(m − t/3)

Table 1. Computational costs for standard operations. Matrix G ∈ R^{m×n}, X ∈ R^{n×t}, Y ∈ R^{m×t}, sparse bidiagonal B ∈ R^{(t+1)×t}, A^T A ∈ R^{t×t}, and Z ∈ R^{m×t}. The modified Gram-Schmidt for C ∈ R^{m×i} is repeated for i = 1 : t, yielding the given estimate. These costs use the basic unit that the inner product x^T x for x of length n requires 2n operations.

the most significant terms yield
(11) CostG_GKB = 4nmt + 2t^2(n + m) + LOT,
(12) CostG_RSVD = 8nmt + 4t^2(2n + m − t) + 2mt^2 + 9t^3 + LOT = 8nmt + 4t^2(2n + (3/2)m) + 5t^3 + LOT.

When using the 2DFFT, the first two entries 2mnt in Table 1 are replaced by 4nt log(4 n_r). Then, using m ≈ n_r, it is just the first term in each estimate that is replaced, leading to the costs with the 2DFFT as

(13) Cost_GKB = 8nt log(4m) + 2t^2(n + m) + LOT,
(14) Cost_RSVD = 16nt log(4m) + 4t^2(2n + (3/2)m) + 5t^3 + LOT.

Both pairs of equations suggest, just in terms of flop count, that Cost_RSVD > Cost_GKB. Thus, we would hope to use a smaller t for the RSVD than for the
GKB, in order to obtain a comparable cost. This expectation contradicts earlier experiments contrasting these algorithms for the inversion of gravity data, using the RSVD without power iteration, as discussed in Vatankhah et al. [2018]. Alternatively, it would be desired that the RSVD should converge in the IRLS far faster than the GKB. Further, theoretically, the gain of using the 2DFFT is that the major terms are 8t^2 n and 2t^2 n for the RSVD and GKB, respectively, as compared to 8nmt > 8t^2 n and 4nmt > 2t^2 n, noting t < m. Specifically, even though the costs eventually grow with order nt^2 when using the 2DFFT, this is still far slower than the growth with mnt that arises without taking advantage of the structure.

Now, as discussed in Xiang and Zou [2013], measuring the computational cost just in terms of the flop count can be misleading. It was noted by Xiang and Zou [2013] that a distinction between the GKB and
RSVD algorithms, where the latter is without the power iteration, is that the operations required in the GKB involve many BLAS2 (matrix-vector) operations, requiring repeated access to the matrix or its transpose, as compared to BLAS3 (matrix-matrix) operations for RSVD implementations. On the other hand, within the qr algorithm, the Householder operations also involve BLAS2 operations. Hence, when using Matlab, the major distinction should be between the use of functions that are builtin and compiled, and those that are not compiled. In particular, the functions qr and eig are builtin and hence optimized, but the other operations that are used in the two algorithms do not use any compiled code. Specifically, there is no compiled option for the MGS used in Steps 6 and 11 of Algorithm 1, while almost all operations in Algorithm 2 use builtin functions or BLAS3 operations for matrix products that do not involve the matrices with BTTB structure. Thus, in the evaluation of the two algorithms in the Matlab environment, we will consider computational costs directly, rather than just the estimates given by (13)-(14). On the other hand, the estimates of the flop counts should be relevant for higher-level programming environments, and are thus relevant more broadly. We also note that in all implementations none of the results quoted use multiple cores or GPUs.

3. Numerical Experiments
We now validate the fast and efficient methods for inversion of potential field data using the BTTB structure of the gravity and magnetic kernel matrices.

3.1. Implementation parameter choices.
The diagonal depth weighting matrix W_z uses β = 0.8 for the gravity kernel and β = 1.4 for the magnetic kernel. W_d is determined by the noise in the data, and the hard constraint matrix W_h is taken to be the identity. Moreover, we use m_apr = 0, indicating no imposition of prior information on the parameters. The regularization parameter α^(k) is found using the UPRE method for k > 1, with α^(1) given by

(15) α^(1) = (n/m) σ_1 / mean(σ_i).

Here the σ_i are the estimates of the ordered singular values for W_d G W^{-1} given by the use of the RSVD or GKB algorithm, and the mean value is taken only over σ_i >
0. This follows the practice implemented in Vatankhah et al. [2018]; Renaut et al. [2017] for studies using the RSVD and GKB, and which was based on the recommendation to use a large value for α^(1), [Farquharson and Oldenburg, 2004]. In order to contrast the performance and computational cost of the RSVD and GKB algorithms with increasing problem size m, different sizes t of the projected space for the solution are obtained using t = floor(m/s). Generally, the GKB is successful with larger values for s (smaller t) as compared to that needed for the RSVD algorithm. Hence, following recommendations for both algorithms, as discussed in Section 2.4, we use the range of s from 40 to 3, given by s = 40, 25, 20, 8, 6, 4 and 3, corresponding to increasing t, but also limited by t ≤ 5000.

For all simulations, the IRLS algorithm is iterated to convergence as determined by the χ^2 test for the predicted data,

(16) ‖W_d(G m^(k) − d_obs)‖_2^2 ≤ m + √(2m), or

(17) ‖W_d(G m^(k) − d_obs)‖_2^2 / (m + √(2m)) ≤ 1.

If this is not attained for k ≤ K_max, the iteration is terminated. Noisy data are generated for observed data d_obs = d_exact + η using

(18) η_i = (τ_1 |(d_exact)_i| + τ_2 ‖d_exact‖_∞) e_i,

where e is drawn from a Gaussian normal distribution with mean 0 and variance 1. The pairs (τ_1, τ_2) are chosen to provide a signal to noise ratio (SNR), as calculated by

(19) SNR = 20 log_10 (‖d_exact‖_2 / ‖d_obs − d_exact‖_2),

that is approximately constant across the increasing resolutions of the problem. Recorded for all simulations are (i) the values of the relative error RE^(k), as defined by

(20) RE = ‖m_exact − m^(k)‖_2 / ‖m_exact‖_2,

(ii) the number of iterations to convergence K, which is limited to 25 in all cases, (iii) the scaled χ^2 estimate given by (17) at the final iteration, and (iv) the time to convergence measured in seconds, or to iteration 25 when convergence is not achieved.

3.2. Synthetic data.
For the validation of the algorithms, we pick a volume structure with a number of boxes of different dimensions, and a six-layer dipping dike. The same structure is used for generation of both the gravity and the magnetic potential field data. For the gravity data the densities of all aspects of the structure are set to 1, with the homogeneous background set to 0. For the magnetic data, the dipping dike, one extended well and one very small well have susceptibilities 0.06; the three other structures are assigned a second susceptibility value.

Figure 2. The basic volume structure: (a) iso-surface of the volume structure; (b) cross-section of the volume structure.

The domain, of lateral extent 2000 in x, is discretized in x, y and z into the number of blocks indicated by the triples (s_x, s_y, n_z), with increasing resolution for increasing values of these triples. They are generated by taking (s_x, s_y, n_z) = (25ℓ, 15ℓ, 2ℓ) for ℓ ≥ 4, so that m = s_x s_y = 375ℓ^2 scales with ℓ^2, yielding a minimum problem size with m = 6000 and n = 48000. The grid sizes are given by dividing each coordinate extent by s_x, s_y and n_z, respectively, e.g. Δx = 2000/s_x. The problem sizes considered for each simulation are detailed in Table 2. For padding we compare the case with pad = 0% and pad = 5% padding across the x and y dimensions. These are rounded to the nearest integer yielding p_xL = p_xR = round(pad s_x), and n_x = s_x + 2 round(pad s_x). n_y is calculated in the same way, yielding n = (s_x + 2 round(pad s_x))(s_y + 2 round(pad s_y)) n_z. Certainly, the choice to use pad = 5% is quite large, but it is chosen to demonstrate that the solutions obtained using the 2DFFT are robust to boundary conditions, and thus not impacted by the restriction due to lack of padding or very small padding.

For these structures and resolutions, noisy data are generated as given in (18) to yield an SNR of approximately 24 across all scales as calculated using (19). This results in different choices of τ_1 and τ_2 for each problem size, dependent on the gravity or magnetic data case, denoted by (τ_g1, τ_g2) and (τ_m1, τ_m2), respectively. In all cases we use τ_g1 = τ_m1 = 0.02 and adjust τ_2. The choices of τ_g2 and τ_m2 for increasing problem sizes are detailed in Table 2. As an example we illustrate the true and noisy data for gravity and magnetic data, when ℓ = 12, in Figure 3.
ℓ   (s_x, s_y, n_z)   m      n        n_pad     τ_g2  τ_m2  SNR_g  SNR_m
4   (100, 60, 8)      6000   48000    58080     –     –     –      –
5   (125, 75, 10)     9375   93750    113710    –     –     –      –
6   (150, 90, 12)     13500  162000   199200    –     –     –      –
7   (175, 105, 14)    18375  257250   310730    –     –     –      –
8   (200, 120, 16)    24000  384000   464640    –     –     –      –
9   (225, 135, 18)    30375  546750   662450    –     –     –      –
10  (250, 150, 20)    37500  750000   916320    –     –     –      –
11  (275, 165, 22)    45375  998250   1206500   –     –     –      –
12  (300, 180, 24)    54000  1296000  1568160   –     –     –      –

Table 2. Dimensions of the volume used in the experiments with scaling of the small problem size (25, 15, 2) by scale factor ℓ in each dimension. m and n are the dimensions of the measurement vector and the volume domain, respectively, G ∈ R^{m×n}. Here m = s_x s_y = 375ℓ^2 and n = m n_z, where n_x = s_x and n_y = s_y without padding. We use n_pad = n_x n_y n_z to denote the volume dimension n with 5% padding, using n_x = s_x + 2 round(pad s_x) and n_y = s_y + 2 round(pad s_y) for padding obtained using a percentage, pad, on each side of the domain so that p_xL = p_xR = round(pad s_x), and similarly for s_y.

3.3. Numerical Results.
The validation and analysis of the algorithms for the inversion of the potential field data is presented in terms of (i) the cost per iteration of the algorithm (Section 3.3.1), (ii) the total cost to convergence of the algorithm (Section 3.3.2), and (iii) the quality of the obtained solutions (Section 3.3.3). Supporting quantitative data that summarize the illustrated results are presented as Tables in Appendix B.

3.3.1. Comparative cost of RSVD and GKB algorithms per IRLS iteration. We investigate the computational cost, as measured in seconds, for one iteration of the inversion algorithm using both the direct multiplications with matrix G, respectively G^T, and the circulant embedding, for the resolutions up to ℓ = 6 that are indicated in Table 2, using both the RSVD and GKB algorithms, and for both gravity and magnetic data. For fair comparison, all the timing results that are reported use Matlab release 2019b implemented on the same iMac; the matrix G is too large for effective memory usage when ℓ > 6.

Figure 3. The calculated true and noisy anomalies for the volume structure given in Figure 2(a): (a) true gravity anomaly; (b) noisy gravity anomaly; (c) true magnetic anomaly; (d) noisy magnetic anomaly. The units are mGal and nT for gravity and magnetic data, respectively. The anomalies used for the inversion using the padded domain are exactly the same as given here.

The details of the timing results for one step of the
IRLS algorithm are illustrated in Figures 4-7, with the specific values for the magnetic data case given in Table 4.

Figure 4 provides an overview of the computational cost with increasing projection size t, for a given m, when the algorithm is implemented using G directly, or using the 2DFFT. These costs exclude the cost of generating G. In these plots, we use open symbols for calculations using G and solid symbols when using the 2DFFT. The same symbols are used for each choice of t and ℓ. An initial observation, confirming expectation, is that the timings for equivalent problems and methods are almost independent of whether the potential field data are gravity or magnetic, comparing Figures 4(a)-4(b) with Figures 4(c)-4(d). The lack of entries for the triple [175, 105, 14] indicates that the matrix G is too large for the operations when ℓ = 7. With increasing ℓ (increasing values of the triples along the x-axis), it can also be seen that the algorithms using G directly are more expensive for problems at these resolutions.

Figure 4. Running time in seconds for one iteration of the inversion algorithm for the inversion of magnetic and gravity data ((a)-(b) magnetic, GKB and RSVD; (c)-(d) gravity, GKB and RSVD), without padding the volume domain. Problems are of increasing size, as indicated by the x-axis for triples [n_x, n_y, n_z], and increasing projection size t (y-axis using log scale) determined by fractions of m = s_x s_y. In Figures 4(a) and 4(c) the running time for the GKB algorithm using t_p = floor(1.05 t) (an oversampling percentage of 5%), for the magnetic and gravity problems respectively. In Figures 4(b) and 4(d) the equivalent running times using the RSVD algorithm with one power iteration. In these plots the solid symbols represent the timing for one iteration of the algorithm using the 2DFFT and the open symbols represent the timing for the same simulation using G directly. Matrix G for problem size ℓ = 7, which corresponds to the triple [175, 105, 14], is too large to be used directly.

In Figure 5 we plot the relative computational costs for one iteration of the IRLS algorithm using the matrix G as compared to the algorithm using the 2DFFT, as indicated by Cost_G/Cost_2DFFT, for the data presented in Figure 4.

Figure 5. Relative computational cost for one iteration of the IRLS algorithm using G directly as compared to the 2DFFT, as indicated by Cost_G/Cost_2DFFT, for the data presented in Figure 4, for the magnetic and gravity problems, Figures 5(a)-5(b). Here, values for the relative cost that are less than 1, below the horizontal line at y = 1, indicate that it is more efficient to use G directly. Values that are greater than 1 indicate that it is more efficient to use the 2DFFT. Open symbols indicate the GKB algorithm and solid symbols the RSVD algorithm. In each case the given plots for a fixed ℓ are for increasing projection size t as given by m/s for the selections of t as used in Figure 4.
GKB algorithm. Here, the values for the relative cost that are less than 1, below the horizontalgreen line at y = 1, indicate that for the specific algorithm it is more efficient to use G directly. Values that are greater than 1 indicate that it is more efficient to use the for the given algorithm and problem size. It is apparent that it is not beneficial to use the for the smaller scale implementation of the RSVD algorithm, when (cid:96) = 4 or 5. But thesituation is completely reversed using the
GKB algorithm for all choices of (cid:96) and the
RSVD algorithm for (cid:96) ≥
6. Thus, the relative gain in reduced computational cost, by using the depends on the algorithm used within the
IRLS inversion algorithm. The decrease inefficiency for a given size problem, fixed (cid:96) but increasing size t (in the x − axis), is explainedby the theoretical discussion relating to equations (13)-(14). As t increases the impact of theefficient matrix multiplication using the is reduced. Again the gravity and magneticdata results are comparable.Figure 4 provides no information on the relative costs of the GKB and
RSVD algorithms withincreasing (cid:96) , independent of the use of the . Figure 6 shows the relative computationalcosts,
Cost
GKB / Cost
RSVD . Note that Figure 6(a) also includes results for larger problems.These plots demonstrate that the relative costs for a single iteration are not constant acrossall t with the GKB generally cheaper for smaller t , and the RSVD cheaper for larger t . Theseresults confirm the analysis of the computational cost in terms of flops provided in (13)-(14) /40 m/25 m/20 m/8 m/6 m/4 m/30.511.522.5 (a) magnetic data: Cost
GKB / Cost
RSVD . m/40 m/25 m/20 m/8 m/6 m/4 m/30.511.522.5 (b) gravity data: Cost
GKB / Cost
RSVD . Figure 6.
The relative computational cost for one iteration of the IRLS algorithm for inversion using the GKB as compared to the RSVD algorithm (Cost_GKB/Cost_RSVD), for given ℓ and projected size t. In each case the given plots for a fixed ℓ are for increasing projection size t as given by m/s as in Figure 5. The horizontal line at y = 1 represents the data for which the costs are the same, independent of whether using the RSVD or GKB algorithms. The GKB is more efficient when t is maintained small, s = 40, 25 and 20. The gain in using the GKB decreases, however, as ℓ increases. For small ℓ and t, the results confirm the computational cost estimates in (13)-(14), but for larger projection sizes t, the RSVD is more efficient. In Figure 6(a) the relative costs are also included for ℓ = 8 and ℓ = 9, where t ≤ 5000.

The relative computational costs increase from roughly 0.5 to 2.5, increasing with both ℓ and t. Still, this improved relative performance of the RSVD with increasing ℓ and t appears to violate the flop count analysis in (13)-(14). As discussed in Section 2.5, this is a feature of the implementation. While the RSVD is implemented using the Matlab builtin function qr, which uses compiled code for faster implementation, the GKB has no compiled implementation of the MGS reorthogonalization of the basis matrices A_{t_p} and H_{t_p}. Once again results are comparable for inversion of both gravity and magnetic data sets.

Figure 7 summarizes magnetic data timing results from Table 4 for domains which are padded with 5% padding in the x and y directions. Data illustrated in Figures 7(a)-7(b) are equivalent to the results presented in Figures 4(a)-4(b), but with padded volume domains. Again these results show the open symbols more spread out vertically, for increasing ℓ, confirming that the algorithms using G directly are more expensive for problems at these resolutions, with greater impact when using the GKB algorithm for small ℓ. This is further confirmed in Figure 7(c), equivalent to Figure 5(a), showing that the computational cost of performing one step of the
IRLS algorithm using matrix G directly is always greater than that using the 2DFFT. This is more emphasized for the GKB algorithm. The relative costs shown in Figure 7(d), equivalent to Figure 6(a), again show that the GKB algorithm is cheaper for small t when ℓ is small. But as the problem size increases and the projected problem size also increases, it is more efficient to use the RSVD algorithm, consistent with the observations for the unpadded domains.

Figure 7. In Figures 7(a)-7(b) the running time in seconds for one iteration of the inversion algorithm for the inversion of magnetic data, for the same problems as in Figures 4(a) and 4(b) but with padding, pad = 5%, added to the volume domain. Problems are of increasing size, as indicated by the x-axis for triples [n_x, n_y, n_z], and increasing projection size t (y-axis using log scale) determined by fractions of m = s_x s_y. In these plots the solid symbols represent the timing for one iteration of the algorithm using the 2DFFT and the open symbols represent the timing for the same simulation without using the 2DFFT for the kernel operations. In Figure 7(c) the relative costs for these results, as also provided in Figure 5(a) for the case without padding, and in Figure 7(d) the relative costs of the two algorithms with the 2DFFT, as in Figure 6(a) without padding.

3.3.2.
Comparative cost of RSVD and GKB algorithms to convergence. The computational cost of the IRLS algorithm for solving the inversion problem to convergence depends on the choice of t, the choice of the GKB or RSVD algorithm, and whether solving the magnetic or the gravity problem.

Figure 8.
Computational cost to convergence of the IRLS algorithm for inversion using the GKB as compared to the RSVD algorithm, Cost_GKB/Cost_RSVD, for the magnetic and gravity problems respectively, in Figures 8(a)-8(b).

In Table 5 we report the timing results for the inversion of gravity and magnetic data for problems of increasing size ℓ and projected spaces of sizes t_p. The relative total computational costs to convergence, Cost_GKB/Cost_RSVD (the last two columns in Table 5), are illustrated via Figures 8(a)-8(b), for the magnetic and gravity results, respectively. There is a distinct difference between the two problems. The results in Figure 8(a) for the magnetic problem demonstrate a strong preference for the use of the
GKB algorithm, except for the largest choices of t. In contrast, the RSVD algorithm is always most efficient for the solution of the gravity problem, which is consistent with the conclusion presented in Vatankhah et al. [2018] for the RSVD without power iteration. Moreover, the data presented in Table 7 for the gravity problem indicate that the RSVD algorithm generally converges more quickly and yields a smaller relative error. Furthermore, if based entirely on the calculated RE, the results suggest that good results can be achieved for relatively small t as compared to m; the relative errors obtained using the GKB are generally larger for comparable choices of t.

For the magnetic data, the results in Table 6 demonstrate that the RSVD algorithm generally requires more iterations than the GKB algorithm, and that the obtained relative errors are then comparable, or slightly larger. This is then reflected in Figure 8(a), which shows that the GKB algorithm is most efficient. Referring back to Table 6, it is the case that the RSVD algorithm often reaches the maximum number of iterations, K = 25, without convergence, when the GKB has converged in less than half the number of iterations, when t is small relative to m, t = floor(m/s) with s = 40, 25 and 20. This verifies that the RSVD needs to take a larger projected subspace t in order to capture the required dominant spectral space when solving the magnetic problem, as compared to the gravity problem, and confirms the conclusions presented in Vatankhah et al. [2020a]. On the other hand, the use of the GKB as compared to the RSVD was not discussed in Vatankhah et al. [2020a]. Our results now lead to a new conclusion concerning these two algorithms for solving the magnetic data inversion problem. In particular, the results suggest that the GKB algorithm be adopted for inversion of magnetic data. Further, the results suggest that the relative error obtained using the GKB generally decreases with increasing t, and that it is necessary to use subspaces with t at least as large as floor(m/s) for the smaller values of s used here.

3.3.3. Illustrating Solutions with Increasing ℓ and t. We first compare a set of solutions for which the timing results were compared in Section 3.3.2. Figure 9 illustrates the predicted anomalies and reconstructed volumes for gravity data inverted by both algorithms, with resolutions given by ℓ = 4 and ℓ = 7 with t = floor(m/
8) and t = floor ( m/ (cid:96) = 4 it can be seen that the predicted anomalies are generally less accuratethan with (cid:96) = 7. Moreover, there is little deterioration in the anomaly predictions whenusing t = floor ( m/
8) instead of t = floor(m/4), although the anomalies obtained using GKB show more residual noise. On the other hand, it is more apparent from consideration of the reconstructed volumes shown in Figures 9(i)-9(p) that the RSVD algorithm does yield better results in all cases, and specifically the high resolution ℓ = 7 results are very good, even using t = floor(m/8). Overall, for ℓ = 7 it is sufficient to use t = floor(m/8) and the RSVD algorithm, but a reasonable result may even be obtained using the same algorithm with ℓ = 4, requiring less than 5 minutes of compute time.

The results for the inversion of the magnetic data are illustrated in Figure 10 for the same cases as for the inversion of gravity data illustrated in Figure 9. Now, in contrast to the gravity results, the predicted anomalies are in good agreement with the true data for the results obtained using the
GKB algorithm, with apparently greater accuracy for the lower resolution solutions, ℓ = 4, for both choices of t. On the other hand, the predicted magnetic anomalies obtained using RSVD are less satisfactory for small ℓ and t, but acceptable for large ℓ. Then, considering the reconstructed volumes, there is a lack of resolution for ℓ = 4, evidenced by the loss of the small well near the surface, which is seen when ℓ = 7 for both cases of t when using the GKB. The other structures in the domain are also resolved better with ℓ = 7, but there is little gain from using t = floor(m/4) over t = floor(m/8). For the RSVD algorithm, while it is clear that the result with ℓ = 4 and small t is unacceptable, the anomaly and reconstructed volume with ℓ = 4 and t = floor(m/4) are acceptable and achieved in reasonable time, approximately 11 minutes, far faster than using ℓ = 7 with
GKB. Thus, this may contradict the conclusion that one should use the GKB algorithm within the magnetic data inversion algorithm. If there is a large amount of data and a high resolution volume is required, then it is important to use GKB in order to limit the computational cost. Otherwise, it can be sufficient to use the RSVD, provided t ≥ floor(m/8), for a coarser resolution solution obtained at reasonable computational cost.

We now investigate the quality of solutions obtained for magnetic data using higher resolution data sets, and both
GKB and
RSVD algorithms, to assess which algorithm is best suited for such larger problems. In these cases we pick t = floor(m/20). The results for ℓ = 11 with t = 2268 and ℓ = 12 with t = 2700, corresponding to m = 45375 and n = 998250, and m = 54000 and n = 1296000, respectively, are illustrated in Figure 11. For these large scale problems, the memory requirement becomes too large for implementation on the environment with just 32GB RAM. Thus, these timings are for an implementation using a
[Figure 9 panels (a)-(h): predicted gravity anomalies on the x- and y-direction axes (m); (a) GKB: ℓ = 4, t = 750, K = 9; (b) ℓ = 4, t = 1500, K = 7; (c) ℓ = 7, t = 2296, K = 11; (d) ℓ = 7, t = 4593, K = 8; (e)-(h) RSVD for the same choices, with K = 6, 6, 7, 7. Panels (i)-(p): the corresponding reconstructed volumes for GKB and RSVD.]

Figure 9.
For gravity data, the predicted anomalies obtained using GKB in Figures 9(a)-9(d) and RSVD in Figures 9(e)-9(h), with the corresponding reconstructed volumes in Figures 9(i)-9(l) and Figures 9(m)-9(p), respectively. In each case the first row for GKB indicates the choices of ℓ and t in each column. The choices t = 750 and t = 1500 for ℓ = 4, and t = 2296 and t = 4593 for ℓ = 7, correspond to t = floor(m/8) and t = floor(m/4), with m = 6000 for ℓ = 4 and m = 18375 for ℓ = 7. Given are the pairs (K, Cost s) (the number of iterations to convergence and the computational cost in seconds) in the captions of the anomalies, and (RE, χ²/(m + √m)) in the captions of the reconstructions. Results for all cases are summarized in Table 7 with timings in Table 5. The units for the anomalies are mGal.

desktop computer with the Intel(R) Xeon(R) Gold 6138 CPU 2.00GHz chip and with Matlab release 2019b. Comparing the results between m = 45375 and m = 54000 (ℓ = 11 and
[Figure 10 panels (a)-(h): predicted magnetic anomalies on the x- and y-direction axes (m), with color scales roughly -300 to 800 nT for ℓ = 4 and -400 to 800 nT for ℓ = 7; (a) GKB: ℓ = 4, t = 750, K = 5; (b) ℓ = 4, t = 1500, K = 5; (c) ℓ = 7, t = 2296, K = 7; (d) ℓ = 7, t = 4593, K = 7; (e)-(h) RSVD for the same choices, with K = 25, 9, 14, 13. Panels (i)-(p): the corresponding reconstructed volumes.]

Figure 10.
For magnetic data, the predicted anomalies obtained using GKB in Figures 10(a)-10(d) and RSVD in Figures 10(e)-10(h), with the corresponding reconstructed volumes in Figures 10(i)-10(l) and Figures 10(m)-10(p), respectively. In each case the first row for GKB indicates the choices of ℓ and t in each column. The choices t = 750 and t = 1500 for ℓ = 4, and t = 2296 and t = 4593 for ℓ = 7, correspond to t = floor(m/8) and t = floor(m/4), with m = 6000 for ℓ = 4 and m = 18375 for ℓ = 7. Given are the pairs (K, Cost s) (the number of iterations to convergence and the computational cost in seconds) in the captions of the anomalies, and (RE, χ²/(m + √m)) in the captions of the reconstructions. Results for all cases are summarized in Table 6 with timings in Table 5. The units for the anomalies are nT.

ℓ = 12), it can be seen that the predicted anomalies are always better for the larger problem, and in particular the result shown in Figure 11(e) shows greater artifacts when using RSVD. The obtained reconstruction for this case, shown in Figure 11(g), is, however, acceptable. Overall, trading off between computational cost and solution quality, there seems little gain in using ℓ = 12, and the results with ℓ = 11 obtained with the GKB algorithm in 227 minutes (nearly 4 hours) are suitable. These results also show that it is sufficient to use a relatively smaller projected space, t = floor(m/
20) when m is larger. Indeed, notice that even in these cases the largest matrix required by both algorithms is of size n × t_p and requires on the order of 17GB or more for ℓ = 11 and ℓ = 12, respectively. Effectively, it is this large memory requirement that limits the given implementation using either GKB or RSVD for larger size problems.

[Figure 11 panels: predicted magnetic anomalies and reconstructed volumes; panel (a) GKB, ℓ = 11, K = 9.]

Figure 11.
The magnetic anomalies and reconstructed volumes using the GKB and RSVD algorithms in Figures 11(a)-11(d) and 11(e)-11(h), respectively. The first row indicates the choice of ℓ = 11, for which t = 2268 = floor(m/20) with m = 45375 and n = 998250 and an oversampled projected problem of size 2381, or ℓ = 12, with t = 2700 = floor(m/20) with m = 54000 and n = 1296000, and an oversampled projected problem of size 2835. Given are the pairs (K, Cost s) (the number of iterations to convergence and the computational cost in seconds) in the captions of the anomalies, and (RE, χ²/(m + √m)) in the captions of the reconstructions. The units for the anomalies are nT.

Numerical experiments for the inversion of gravity data, similar to the testing for the magnetic data, demonstrate that indeed the RSVD algorithm with power iteration is to be preferred for the inversion of gravity data, yielding acceptable solutions at lower cost than when using the
GKB algorithm. Representative results are detailed in Figure 12 for the same parameter settings as given in Figure 11 for the magnetic problem.

3.4.
Real Data.
For validation of the simulated results on a practical data set, we apply the GKB algorithm for the inversion of a magnetic field anomaly that was collected over a portion of the Wuskwatim Lake region in Manitoba, Canada. This data set was discussed in Pilkington [2009] and also used in Vatankhah et al. [2020a] for inversion using the
RSVD

[Figure 12 panels: predicted gravity anomalies and reconstructed volumes; panel (a) GKB, ℓ = 11, K = 21.]

Figure 12.
The gravity anomalies and reconstructed volumes using the GKB and RSVD algorithms in Figures 12(a)-12(d) and 12(e)-12(h), respectively. The first row indicates the choice of ℓ = 11, for which t = 2268 = floor(m/20) with m = 45375 and n = 998250 and an oversampled projected problem of size 2381, or ℓ = 12, with t = 2700 = floor(m/20) with m = 54000 and n = 1296000, and an oversampled projected problem of size 2835. Given are the pairs (K, Cost s) (the number of iterations to convergence and the computational cost in seconds) in the captions of the anomalies, and (RE, χ²/(m + √m)) in the captions of the reconstructions. The units for the anomalies are mGal.

algorithm with a single power iteration. Further details of the geological relevance of this data set are given in these references. Moreover, its use makes for a direct comparison with these existing results. Here we use a grid of 62 ×
62 = 3844 measurements at 100m intervals in the East-North directions, with padding of 5 cells yielding a horizontal cross section of size 72 × 72. The depth dimension is discretized with ∆z ranging from 100m, yielding regular cubes, down to ∆z = 8m for rectangular prisms with a smaller edge length in the depth dimension, for a total depth of 2000m, providing increasing values of n from 103680 to 1238976 as detailed in Table 3. The given magnetic anomaly is illustrated in Figure 13(a).

In each inversion the GKB algorithm is run with t = 480, corresponding to t = floor(m/8) for m = 3844 and an oversampled projected space of size 504, and a noise distribution based on (18) is employed using τ = 0.02. In all cases the converged solutions satisfy χ²/(m + √m) < 1, as reported in Table 3. We see in Figure 14(a) that the UPRE function has a well-defined minimum at the final iteration for all resolutions, and in Figure 14(b) that the convergence of the scaled χ² value is largely independent of n. The final regularization parameter α(K) decreases with increasing n, while the initial α found using (15) increases with n, as reported in Table 3.

[Figure 13 panels: (a) the given magnetic anomaly; (b) the predicted anomaly for n = 103680; (c) the predicted anomaly for n = 1238976; axes Easting (m) and Northing (m), units nT.]

Figure 13.
The given magnetic anomaly in Figure 13(a) and the obtained predicted anomalies for the inversion using the parameters for the first and last lines of data in Table 3 in Figures 13(b)-13(c), respectively.

Results of the inversion for the coarsest and finest resolutions are presented in Figures 13, 15 and 16, showing anomalies, reconstructed volumes, and depth slices through the volume domain, respectively. First, from Figures 13(b)-13(c), as compared to Figure 16b in Vatankhah et al. [2020a], we see that the predicted anomalies provide better agreement with the measured anomaly, with respect to both structure and the given values. Moreover, more structure is seen in the volumes presented in Figures 15(a)-15(b) as compared to Figure 19 of Vatankhah et al. [2020a], and the increased resolution provides greater detail in Figure 15(b) as compared to Figure 15(a). Here the volumes are presented for depths from 0 to 1000m only, but it is seen in Figures 16(e) and 16(j), which are the slices at depth 1100m, that there is little structure evident at greater depth. Comparing the depth slices for increasing depth, we see that the use of the higher resolution leads to more structure at increased depth. Moreover, the results are consistent with those presented in Vatankhah et al. [2020a] for the use of the
RSVD for a projected size t = 1100, as compared to t = 480 used here. It should also be noted that the RSVD algorithm with one power iteration does not converge within 50 steps under the same configurations for m, n and t.

4. Conclusions and Future Work
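Throughout, RSVD denotes the standard randomized SVD prototype with oversampling and power iteration. For reference, it can be sketched in a few lines; the sketch below is an illustrative NumPy translation following Halko et al. [2011], not the authors' MATLAB implementation, in which G is never formed explicitly and all products with G and G^T are instead realized via the 2DFFT:

```python
import numpy as np

def rsvd(G, t, p=0, n_power=1, rng=None):
    """Rank-t randomized SVD with oversampling p and n_power power
    iterations, following the prototype of Halko et al. [2011].
    Returns U, s, Vt with G approximately U @ diag(s) @ Vt."""
    rng = np.random.default_rng(rng)
    m, n = G.shape
    # Sample the range of G with a Gaussian test matrix of t + p columns.
    Y = G @ rng.standard_normal((n, t + p))
    # Power iterations sharpen the captured dominant spectral subspace;
    # re-orthonormalization guards against loss of accuracy.
    for _ in range(n_power):
        Q, _ = np.linalg.qr(Y)
        Q, _ = np.linalg.qr(G.T @ Q)
        Y = G @ Q
    Q, _ = np.linalg.qr(Y)
    # Project onto the captured subspace and take a small dense SVD.
    B = Q.T @ G                            # (t + p) x n
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :t], s[:t], Vt[:t, :]
```

With n_power = 1 this mirrors the single power iteration configuration; in the setting of this paper the two products per step, with G and with G^T, would be the fast operations of Appendix A rather than dense multiplications.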
Two algorithms,
GKB and
RSVD, for the focused inversion of potential field data with all operations with the sensitivity matrix G implemented using a fast 2DFFT-based algorithm, have been developed and validated for the inversion of both gravity and magnetic data sets. The results show first that it is distinctly more efficient to use the 2DFFT for operations with matrix G rather than direct multiplication. This is independent of algorithm and data set, for all large scale implementations considered. Moreover, the implementation using the 2DFFT makes it feasible to solve these large scale problems on a standard desktop computer

n | n_z | ∆z | K | α(1) | α(K) | χ²/(m + √m) | Cost (s)
103680 | 20 | 100 | 17 | 4.e+05 | 8558 | 0.87 | 334
207360 | 40 | 50 | 18 | 5.e+06 | 5887 | 0.90 | 754
305856 | 59 | 33 | 19 | 2.e+07 | 4930 | 0.70 | 1126
414720 | 80 | 25 | 18 | 6.e+07 | 4116 | 0.95 | 1513
518400 | 100 | 20 | 18 | 1.e+08 | 3701 | 0.94 | 2018
616896 | 119 | 16 | 18 | 2.e+08 | 3386 | 0.90 | 2095
829440 | 160 | 12 | 18 | 6.e+08 | 2933 | 0.95 | 3091
1036800 | 200 | 10 | 18 | 1.e+09 | 2627 | 0.95 | 3690
1238976 | 239 | 8 | 18 | 2.e+09 | 2396 | 0.96 | 4389

Table 3. Inversion of magnetic data as illustrated in Figure 13 for m = 3844 on a grid of 62 × 62 stations, with ∆x = ∆y = 100m and padding of 5 cells in both x- and y-directions, yielding blocks of size n_r = 5184. The inversion uses the GKB algorithm with t = 480 (floor(m/8)) and t_p = 504. The noise in the algorithm uses (18) as given for the simulations with τ = 0.02.

[Figure 14 panels: (a) the regularization function U(α) at the final iteration K; (b) the scaled χ² estimate versus iteration k.]

Figure 14. The plot of the regularization function U(α) for the UPRE algorithm, at the final iteration K, for increasing values of n as indicated in Table 3, in Figure 14(a), and the progression of the scaled χ² estimate as a function of iteration k and for increasing n in Figure 14(b).

without any code modifications to handle multiple cores or GPUs, which is not possible due to memory constraints when m and n increase. While both algorithms are improved with this implementation, the results show that the impact on the GKB efficiency is greater than that on the
RSVD efficiency. A theoretical analysis of the computational cost of each algorithm for a single iterative step demonstrates that the GKB should be faster, but this is

[Figure 15 panels: (a) iso-surface using n = 103680; (b) iso-surface using n = 1238976.]

Figure 15. The reconstructed volumes showing parameters κ > 0.05 and depths from 0 to 1000m, corresponding to the predicted anomalies in Figure 13.

[Figure 16 panels: slices at depths 300m, 500m, 700m, 900m and 1100m, axes Easting (m) and Northing (m), units SI; (a)-(e) for n = 103680 and (f)-(j) for n = 1238976.]

Figure 16. Slices through the volumes illustrated in Figure 15 for depths 300, 500, 700, 900 and 1100m, for n = 103680 in Figures 16(a)-16(e) and for n = 1238976 in Figures 16(f)-16(j).

not always realized in practice as the problem size increases, with a commensurate increase in the size of the projected space. Then the efficiency of GKB deteriorates, and the advantage of using builtin routines from
Matlab for the
RSVD algorithm is crucial.

When considering the computational cost to convergence for both algorithms, which then also includes the cost of requiring projected spaces that are of reasonable size relative to m, the results confirm earlier published results that it is more efficient to use RSVD, with t ≥ floor(m/8), for inversion of gravity data. Moreover, generally larger projected spaces are required when using
RSVD for the inversion of magnetic data. On the other hand, prior published work did not contrast GKB with RSVD for the inversion of magnetic data. Here, our results contribute a new conclusion to the literature, namely that
GKB is more efficient for these large-scale problems and can also use t ≈ floor(m/8) rather than the larger spaces required for use with
RSVD. Critically, which algorithm to use is determined by the spectral space for the underlying problem-specific sensitivity matrix G, as discussed in Vatankhah et al. [2020a]. Moreover, we can relax the restriction t ≈ floor(m/8) to t ≈ m/20 for large problems, for the inversion of magnetic data.

It should be noted that equivalent conclusions can be made when the implementations use padding, only that generally fewer iterations to convergence are required. Furthermore, all the implementations use the automatic determination of the regularization parameter using the
UPRE function. The suitability of the
UPRE function was demonstrated in earlier references and is thus not reproduced here, but additional results, not reported here, demonstrated that the earlier findings still hold for these large scale problems and algorithms. Overall, it has been shown that the use of the
BTTB structure inherent in the sensitivity matrices leads to fast algorithms that make it feasible to solve large-scale focusing inversion problems using standard
GKB and
RSVD algorithms on desktop environments, without modifications to handle either multiple cores or GPUs. It is clear that yet greater efficiency could be achieved with such modifications, which may then be architecture specific and thus less flexible. Moreover, these results suggest that the development of alternative algorithms that avoid the need to store matrices of size n × t is desirable; this is a topic for future study.

Acknowledgments
The authors would like to thank Dr. Mark Pilkington for providing data from the Wuskwatim Lake area. Rosemary Renaut acknowledges the support of NSF grant DMS 1913136: "Approximate Singular Value Expansions and Solutions of Ill-Posed Problems".
References
Richard J. Blakely.
Potential Theory in Gravity and Magnetic Applications. Cambridge University Press, 1995. doi: 10.1017/CBO9780511549816.

Olivier Boulanger and Michel Chouteau. Constraints in 3D gravity inversion. Geophysical Prospecting, 49(2):265–280, 2001. doi: 10.1046/j.1365-2478.2001.00254.x.

Christian Eske Bruun and Trine Brandt Nielsen. Algorithms and software for large-scale geophysical reconstructions. Master's thesis, Technical University of Denmark, DTU, DK-2800 Kgs. Lyngby, Denmark, 2007.

Raymond Hon-Fu Chan and Xiao-Qing Jin. An Introduction to Iterative Toeplitz Solvers. Society for Industrial and Applied Mathematics, 2007. doi: 10.1137/1.9780898718850.

Longwei Chen and Lanbo Liu. Fast and accurate forward modelling of gravity field using prismatic grids. Geophysical Journal International, 216(2):1062–1071, 2018. doi: 10.1093/gji/ggy480.

Leif H. Cox, Glenn A. Wilson, and Michael S. Zhdanov. 3D inversion of airborne electromagnetic data using a moving footprint. Exploration Geophysics, 41(4):250–259, 2010. doi: 10.1071/EG10003.

Colin G. Farquharson. Constructing piecewise-constant models in multidimensional minimum-structure inversions. Geophysics, 73(1):K1–K9, 2008. doi: 10.1190/1.2816650.

Colin G. Farquharson and Douglas W. Oldenburg. A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems. Geophysical Journal International, 156(3):411–425, 2004. doi: 10.1111/j.1365-246X.2004.02190.x.

G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press, 2013. ISBN 9781421407944.

I. B. Haáz. Relations between the potential of the attraction of the mass contained in a finite rectangular prism and its first and second derivatives. Geophysical Transactions II, 7:57–66, 1953.

N. Halko, P. G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions.
SIAM Review, 53(2):217–288, 2011. doi: 10.1137/090771806.

P. C. Hansen. Discrete Inverse Problems. Society for Industrial and Applied Mathematics, Philadelphia, 2010. doi: 10.1137/1.9780898718836.

Jarom D. Hogue, Rosemary A. Renaut, and Saeed Vatankhah. A tutorial and open source software for the efficient evaluation of gravity and magnetic kernels. https://arxiv.org/abs/1912.06976, 2019.

Peter G. Lelièvre and Douglas W. Oldenburg. Magnetic forward modelling and inversion for high susceptibility. Geophysical Journal International, 166(1):76–90, 2006. doi: 10.1111/j.1365-246X.2006.02964.x.

Kun Li, Long-Wei Chen, Qing-Rui Chen, Shi-Kun Dai, Qian-Jiang Zhang, Dong-Dong Zhao, and Jia-Xuan Ling. Fast 3D forward modeling of the magnetic field and gradient tensor on an undulated surface. Applied Geophysics, 15(3):500–512, 2018. doi: 10.1007/s11770-018-0690-9.

Xiong Li and Michel Chouteau. Three-dimensional gravity modeling in all space. Surveys in Geophysics, 19(4):339–368, 1998. doi: 10.1023/A:1006554408567.

Yaoguo Li and Douglas W. Oldenburg. 3-D inversion of magnetic data. Geophysics, 61(2):394–408, 1996. doi: 10.1190/1.1443968.

Yaoguo Li and Douglas W. Oldenburg. 3-D inversion of gravity data. Geophysics, 63(1):109–119, 1998. doi: 10.1190/1.1444302.

Yaoguo Li and Douglas W. Oldenburg. Fast inversion of large-scale magnetic data using wavelet transforms and a logarithmic barrier method. Geophysical Journal International, 152(2):251–265, 2003. doi: 10.1046/j.1365-246X.2003.01766.x.

N. Luiken and T. van Leeuwen. Comparing RSVD and Krylov methods for linear inverse problems. Computers and Geosciences, 137:104427, 2020. doi: 10.1016/j.cageo.2020.104427.

M. N. Nabighian, M. E. Ander, V. J. S. Grauch, R. O. Hansen, T. R. LaFehr, Y. Li, W. C. Pearson, J. W. Peirce, J. D. Phillips, and M. E. Ruder. Historical development of the gravity method in exploration. Geophysics, 70(6):63ND–89ND, 2005. doi: 10.1190/1.2133785.

Christopher C. Paige and Michael A. Saunders. LSQR: an algorithm for sparse linear equations and sparse least squares.
ACM Trans. Math. Software, 8(1):43–71, 1982. doi: 10.1145/355984.355989.

Mark Pilkington. 3-D magnetic imaging using conjugate gradients. Geophysics, 62(4):1132–1142, 1997. doi: 10.1190/1.1444214.

Mark Pilkington. 3D magnetic data-space inversion with sparseness constraints. Geophysics, 74(1):L7–L15, 2009. doi: 10.1190/1.3026538.

Oleg Portniaguine and Michael S. Zhdanov. Focusing geophysical inversion images. Geophysics, 64(3):874–887, 1999. doi: 10.1190/1.1444596.

Oleg Portniaguine and Michael S. Zhdanov. 3D magnetic inversion with data compression and image focusing. Geophysics, 67(5):1532–1541, 2002. doi: 10.1190/1.1512749.

D. Bhaskara Rao and N. Ramesh Babu. A rapid method for three-dimensional modeling of magnetic anomalies. Geophysics, 56(11):1729–1737, 1991.

R. A. Renaut, S. Vatankhah, and V. E. Ardestani. Hybrid and iteratively reweighted regularization by unbiased predictive risk and weighted GCV for projected systems. SIAM Journal on Scientific Computing, 39:B221–B243, 2017.

J. B. C. Silva and V. C. F. Barbosa. Interactive gravity inversion. Geophysics, 71(1):J1–J9, 2006. doi: 10.1190/1.2168010.

L. Uieda and V. C. F. Barbosa. Robust 3D gravity gradient inversion by planting anomalous densities. Geophysics, 77(4):G55–G66, 2012. doi: 10.1190/geo2011-0388.1.

S. Vatankhah, V. E. Ardestani, S. S. Niri, R. A. Renaut, and H. Kabirzadeh. IGUG: A MATLAB package for 3D inversion of gravity data using graph theory. Computers and Geosciences, 128:19–29, 2019. doi: 10.1016/j.cageo.2019.03.008.

Saeed Vatankhah, Vahid E. Ardestani, and Rosemary A. Renaut. Automatic estimation of the regularization parameter in 2D focusing gravity inversion: application of the method to the Safo manganese mine in the northwest of Iran. Journal of Geophysics and Engineering, 11(4):045001, 2014.

Saeed Vatankhah, Vahid E. Ardestani, and Rosemary A. Renaut. Application of the χ² principle and unbiased predictive risk estimator for determining the regularization parameter in 3-D focusing gravity inversion. Geophysical Journal International, 200(1):265–277, 2015.

Saeed Vatankhah, Rosemary A. Renaut, and Vahid E. Ardestani. 3-D projected ℓ1 inversion of gravity data using truncated unbiased predictive risk estimator for regularization parameter estimation. Geophysical Journal International, 210(3):1872–1887, 2017. doi: 10.1093/gji/ggx274.

Saeed Vatankhah, Rosemary A. Renaut, and Vahid E. Ardestani. A fast algorithm for regularized focused 3-D inversion of gravity data using the randomized SVD. Geophysics, 2018.

Saeed Vatankhah, Shuang Liu, Rosemary A. Renaut, Xiangyun Hu, and Jamaledin Baniamerian. Improving the use of the randomized singular value decomposition for the inversion of gravity and magnetic data. https://arxiv.org/abs/1906.11221v1, 2020a.

Saeed Vatankhah, Rosemary A. Renaut, and Shuang Liu. Research note: A unifying framework for the widely used stabilization of potential field inverse problems.
Geophysical Prospecting, 68:1416–1421, 2020b. doi: 10.1111/1365-2478.12926.

Curt Vogel. Computational Methods for Inverse Problems. Society for Industrial and Applied Mathematics, Philadelphia, 2002. doi: 10.1137/1.9780898717570.

Sergey Voronin, Dylan Mikesell, and Guust Nolet. Compression approaches for the regularized solutions of linear systems from large-scale inverse problems. GEM - International Journal on Geomathematics, 6(2):251–294, 2015. doi: 10.1007/s13137-015-0073-9.

Brendt Wohlberg and Paul Rodríguez. An iteratively reweighted norm algorithm for minimization of total variation functionals. Signal Processing Letters, IEEE, 14(12):948–951, 2007.

Hua Xiang and Jun Zou. Regularization with randomized SVD for large-scale discrete inverse problems. Inverse Problems, 29(8):085008, 2013.

Yile Zhang and Yau Shu Wong. BTTB-based numerical schemes for three-dimensional gravity field inversion. Geophysical Journal International, 203(1):243–256, 2015. doi: 10.1093/gji/ggv301.

Guangdong Zhao, Bo Chen, Longwei Chen, Jianxin Liu, and Zhengyong Ren. High-accuracy 3D Fourier forward modeling of gravity field based on the Gauss-FFT technique. Journal of Applied Geophysics, 150:294–303, 2018. doi: 10.1016/j.jappgeo.2018.01.002.

Appendix A. Multiplication using BTTB structure
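As an orientation for the derivation that follows, the circulant-embedding multiplication for a single BTTB block can be previewed compactly. The sketch below is an illustrative NumPy translation for the simplified square case s_x = n_x, s_y = n_y (the paper's implementation is in MATLAB, with the transform of the embedded generator precomputed):

```python
import numpy as np

def bttb_matvec_fft(kernel, x2d):
    """Multiply a BTTB matrix by a vector via circulant embedding and fft2.

    kernel has shape (2*nx - 1, 2*ny - 1); kernel[p + nx - 1, q + ny - 1]
    is the Toeplitz generator t_{p,q} for row/column offsets p, q.
    x2d is the length nx*ny vector reshaped to an (nx, ny) array.
    """
    nx, ny = x2d.shape
    ex, ey = 2 * nx - 1, 2 * ny - 1        # circulant embedding dimensions
    # W: the vector zero-padded into the embedding.
    W = np.zeros((ex, ey))
    W[:nx, :ny] = x2d
    # T: the generator rearranged so that offset (0, 0) sits at index (0, 0),
    # i.e. the first column of the circulant extension, reshaped.
    T = np.roll(np.roll(kernel, -(nx - 1), axis=0), -(ny - 1), axis=1)
    # Circular convolution via pointwise multiplication in the Fourier
    # domain; in practice fft2(T) would be precomputed once per depth layer.
    Y = np.fft.ifft2(np.fft.fft2(T) * np.fft.fft2(W)).real
    # The BTTB product is the leading nx-by-ny block of the result.
    return Y[:nx, :ny]
```

For the full sensitivity matrix this product is formed for each depth layer and the results summed, as in the block sum derived below; a dense matrix assembled from the same kernel reproduces the product and serves as a correctness check for small grids.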
We first consider the multiplication Gx, where x ∈ R^n, and use the column block structure of G given in (3) to see that Gx = Σ_{r=1}^{n_z} G^{(r)} x^{(r)}, where x is blocked consistently with G. Now each G^{(r)} has BTTB structure and can be embedded in a circulant matrix in order to evaluate G^{(r)} x^{(r)} using the 2DFFT, as described in Vogel [2002]. Specifically, the first column of the circulant extension is reshaped into T ∈ R^{(s_x+n_x−1)×(s_y+n_y−1)}, and x^{(r)} is reshaped and embedded into W ∈ R^{(s_x+n_x−1)×(s_y+n_y−1)}, see Hogue et al. [2019]. Now we assume that the 2DFFT of T is precomputed and that ·∗ represents element-wise multiplication. Then G^{(r)} x^{(r)} is extracted from ifft2(fft2(T) ·∗ fft2(W)), with cost

(21) Cost(G^{(r)} x^{(r)}) = Cost(fft2(W)) + Cost(·∗) + Cost(ifft2()).

Here, the 2DFFT of W is computed as (fft(fft(W))^T)^T, where the fft is applied to each column of the array independently. Using the cost of an fft as (n/2) log(n) for an n-length vector, Vogel [2002], this gives, using n_r ≈ m except when the padding is large,

Cost(fft2(W)) ≈ 2n_y (n_x log(2n_x)) + 2n_x (n_y log(2n_y)) = 2m (log(2n_x) + log(2n_y)) = 2m log(4m).

The element-wise complex multiplication in (21) is for a reshaped vector of size (s_x+n_x−1)(s_y+n_y−1) ≈ 4m, and each complex multiplication requires 6 flops. Furthermore, the inverse 2DFFT requires approximately the same number of operations as the forward 2DFFT. Hence Cost(G^{(r)} x^{(r)}) ≈ 4m log(4m) + 24m, and

(22) Cost(Gx) ≈ 4mn_z log(4m) + 24mn_z + m(n_z − 1) ≈ 4n log(4m) + 25n + LOT,

where the first terms are for the multiplications and the last for the summation over the n_z vectors of length m. It is then immediate that the dominant cost for obtaining GX, for X ∈ R^{n×t_p}, ignoring all but third order terms, is Cost(GX) ≈ 4 t_p n log(4m) + LOT.

The derivation of the computation, and the cost, for obtaining G^T y for y ∈ R^m follows similarly, noting that G^T y = [G^{(1)}, G^{(2)}, ..., G^{(n_z)}]^T y requires the computation of (G^{(r)})^T y for each r, and that no summation is required as in (22). Hence Cost(G^T y) ≈ 4n log(4m) and Cost(G^T Y) ≈ 4 t_p n log(4m). Furthermore, we note that X^T G^T = (GX)^T and Y^T G = (G^T Y)^T. Thus, the computations and computational costs are immediately obtained from those of GX and G^T Y, respectively.

Appendix B. Supporting Numerical Results of Simulations
Supporting results illustrated as figures in Sections 3.3.1-3.3.3 are reported in a set of tables, with captions describing the details. Table 4 reports the timing for one iteration of the inversion algorithm using both GKB and RSVD algorithms for magnetic data inversion, comparing timings using matrix G directly and using the 2DFFT. The time to convergence for the algorithms is given in Table 5 for both magnetic and gravity data sets for domains without padding. Tables 6-7 give the details of the number of iteration steps to convergence, K, and the resulting relative errors, RE, for the timing results of Table 5.

[Table 4: one-step timings (s) for magnetic data; columns ℓ, t, t_p, then GKB, RSVD, PGKB, PRSVD with the 2DFFT, and GKB, RSVD, PGKB, PRSVD with direct use of G.]
Table 4.
Timing results in seconds for one step of the inversion algorithmapplied to magnetic potential field data for the simulations described in Ta-ble 2 without padding and with padding (indicated by P ), and for problemsizes up to (cid:96) = 7. t p = floor (1 . t ) is the size of the oversampled projectedspace for GKB and
RSVD implementations. The columns under
Direct use of G do not use the . These results are illustrated in Figures 4-7, along withthe equivalent set of results for the inversion of gravity data. ravity magnetic Cost GKB / Cost
[Table 5: for each problem size $\ell$, columns $\ell$, $t$, $t_p$, times to convergence for gravity (GKB, RSVD) and magnetic (GKB, RSVD), and the ratios $\mathrm{Cost}_{\mathrm{GKB}}/\mathrm{Cost}_{\mathrm{RSVD}}$ for gravity and magnetic.]

Table 5. Timing results to convergence for inversion of gravity and magnetic potential field data for the simulations described in Table 2 without padding, for problem sizes up to $\ell = 7$. Entries with $*$ indicate that the algorithm did not converge. The last two columns give the relative cost of GKB as compared to RSVD; values greater than 1 (less than 1) indicate that the RSVD is overall faster (slower). In general, RSVD is faster for inversion of gravity data but slower for inversion of magnetic data. Still, as the problem size increases, the relative efficiency of GKB for the magnetic data decreases, and $\mathrm{Cost}_{\mathrm{GKB}}/\mathrm{Cost}_{\mathrm{RSVD}}$ increases towards 1. Results for relative errors and numbers of iterations are presented in Tables 6-7, for magnetic and gravity data, respectively.
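As a point of reference for the RSVD columns, the randomized SVD with an oversampled projected space of size $t_p$ can be sketched as below. This is a generic Halko-style sketch under stated assumptions; the paper's implementation (for example its handling of power iterations or the one-pass variant) may differ, and the function name `rsvd` is illustrative only.

```python
import numpy as np

def rsvd(A, t, t_p=None, seed=None):
    """Rank-t randomized SVD using an oversampled projected space of
    size t_p >= t; t_p = floor(1.05*t) matches the tables."""
    rng = np.random.default_rng(seed)
    if t_p is None:
        t_p = int(np.floor(1.05 * t))
    Omega = rng.standard_normal((A.shape[1], t_p))   # random sketch
    Q, _ = np.linalg.qr(A @ Omega)                   # orthonormal range basis
    B = Q.T @ A                                      # small t_p-by-n problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :t], s[:t], Vt[:t]            # truncate back to rank t
```

The dominant cost is the $t_p$ products with $A$ and $A^T$, which is why replacing these products by the 2DFFT-based operator drives the timings reported here.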
[Table 6: magnetic data; columns $\ell$, $t$, $t_p$, followed by $K$ and RE for GKB, RSVD, PGKB and PRSVD.]

Table 6. Results for inversion of magnetic potential field data for the simulations described in Table 2, without padding and with padding, and for problem sizes up to $\ell = 7$. The maximum number of iterations is set to 25 in all cases. $t_p$ is the size of the projected space for the GKB and RSVD implementations. Reported are the numbers of iterations to convergence, $K$, for convergence as defined by (17), with $K = 25$ indicating that the simulation did not converge to the given tolerance. The calculated relative errors RE for the given $K$ are also given, for both the unpadded and padded cases, respectively.
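For reference, the projection behind the GKB columns is Golub-Kahan bidiagonalization. A minimal sketch of the standard recurrence is given below; the regularization of the projected problem used in the inversion algorithm is omitted, and the function name `gkb` is illustrative only.

```python
import numpy as np

def gkb(A, b, k):
    """k steps of Golub-Kahan bidiagonalization: builds U (m x (k+1)) and
    V (n x k) with orthonormal columns and a (k+1) x k lower-bidiagonal
    matrix B such that A @ V = U @ B."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta                        # beta_1 u_1 = b
    v = A.T @ U[:, 0]                         # alpha_1 v_1 = A^T u_1
    for j in range(k):
        if j > 0:
            v = A.T @ U[:, j] - beta * V[:, j - 1]
        alpha = np.linalg.norm(v)
        V[:, j] = v / alpha
        u = A @ V[:, j] - alpha * U[:, j]     # beta_{j+1} u_{j+1}
        beta = np.linalg.norm(u)
        U[:, j + 1] = u / beta
        B[j, j], B[j + 1, j] = alpha, beta
    return U, B, V
```

Each step requires one product with $A$ and one with $A^T$, so, as with the RSVD, the per-iteration cost is dominated by the operator products that the 2DFFT accelerates.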
[Table 7: gravity data; columns $\ell$, $t$, $t_p$, followed by $K$ and RE for GKB, RSVD, PGKB and PRSVD.]

Table 7. Results for inversion of gravity potential field data for the simulations described in Table 2, without padding and with padding, and for problem sizes up to $\ell = 7$. The maximum number of iterations is set to 25 in all cases. $t_p$ is the size of the projected space for the GKB and RSVD implementations. Reported are the numbers of iterations to convergence, $K$, for convergence as defined by (17), with $K = 25$ indicating that the simulation did not converge to the given tolerance. The calculated relative errors RE for the given $K$ are also given, for both the unpadded and padded cases, respectively.