A comparison of matrix-free isogeometric Galerkin and collocation methods for Karhunen--Loève expansion
Michal Lukasz Mika, René Rinke Hiemstra, Thomas Joseph Robert Hughes, Dominik Schillinger
Abstract
Numerical computation of the Karhunen–Loève expansion is computationally challenging in terms of both memory requirements and computing time. We compare two state-of-the-art methods that claim to efficiently solve for the K–L expansion: (1) the matrix-free isogeometric Galerkin method using interpolation based quadrature proposed by the authors in [1], and (2) our new matrix-free implementation of the isogeometric collocation method proposed in [2]. Two three-dimensional benchmark problems indicate that the Galerkin method performs significantly better for smooth covariance kernels, while the collocation method performs slightly better for rough covariance kernels.
Keywords
Karhunen–Loève expansion · Galerkin · collocation · matrix-free · isogeometric analysis

Michal L. Mika · René R. Hiemstra · Dominik Schillinger
Institute of Mechanics and Computational Mechanics, Leibniz University Hannover, Appelstr. 9a, 30167 Hannover, Germany

Thomas J.R. Hughes
Institute for Computational Engineering and Science, The University of Texas at Austin, 201 East 24th Street, C0200, Austin, TX 78712-1229, USA

1 Introduction

The Karhunen–Loève (K–L) expansion decomposes a random field into an infinite linear combination of $L^2$-orthogonal functions with decreasing energy content. Truncated representations have applications in stochastic finite element analysis (SFEM) [3,4,5], proper orthogonal decomposition (POD) [6,7], and in image processing, where the technique is known as principal component analysis (PCA) [8]. All these techniques are closely related and widely used in practice [9].

Numerical approximation of the K–L expansion by means of the Galerkin or collocation method leads to a generalized eigenvalue problem:
Find $(v_k^h, \lambda_k^h) \in \mathbb{R}^N \times \mathbb{R}^+$ such that

\[ A v_k^h = \lambda_k^h Z v_k^h \quad \text{for } k = 1, 2, \ldots, M. \qquad (1) \]

This matrix problem is computationally challenging for the following reasons: (1) the matrix $A$ is dense and thus memory intensive to store explicitly; (2) every iteration of an iterative eigenvalue solver requires a backsolve of a factorization of $Z$; and (3) the assembly of $A$ is computationally expensive.¹

In this paper, we investigate and compare two state-of-the-art methods that were recently proposed to efficiently solve for the K–L expansion. The first method is the matrix-free isogeometric Galerkin method proposed by the authors in [1], which uses an advanced quadrature technique to gain high performance that is scalable with polynomial order. The second method is our new matrix-free implementation of the isogeometric collocation method proposed in [2]. As a collocation method, it requires far fewer quadrature points than a standard Galerkin method, such that the assembly of the collocation equations is simple and efficient.

¹ Formation and assembly costs for a standard Galerkin method scale as $O(N_e^2 \cdot (p+1)^{3d})$, where $N_e$ is the number of finite elements, $p$ is the polynomial degree and $d$ is the spatial dimension.
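To make the structure of problem (1) concrete, the following self-contained sketch (our own illustration, not the benchmark setup used later in this paper) discretizes a one-dimensional exponential covariance kernel with a piecewise-constant basis and solves the resulting small generalized eigenvalue problem directly with a dense solver:

```python
import numpy as np
from scipy.linalg import eigh

# 1D toy discretization of (1): piecewise-constant basis on n cells of D = [0, 1],
# exponential covariance kernel Gamma(x, x') = exp(-|x - x'| / corr_len).
n, corr_len = 200, 0.5
x = (np.arange(n) + 0.5) / n                   # cell midpoints
h = 1.0 / n                                    # cell size
Gamma = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
A = h * h * Gamma                              # A_ij ~ double integral of N_i Gamma N_j (midpoint rule)
Z = h * np.eye(n)                              # mass matrix of the piecewise-constant basis
lam, V = eigh(A, Z)                            # generalized problem A v = lambda Z v
lam, V = lam[::-1], V[:, ::-1]                 # sort eigenvalues in non-increasing order
# the eigenvalues are positive and sum (approximately) to the trace int_D Gamma(x, x) dx = 1
```

For the resolutions considered in the benchmarks below, such a dense direct solve is exactly what becomes infeasible; the matrix-free methods compared in this paper avoid forming $A$ altogether.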
This paper is structured as follows. In Section 2, we briefly review the basic aspects of the K–L expansion in the context of random field representations. In Section 3, we concisely present the two matrix-free solution methods and assess their algorithmic complexity. Three-dimensional numerical benchmark problems with comparisons in terms of accuracy and solution time are provided in Section 4. We summarize our conclusions in Section 5 and discuss future work.
2 Karhunen–Loève expansion

Consider a complete probability space $(\Theta, \Sigma, P)$, where $\Theta$ denotes a sample set of random events and $P$ is a probability measure $P : \Sigma \to [0, 1]$. Let $\alpha(\cdot, \theta) : \Theta \mapsto L^2(\mathcal{D})$ denote a random field on a bounded domain $\mathcal{D} \subset \mathbb{R}^d$ with mean $\mu(x) \in L^2(\mathcal{D})$ and covariance function $\Gamma(x, x') \in L^2(\mathcal{D} \times \mathcal{D})$. The K–L expansion of the random field $\alpha(\cdot, \theta)$ requires the solution of an integral eigenvalue problem. Consider the self-adjoint positive semi-definite linear operator $T : L^2(\mathcal{D}) \mapsto L^2(\mathcal{D})$,

\[ (T\phi)(x) := \int_{\mathcal{D}} \Gamma(x, x') \, \phi(x') \, \mathrm{d}x'. \qquad (2) \]

The eigenfunctions $\{\phi_i\}_{i \in \mathbb{N}}$ of $T$ are defined by the homogeneous Fredholm integral eigenvalue problem of the second kind,

\[ T\phi_i = \lambda_i \phi_i, \quad \phi_i \in L^2(\mathcal{D}) \quad \text{for } i \in \mathbb{N}. \qquad (3) \]

The eigenfunctions $\phi_i$ are orthonormal in $L^2(\mathcal{D})$ and the corresponding eigenvalues form a non-increasing sequence $\lambda_1 \ge \lambda_2 \ge \cdots \ge$
$0$. The K–L expansion of the random field $\alpha(\cdot, \theta)$ is given as

\[ \alpha(x, \theta) = \mu(x) + \sum_{i=1}^{\infty} \sqrt{\lambda_i} \, \phi_i(x) \, \xi_i(\theta), \qquad (4) \]

where

\[ \xi_i(\theta) := \frac{1}{\sqrt{\lambda_i}} \int_{\mathcal{D}} \left( \alpha(x, \theta) - \mu(x) \right) \phi_i(x) \, \mathrm{d}x. \qquad (5) \]

Truncating the series in (4) after $M$ terms leads to an approximation of $\alpha$ denoted by $\alpha_M$. For practical computations in the context of stochastic finite element methods [3,4,5], the truncation order $M$ is typically chosen between 20 and 30 terms [10,4]. Each term in the expansion introduces one stochastic dimension, which is an example of the curse of dimensionality.

3 Matrix-free solution methods

In this section we briefly review the matrix-free Galerkin method proposed in [1] and introduce our matrix-free implementation of the isogeometric collocation method proposed in [2]. We include an analysis of the algorithmic complexity in terms of the polynomial degree $p$ and the number of elements $N_e$ of the $d$-dimensional spatial domain $\mathcal{D}$.

In both approaches the generalized algebraic eigenvalue problem is first reformulated as a standard algebraic eigenvalue problem using standard linear algebra techniques [11]: Find $(v_k^h, \lambda_k^h) \in \mathbb{R}^N \times \mathbb{R}^+$ s.t.

\[ A' v_k' = \lambda_k^h v_k', \qquad v_k^h = C v_k' \qquad \text{for } k = 1, 2, \ldots, M. \qquad (6) \]

Here $C$ is an invertible mapping that depends on $Z$, and $A'$ can be written in terms of $A$ and $Z$.

3.1 Matrix-free isogeometric Galerkin method

A variational treatment of (3) leads to the following problem: Find $(\phi, \lambda) \in L^2(\mathcal{D}) \times \mathbb{R}^+$ s.t. $\forall \psi \in L^2(\mathcal{D})$

\[ \int_{\mathcal{D}} \left( \int_{\mathcal{D}} \Gamma(x, x') \, \phi(x') \, \mathrm{d}x' - \lambda \phi(x) \right) \psi(x) \, \mathrm{d}x = 0. \qquad (7) \]

From equation (7), the Galerkin method is obtained by replacing $\phi, \psi \in L^2(\mathcal{D})$ by finite dimensional representations $\phi^h, \psi^h \in S^h \subset L^2(\mathcal{D})$. Being posed in the variational setting, Galerkin methods inherit several advantageous properties, such as exact $L^2$ orthogonality of the numerical eigenvectors and monotonic convergence of the numerical eigenvalues [12,13].
Furthermore, powerful tools exist in the variational setting to study the stability and convergence of the method.²

With a trial space $S^h := \operatorname{span}\{N_i(x)\}_{i=1,\ldots,N}$, the Galerkin method leads to the eigenvalue problem defined in (1) with the system matrices

\[ A_{ij} := \int_{\mathcal{D}} N_i(x) \int_{\mathcal{D}} \Gamma(x, x') \, N_j(x') \, \mathrm{d}x' \, \mathrm{d}x, \qquad (8a) \]

\[ Z_{ij} := \int_{\mathcal{D}} N_i(x) \, N_j(x) \, \mathrm{d}x. \qquad (8b) \]

Alternatively, the eigenvalue problem can be solved in the standard form introduced in equation (6), where $A' := L^{-1} A L^{-\top}$ and $C := L^{-\top}$. The matrix $L$ is defined by the lower triangular matrix in the Cholesky decomposition of $Z = L L^{\top}$.

² In general, the stability and convergence analysis are challenging in the context of collocation methods.
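The reduction to the standard form (6) with $A' := L^{-1} A L^{-\top}$ and $C := L^{-\top}$ can be sketched generically with dense SciPy routines (our own illustration; the implementation in [1] never forms these matrices explicitly, but exploits their Kronecker structure):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular, eigh

def to_standard_form(A, Z):
    """Reduce A v = lam Z v to A' v' = lam v' with A' = L^{-1} A L^{-T}, Z = L L^T."""
    L = cholesky(Z, lower=True)
    Y = solve_triangular(L, A, lower=True)        # Y  = L^{-1} A
    Ap = solve_triangular(L, Y.T, lower=True).T   # A' = Y L^{-T}
    return Ap, L

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B @ B.T                                  # symmetric positive semi-definite "kernel" matrix
Z = np.eye(6) + 0.1 * np.ones((6, 6))        # symmetric positive definite "mass" matrix
Ap, L = to_standard_form(A, Z)
lam, Vp = eigh(Ap)                           # standard eigenvalue problem A' v' = lam v'
V = solve_triangular(L.T, Vp, lower=False)   # map back: v = L^{-T} v'  (i.e. C = L^{-T})
```

The two triangular solves avoid explicitly inverting $L$; the recovered pairs satisfy $A v = \lambda Z v$.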
Typically, the space $S^h$ is spanned by piecewise $C^0$-continuous polynomial functions on quadrilateral, hexahedral or simplicial elements [13]. Recently, non-uniform rational B-splines (NURBS) have been applied in the context of an isogeometric Galerkin method [14]. These methods commonly evaluate the integrals in (8) using standard numerical quadrature rules. A Gauss–Legendre numerical quadrature rule leads, however, to an algorithmic complexity of $O(N_e^2 \cdot (p+1)^{3d})$ [1], which becomes excessively expensive with the number of elements $N_e$, polynomial degree $p$ and spatial dimension $d$. Furthermore, as mentioned in the introduction, the matrix $A$ is dense and requires $O(8 \cdot N^2)$ bytes of storage in double precision arithmetic, where $N$ is the number of degrees of freedom in the trial space.

To overcome these limitations, the matrix-free Galerkin method proposed in [1] avoids storing the main system matrix $A$ and achieves computational efficiency by utilizing a non-standard trial space in combination with a specialized quadrature technique, called interpolation based quadrature. This approach requires a minimum number of quadrature points and enables application of global sum factorization techniques [15]. We sketch the main ideas of the method and refer to [1] for further details.

Let $\{B_i(\hat{x})\}_{i=1,\ldots,N}$ and $\{\tilde{B}_j(\hat{x})\}_{j=1,\ldots,\tilde{N}}$ denote two sets of tensor product B-splines of, for simplicity, uniform polynomial degree $p$. The first set is used in the definition of the trial space, whereas the second set is used in a projection of the kernel $\Gamma(x, x')$ and is a part of the interpolation based quadrature. Let $F : \hat{\mathcal{D}} \to \mathcal{D}$ be the geometric mapping from the reference domain to the physical domain. The trial space is defined as

\[ S^h := \operatorname{span}\left\{ B_i(\hat{x}) / \sqrt{\det DF(\hat{x})} \right\}_{i=1,\ldots,N}. \qquad (9) \]

The advantage of this particular choice of the trial space is that the mass matrix in (8b) has a Kronecker structure and can be factored as $Z = Z_d \otimes \cdots \otimes Z_2 \otimes Z_1$, where $\{Z_k\}_{k=1,2,\ldots,d}$ are univariate mass matrices. By leveraging this factorization, matrix-vector products with Kronecker matrices can be evaluated in nearly linear time complexity. This also holds for the matrix $L$ in the Cholesky factorization of $Z$, which is factored as $L = L_d \otimes \cdots \otimes L_2 \otimes L_1$, from which the respective inverse follows as $L^{-1} = L_d^{-1} \otimes \cdots \otimes L_2^{-1} \otimes L_1^{-1}$.

The interpolation based quadrature in combination with the choice of the trial space in (9) leads to a factorization of the matrix $A$ as

\[ A = M^{\top} \tilde{B}^{-1} J \Gamma J \tilde{B}^{-\top} M. \qquad (10) \]

Here $\Gamma := \Gamma(x_i, x_j) \in \mathbb{R}^{\tilde{N} \times \tilde{N}}$ is the covariance kernel evaluated at the Greville abscissae, $J \in \mathbb{R}^{\tilde{N} \times \tilde{N}}$ is the square root of a diagonal matrix of determinants of the Jacobian of the mapping at these points, and the matrices $\tilde{B} = \tilde{B}_d \otimes \cdots \otimes \tilde{B}_2 \otimes \tilde{B}_1 \in \mathbb{R}^{\tilde{N} \times \tilde{N}}$ and $M = M_d \otimes \cdots \otimes M_2 \otimes M_1 \in \mathbb{R}^{\tilde{N} \times N}$ are Kronecker product matrices. In fact, $\tilde{B}_k$ and $M_k$, $k = 1, 2, \ldots, d$, are univariate collocation and mass matrices, respectively, which are introduced by the interpolation based quadrature. The computation of the eigenvalues and eigenvectors requires evaluation of matrix-vector products $v' \mapsto A' v'$. This leads to a nine step algorithm presented in [1]. The matrix-vector products with the Kronecker structured matrices $L^{-\top}$, $M$, $\tilde{B}^{-\top}$ and the diagonal matrix $J$, as well as all the respective transpose operations, are performed in linear or nearly linear time complexity. The matrix-vector products with the matrix $\Gamma$ are performed in quadratic time complexity. Hence, our matrix-free algorithm scales quadratically with the dimension of the interpolation space $\tilde{N}$.
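The near-linear complexity of products with Kronecker matrices such as $Z$, $L^{-\top}$, $\tilde{B}^{-\top}$ and $M$ rests on sum factorization: the factors are contracted one tensor direction at a time instead of assembling the full product. A generic sketch (our own illustration, with hypothetical dense factor matrices):

```python
import numpy as np

def kron_matvec(As, v):
    """Evaluate (A_d kron ... kron A_1) v without forming the Kronecker product.

    As = [A_1, ..., A_d]; the factor A_k acts on the k-th tensor direction."""
    d = len(As)
    X = v.reshape([A.shape[1] for A in As][::-1])   # axes ordered (i_d, ..., i_1)
    for k, A in enumerate(As):                      # A = A_{k+1} acts on axis d-1-k
        X = np.tensordot(A, X, axes=([1], [d - 1 - k]))
        X = np.moveaxis(X, 0, d - 1 - k)            # restore the axis ordering
    return X.ravel()

# check against the explicitly assembled Kronecker product
rng = np.random.default_rng(1)
As = [rng.standard_normal((3, 3)) for _ in range(3)]
v = rng.standard_normal(27)
ref = np.kron(np.kron(As[2], As[1]), As[0]) @ v
```

For $d$ factors of size $n \times n$ and $N = n^d$, the loop costs $O(d \, n \, N)$ operations, compared with the $O(N^2)$ of a matrix-vector product with the assembled Kronecker matrix.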
We note that in this algorithm, the matrix rows of $\Gamma$ are computed on the fly, which saves memory by not explicitly storing the dense matrix $\Gamma$. Memory requirements for the remaining matrices are negligible, since they are either diagonal or Kronecker product matrices. For additional details about the matrix-free method, interpolation based quadrature and Kronecker products, we refer to [1].

3.2 Matrix-free isogeometric collocation method

In contrast to a Galerkin method, a collocation method does not treat the integral equation (3) in a variational manner. Instead, we require the discretized residual

\[ r^h(x) := \int_{\mathcal{D}} \Gamma(x, x') \, \phi^h(x') \, \mathrm{d}x' - \lambda^h \phi^h(x) \qquad (11) \]

to vanish at distinct points $x \in \mathcal{D}$. In [2], the geometry and trial spaces are discretized in terms of NURBS basis functions

\[ S^h := \{ R_i(x) \}_{i=1,\ldots,N} \qquad (12) \]

in the sense of the isoparametric approach of isogeometric analysis. In this study, we choose to collocate (11) at the Greville abscissae $\{x_i\}_{i=1,\ldots,N}$. The method is expressed concisely in matrix form (1), where the corresponding system matrices are given by

\[ A_{ij} := \int_{\mathcal{D}} \Gamma(x_i, x') \, R_j(x') \, \mathrm{d}x', \qquad Z_{ij} := R_j(x_i). \qquad (13) \]

In primal form (6), this means that $A' = Z^{-1} A$ and $C$ is the identity matrix. The matrices $A$ and $Z$ are square and, in general, not symmetric. In contrast to variational methods, where the system matrices are symmetric and positive (semi-)definite by construction, collocation methods do not ensure a real-valued eigensolution for any element size $h > 0$. For an in-depth exposition of the collocation method, we refer the reader to [12], and to [16,17] for details on the isogeometric formulation.

The matrix-free version of the collocation method is derived analogously to the matrix-free Galerkin method described above. Due to the properties of the system matrix $Z$, instead of the Cholesky decomposition employed in the Galerkin method, we use the pivoted LU decomposition, $PZQ = LU$, to arrive at the standard matrix form. We observed that without pivoting, the matrix-free collocation method suffers from numerical instabilities at polynomial orders $p > 3$. We use the pivoted LU decomposition of $Z$ to apply the inverse of $Z$ to the matrix $A$ and thus obtain $A'$. The standard algebraic eigenvalue problem is then given by $A' v' = \lambda v'$, where

\[ A' := Q U^{-1} L^{-1} P A. \qquad (14) \]

Following [1], we choose a row-wise evaluation of the coefficient vector in the standard matrix-vector product $v' \mapsto A' v'$. The optimal evaluation order and further details for each step are given in Algorithm 1.

Algorithm 1 Matrix-free evaluation of the matrix-vector product $v' \mapsto A' v'$ emerging from collocation

Input: $v_j \in \mathbb{R}^N$, $R_{jk} \in \mathbb{R}^{N \times (N_e \cdot N_q)}$, $P_{ij}, Q_{ij}, U_{ij}, L_{ij} \in \mathbb{R}^{N \times N}$, $J_k \in \mathbb{R}^{N_e \cdot N_q}$, $W_k \in \mathbb{R}^{N_e \cdot N_q}$
Output: $v_i' \in \mathbb{R}^N$
1: $y_k \leftarrow R_{jk} v_j$  ▷ Interpolation at quadrature points
2: $y_k' \leftarrow y_k \odot J_k \odot W_k$  ▷ Scaling at quadrature points
3: $z_l \leftarrow \Gamma_{lk} \, y_k'$  ▷ Kernel evaluation one row at a time
4: $v_i' \leftarrow Q_{it} U_{tr}^{-1} L_{rs}^{-1} P_{sl} z_l$  ▷ Backsolve using the LU factorization of $Z$

3.3 Algorithmic complexity

Matrix-free Galerkin method
Under the assumption of $\tilde{N} \propto N$, the formation and assembly costs are negligible compared to the matrix-vector products, which scale independently of $p$ as $O(\tilde{N}^2)$ [1]. The total cost of the method scales as $O(N_{\mathrm{iter}} \cdot \tilde{N}^2)$, where $N_{\mathrm{iter}}$ is the number of iterations of the eigenvalue solver.

Matrix-free collocation method
We are interested in the algorithmic complexity of an element-wise assembly procedure for the system matrices that arise from the collocation method. We assume that (1) $\hat{\mathcal{D}}$ has $N_e$ elements; (2) the products on every $d$-dimensional element $\square^d$ in $\hat{\mathcal{D}}$ are integrated with a quadrature rule $Q(f) := \sum_{k=1}^{N_q} w_k f(x_k)$ with $1 \le N_q \le (p+1)^d$ quadrature points; and (3) the number of collocation points $N_c$ is equal to the number of degrees of freedom $N$. The leading term in the total cost of formation and assembly arises from the cost of forming the element matrices,

\[ A_{ij}^e = \int_{\square^d} \Gamma(\hat{x}_i, \hat{x}') \, R_j(\hat{x}') \, \mathrm{d}\hat{x}' \approx \sum_{k=1}^{N_q} w_k \, \Gamma(\hat{x}_i, \hat{x}_k') \, R_j(\hat{x}_k') = C_{ik} D_{kj}, \]

where $C_{ik} = w_k \Gamma(\hat{x}_i, \hat{x}_k')$ and $D_{kj} = R_j(\hat{x}_k')$, with $i = 1, \ldots, N$ and $j = 1, \ldots, (p+1)^d$. The formation cost of $C$ and $D$ is negligible. The matrix-matrix product cost is of $O(N_c N_q (p+1)^d)$, and the cost for summation over all $N_e$ elements is of $O(N_e N_c N_q (p+1)^d)$. Now, assuming a Gauss–Legendre quadrature rule with $N_q := (p+1)^d$ quadrature points and the proportionality relationship $N_e \propto N$, a collocation method with $N_c = N$ has a leading cost of $O(N^2 (p+1)^{2d})$.

The algorithmic complexity in the matrix-free formulation is driven by the most expensive steps in Algorithm 1. In a single iteration of the eigenvalue solver, steps 1 and 3 have a complexity $O(N \cdot N_e \cdot N_q)$. The element-wise multiplication in step 2 scales linearly with the number of quadrature points, $O(N_e \cdot N_q)$. The last step scales as $O(N^2)$. Evidently, steps 1 and 3 depend on the number of quadrature points. Since $N_e \cdot N_q \ge N$, they determine the overall cost of the method.
Assuming a Gauss–Legendre quadrature rule with $N_q := (p+1)^d$ quadrature points in each element and $N_e \propto N$, the leading cost of a single iteration of the eigenvalue solver is $O(N^2 (p+1)^d)$. Hence, the total cost of the matrix-free isogeometric collocation method scales as $O(N_{\mathrm{iter}} \cdot N^2 (p+1)^d)$, where $N_{\mathrm{iter}}$ is the number of iterations of the eigenvalue solver.

Comparison
Compared to the matrix-free Galerkin method with interpolation based quadrature, the collocation method scales unfavourably with the polynomial degree. Furthermore, due to the lack of Kronecker structure, it is necessary to compute the pivoted LU decomposition of the full matrix $Z$. The computational cost of this factorization increases with $N$ as well as $p$, which is due to an increasing bandwidth of the matrix $Z$.

Remark 1
If the trial space in the collocation method is based on tensor product B-splines instead of NURBS, then the matrix $Z$ is also a Kronecker product matrix, alleviating the disadvantage at large $N$ and $p$.

4 Numerical benchmarks

In this section, we compare the accuracy and efficiency of the matrix-free isogeometric Galerkin and collocation methods. In [1], it was shown that the proposed
Galerkin method performed especially well in the case of a smooth covariance kernel. For rough kernels, such as the $C^0$ exponential kernel, the interpolation based quadrature performed suboptimally.

In our study, we benchmark both methods for two kernels of different smoothness and appropriate refinement strategies of the spaces involved: (1) the exponential kernel together with $h$-refinement, and (2) the Gaussian kernel together with $k$-refinement. In both variants, the solution space is equal for the Galerkin and collocation methods. The interpolation space used in the Galerkin method is defined on the same mesh as the solution space, but its continuity is adapted in accordance with the remarks made in [1]. All computations are performed sequentially on a laptop machine with an Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz and 2x16 GB of DDR4 2666MHz RAM. Our reference solution is the standard isogeometric Galerkin solution computed on the finest possible mesh, with a runtime of roughly 17 hours, tabulated in [1].

Fig. 1: Benchmark geometry of a half-cylinder with dimensions $b = 0.5$, $L = 10$, $r = 8$, $R = 10$ and $H = 15$. The correlation length $bL = 5$ is used throughout all cases.

4.1 Exponential covariance kernel

In Example 1, we compare the performance with respect to $h$-refinement, assuming an exponential kernel on the half-cylindrical domain shown in Figure 1. The polynomial order in each parametric direction is $p = 2$. We choose a tensor product Gauss–Legendre quadrature rule with $(p+1)^3$ points per element of the domain in the collocation method. In accordance with remarks made in [1], the continuity of the interpolation space of the Galerkin method at the element interfaces is reduced to $C^0$. Furthermore, at element interfaces where the geometry is $C^0$, the interpolation space of the Galerkin method is set to $C^{-1}$.

Fig. 2: First, second, fourth and sixth eigenfunctions (Example 1, Case 1).

Fig. 3: Line plot in the circumferential direction at the mid-planes of the eigenfunctions in Figure 2. Line-width decreases with increasing mode number.

Our comparative investigation is based on five different resolution cases with respect to the characteristic size $h$ of the solution and interpolation mesh. Our specific choices of mesh size and number of degrees of freedom in the interpolation and solution spaces are summarized in Table 1.

For Case 1, we visualize the first, second, fourth and sixth eigenfunctions computed by both methods, plotted in Figure 2 on the half-cylinder domain. Figure 3 illustrates that already for the coarsest resolution, both methods produce results that are practically indistinguishable from each other when plotted along a selected cut line.

Table 1: Mesh, solution space and interpolation space details in Example 1 (Cases 1–5): mesh size $h$ of the solution and interpolation mesh, number of degrees of freedom (dof) $N$ in the solution space, and number of dof $\tilde{N}$ in the interpolation space (IBQ-Galerkin only).

For a quantitative comparison, let us introduce a relative eigenvalue error $\varepsilon_i$ with respect to the reference solution as

\[ \varepsilon_i := \varepsilon(\lambda_i^{\mathrm{ref}}, \lambda_i^h) := \frac{|\lambda_i^{\mathrm{ref}} - \lambda_i^h|}{\lambda_i^{\mathrm{ref}}}, \qquad (15) \]

as well as a mean relative eigenvalue error $\bar{\varepsilon}$ given by

\[ \bar{\varepsilon} := \frac{1}{M} \sum_{i=1}^{M} \varepsilon_i = \frac{1}{M} \sum_{i=1}^{M} \frac{|\lambda_i^{\mathrm{ref}} - \lambda_i^h|}{\lambda_i^{\mathrm{ref}}}. \qquad (16) \]

Table 2: Color-coding to differentiate between the five different cases and the two different methods (Galerkin vs. collocation, Cases 1–5).
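The error measures (15) and (16) translate directly into a few lines of code; a minimal sketch with hypothetical eigenvalue data:

```python
import numpy as np

def relative_eigenvalue_errors(lam_ref, lam_h):
    """Per-mode relative errors eps_i, eq. (15), and their mean, eq. (16)."""
    lam_ref = np.asarray(lam_ref, dtype=float)
    lam_h = np.asarray(lam_h, dtype=float)
    eps = np.abs(lam_ref - lam_h) / lam_ref
    return eps, eps.mean()

# hypothetical reference and computed eigenvalues for M = 3 modes
eps, eps_mean = relative_eigenvalue_errors([2.0, 1.0, 0.5], [1.9, 1.01, 0.48])
```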
To enable a concise illustration with respect to the five cases defined in Table 1, we define the color coding shown in Table 2. Blue indicates results obtained with the Galerkin method, red indicates results obtained with the collocation method. The change in shading from light to full color indicates the increasing mesh resolution from Case 1 to Case 5.

Figure 4 depicts relative accuracy versus computational time of the iterative eigensolver for the first twenty eigenvalues, measured against the reference solution. We observe that the collocation method performs roughly twice as fast at the same level of accuracy. In Figure 5, we present a detailed assessment of the accuracy of the first five eigenvalues. In addition, we provide an alternative visualization of the timings and the error in the first twenty eigenvalues.

Fig. 4: Mean relative eigenvalue error computed with the first 20 eigenvalues versus the eigensolver time (Example 1, exponential kernel).

Fig. 5: Error of the first five eigenvalues plotted for Cases 1–3 and corresponding timings and accuracy over the first 20 eigenvalues (Example 1, exponential kernel).

4.2 Gaussian covariance kernel

In Example 2, we compare both methods for a smooth Gaussian covariance kernel. Since the integrand is smooth, we expect that optimally smooth approximation spaces work best. Therefore, we fix the polynomial order $p$ and refine the approximation spaces with $C^{p-1}$ continuity between elements until a target mesh size of 2.857 is reached ($k$-refinement). The resulting five different cases are summarized in Table 3.

Table 3: Mesh, solution space and interpolation space details in Example 2 (Cases 1–5): polynomial order $p$ of the solution and interpolation space, number of degrees of freedom (dof) $N$ in the solution space, and number of dof $\tilde{N}$ in the interpolation space (IBQ-Galerkin only).

Comparing Case 1 in Example 1 with Case 1 in Example 2, we find that the number of degrees of freedom in the interpolation space is smaller. This is due to the increased continuity at element interfaces of the interpolation space of the Galerkin method. This trend is also characteristic for $k$-refinement and is observable in the remaining Cases 2–5.

We resort again to the color coding of Table 2 to concisely differentiate between the five different resolutions and the two methods. Figure 6 plots the mean relative accuracy of the first twenty eigenvalues versus the eigensolver timings. It is evident that for the smooth Gaussian kernel, the Galerkin method outperforms the collocation method by more than one order of magnitude. Furthermore, in line with the complexity analysis presented in Section 3.3, we observe that the performance gap increases with increasing polynomial order. Following the scheme of Figure 5, we provide a more detailed account of the approximation accuracy of the first five eigenvalues in Figure 7.

Fig. 6: Mean relative eigenvalue error computed with the first 20 eigenvalues versus the eigensolver time (Example 2, smooth Gaussian kernel).

Fig. 7: Error of the first five eigenvalues plotted for Cases 1–3 and corresponding timings and accuracy over the first 20 eigenvalues (Example 2, smooth Gaussian kernel).
5 Conclusions

In this paper, we compared accuracy versus computational time of two state-of-the-art isogeometric discretization methods for the numerical approximation of the truncated Karhunen–Loève expansion. The first method is the matrix-free isogeometric Galerkin method proposed by the authors in [1]. It achieves its computational efficiency by combining a non-standard trial space with a specialized quadrature technique called interpolation based quadrature. This method requires a minimum of quadrature points and relies heavily on global sum factorization. The second method is our new matrix-free version of the isogeometric collocation method proposed in [2]. This method achieves its computational performance by virtue of a low number of point evaluations at collocation points.

On the one hand, our comparative study showed that for a $C^0$-continuous exponential kernel, the matrix-free collocation method was about twice as fast as the Galerkin method at the same level of accuracy. On the other hand, for a smooth Gaussian kernel, the matrix-free Galerkin method was roughly one order of magnitude faster than the collocation method at the same level of accuracy. Furthermore, the computational advantage of the Galerkin method over the collocation method increases with increasing polynomial degree. These results are not surprising, since it was already shown in [1] that interpolation based quadrature scales virtually independently of the polynomial degree. In our study, we also illustrated via complexity analysis that the matrix-free collocation method scales unfavorably with polynomial order. The suboptimal accuracy of the interpolation based quadrature for rough kernels is also known and was already discussed by the authors in [1].
Besides the aspect of computational performance, we also showed that both methods are highly memory efficient by virtue of their matrix-free formulation.

As for future work, the advantageous properties inherited by the Galerkin method, such as symmetric, positive (semi-)definite system matrices, monotonic convergence of the solution and the availability of an established mathematical framework for stability and convergence, deserve a more detailed theoretical discussion with regard to the interpolation based quadrature method. A generalized accuracy study and more numerical benchmarks with existing methods are desirable as well.
Acknowledgements
D. Schillinger gratefully acknowledges funding from the German Research Foundation (DFG) through the Emmy Noether Award SCH 1249/2-1.
References
1. M.L. Mika, T.J.R. Hughes, D. Schillinger, P. Wriggers, and R.R. Hiemstra. A matrix-free isogeometric Galerkin method for Karhunen–Loève approximation of random fields using tensor product splines, tensor contraction and interpolation based quadrature. arXiv:2011.13861 [cs], November 2020.
2. R. Jahanbin and S. Rahman. An isogeometric collocation method for efficient random field discretization. International Journal for Numerical Methods in Engineering, 117(3):344–369, January 2019.
3. A. Keese. A Review of Recent Developments in the Numerical Solution of Stochastic Partial Differential Equations (Stochastic Finite Elements). Braunschweig, Institut für Wissenschaftliches Rechnen, 2003.
4. G. Stefanou. The stochastic finite element method: Past, present and future. Computer Methods in Applied Mechanics and Engineering, 198:1031–1051, 2009.
5. B. Sudret and A. Der Kiureghian. Stochastic finite element methods and reliability: a state-of-the-art report. Berkeley, Department of Civil and Environmental Engineering, University of California, 2000.
6. K. Lu, Y. Jin, Y. Chen, Y. Yang, L. Hou, Z. Zhang, Z. Li, and C. Fu. Review for order reduction based on proper orthogonal decomposition and outlooks of applications in mechanical systems. Mechanical Systems and Signal Processing, 123:264–297, May 2019.
7. M. Rathinam and L.R. Petzold. A New Look at Proper Orthogonal Decomposition. SIAM Journal on Numerical Analysis, 41(5):1893–1925, January 2003.
8. I.T. Jolliffe and J. Cadima. Principal component analysis: a review and recent developments. Philosophical Transactions of the Royal Society A, 374(2065):20150202, April 2016.
9. Y.C. Liang, H.P. Lee, S.P. Lim, W.Z. Lin, K.H. Lee, and C.G. Wu. Proper orthogonal decomposition and its applications–Part I: Theory. Journal of Sound and Vibration, 252(3):527–544, May 2002.
10. M. Eiermann, O.G. Ernst, and E. Ullmann. Computational aspects of the stochastic finite element method. Computing and Visualization in Science, 10(1):3–15, February 2007.
11. Y. Saad. Numerical Methods for Large Eigenvalue Problems. Number 66 in Classics in Applied Mathematics. Society for Industrial and Applied Mathematics, Philadelphia, revised edition, 2011.
12. K.E. Atkinson. The Numerical Solution of Integral Equations of the Second Kind. Cambridge University Press, 1st edition, June 1997.
13. R.G. Ghanem and P.D. Spanos. Stochastic Finite Elements: A Spectral Approach. Springer New York, New York, NY, 1991.
14. S. Rahman. A Galerkin isogeometric method for Karhunen–Loève approximation of random fields. Computer Methods in Applied Mechanics and Engineering, 338:533–561, August 2018.
15. A. Bressan and S. Takacs. Sum factorization techniques in Isogeometric Analysis. Computer Methods in Applied Mechanics and Engineering, 352:437–460, August 2019.
16. F. Auricchio, L. Beirão da Veiga, T.J.R. Hughes, A. Reali, and G. Sangalli. Isogeometric collocation methods. Mathematical Models and Methods in Applied Sciences, 20(11):2075–2107, November 2010.
17. D. Schillinger, J.A. Evans, A. Reali, M.A. Scott, and T.J.R. Hughes. Isogeometric collocation: Cost comparison with Galerkin methods and extension to adaptive hierarchical NURBS discretizations.